Does DCIM Deployment Need to Be as Complex as It Sometimes Seems?
While at a recent data center conference, I was struck by the number of conversations I had with data center managers who had looked at a couple of DCIM solutions and decided DCIM was ‘too hard’. This always frustrates me. Enterprises typically track their infrastructure in half a dozen or more spreadsheets and internal databases, which I know from personal experience is really (really) hard to do well, yet these managers have still concluded DCIM is too complicated for their site or organization. There is something wrong with this picture. How can it appear easier to juggle multiple disparate systems, with all their problems of data inaccuracy, data silos, lack of history, and so on, than to maintain a consolidated data set that supports capacity management, change management, and all the good analysis a joined-up data set makes possible?
Is the issue that some DCIM solutions are now so complicated you need a “DCIM degree” to operate them? Perhaps the issue is the industry messaging, where we sometimes hear “it’s not DCIM unless you do all ten of these things on day one”. I’m not sure of the reason, but I am positive that “it’s too complicated” is not the conclusion the DCIM industry should be driving people to. As the Cormant CTO once said to me, “the hardest part of software development is making our application easy to use.”
As well as avoiding unnecessary complexity, it’s important to realize that DCIM is a journey, not a one-time activity. What do I mean by that? Take a customer we have worked with for a few years. They started out with just equipment, racks, and locations, which gave them the ability to manage rack capacity, plan changes, and, by tracking equipment, reduce the cost and time of repairs. After bedding in their processes, the customer decided to add connectivity management so they could track redundant power and data paths. More recently they have been gathering real-time power and environmental data from CRAC to rack, and are about to embark on a linkage to their CMDB. Interestingly, the CMDB group now wants to pull data from the Cormant DCIM tool because they recognize it holds the most accurate physical data; this was not the case at the start of the company’s DCIM journey. I’m not saying the order this customer took is the ‘right’ journey for everyone, far from it. The point is that they chose to fix their most pressing problems first and then, as needed, worked on further improvements, and so on.
Another part of the journey that I think is very necessary is opening up the DCIM data to as many users as possible. In fact, we go so far as to ask why there should not be a link that anyone in the organization can use to see what’s in the data center (with appropriate security, of course). This sort of initiative shows a huge amount of confidence in the data quality, reduces queries to the data center teams, and demonstrates just how valuable good, simple-to-use DCIM really can be to an organization.
I imagine some people will read this and say “what about analytics, what about modelling, what about 3D, what about connectors?” All of these are potentially important to your journey. But if you get blinded by flashy features and try to do everything at once instead of starting with a core set of problems, your DCIM implementation will cost far more than you expected and deliver less than you wanted.
– By: Paul Goodison