The key to effective data center management is seeing the complete picture: knowing what you have, where everything is, and how it's all connected. This principle sounds simple enough, but putting it into practice in your own enterprise is another matter. Implementation raises a multitude of important questions for data center planners. How important are asset management tools and practices in modern data centers? Where does asset management fit with overall system management schemes? What are the problems or limitations with asset management tools or processes today? What's the ROI or other payoff, and is it worth the implementation trouble? And where is data center asset management headed in the future? We turned to the SearchDataCenter.com advisory board to help answer these questions and give insight into the asset management market.
Matt Stansberry, director of content and publications, Uptime Institute
I spoke with David Humphrey and Rich Van Loo, Uptime Institute Professional Services consultants, who both survived asset management software implementations in their previous jobs as data center managers. They said few people are ever happy with these tools.
Asset management tools are often part of some other system -- maintenance management software, a systems management framework, real estate management or building management -- rather than tools purpose-built for the data center. As a result, they take a lot of effort to implement. With data center infrastructure management (DCIM) tools, vendors are getting closer to providing a complete picture of what you need to track, but they're not quite there yet.
Humphrey and Van Loo said asset management software vendors can come in and show you all the reports their tools can generate, but they don't tell you it can take five man-years of effort to populate those reports. A lot of data center managers haven't sat down and figured out what information they want out of the systems. How are you using the reports? What's the benefit to your operation? Why are you generating this data if you're not going to use it?
Humphrey and Van Loo recommended that data center managers start with an Excel spreadsheet for a year or two so the operations staff can figure out exactly what data they want to keep, and then they can move to one of the software tools if necessary.
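The spreadsheet-first approach can be sketched in code. The column names below are purely illustrative -- the whole point of the interim spreadsheet phase is for operations staff to discover which fields they actually need before committing to a commercial tool:

```python
import csv
from io import StringIO

# Hypothetical columns; real shops will add and drop fields over the
# first year or two as they learn what data they actually use.
FIELDS = ["asset_tag", "hostname", "rack", "u_position", "model",
          "serial", "switch_port", "owner", "in_service_date"]

def write_inventory(rows, fh):
    """Write asset records to a CSV file handle (opens cleanly in Excel)."""
    writer = csv.DictWriter(fh, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)

def read_inventory(fh):
    """Read asset records back as a list of dicts."""
    return list(csv.DictReader(fh))

# Round-trip one example record through an in-memory "file".
buf = StringIO()
write_inventory([{"asset_tag": "A-0001", "hostname": "db01",
                  "rack": "R12", "u_position": "24", "model": "X4170",
                  "serial": "SN123", "switch_port": "sw3:ge-0/0/7",
                  "owner": "dba-team", "in_service_date": "2011-04-01"}], buf)
buf.seek(0)
assets = read_inventory(buf)
```

Because CSV is the lowest common denominator, the same file can later be imported into whichever asset management package the team eventually chooses.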
Bill Bradford, Unix/Solaris expert and freelancer
A modern data center without some sort of asset management, database or other tracking system is a mess. You need to know exactly where a system is, how it's connected to the rest of the network, what switch ports it's using and so forth. The best asset management tools that I've seen have been homemade; the commercial offerings I've had to work with have been bloated and slow, with ridiculous system requirements and price tags.
The most important thing about an asset management system is ease of use. People shouldn't have to fight with the system to use it -- that just leads to noncompliance, systems not being put in the inventory or database properly, and a mess down the road when you end up with systems that have no records.
I would love to see some sort of industry standard for off-the-shelf servers where a barcode label could be scanned that listed a system's manufacturer, serial number, system build ID/service tag, Ethernet MAC addresses and possibly a short configuration summary.
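No such industry standard exists today, but a label payload along those lines is easy to imagine. The sketch below assumes a hypothetical pipe-delimited key=value format that would fit in a Code 128 or QR symbol; the field keys are invented for illustration:

```python
# Hypothetical label payload: pipe-delimited key=value pairs.
# Keys (MFR, SN, ST, MAC, CFG) are illustrative, not a real standard.
def encode_label(manufacturer, serial, service_tag, macs, summary=""):
    """Pack asset identity into a single scannable string."""
    fields = {"MFR": manufacturer, "SN": serial, "ST": service_tag,
              "MAC": ",".join(macs), "CFG": summary}
    # Omit empty fields to keep the barcode payload short.
    return "|".join(f"{k}={v}" for k, v in fields.items() if v)

def decode_label(payload):
    """Unpack a scanned payload back into a field dict."""
    return dict(pair.split("=", 1) for pair in payload.split("|"))
```

A receiving-dock scanner could feed `decode_label` output straight into the inventory database, eliminating the manual data entry that makes records go stale.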
Robert Crawford, lead systems programmer and mainframe columnist
Asset management is very relevant in the mainframe world where there are fewer, more expensive pieces [of equipment to keep track of]. Knowing the software inventory is especially important, as a single processor upgrade can end up costing hundreds of thousands of dollars. Asset management also plays into other system management disciplines, such as change control and problem management (ticket systems). Knowing what you have on the floor helps shops get to the root cause and measure outage pain. An asset inventory rolled into a solid configuration management database (CMDB) makes ascertaining the scope and impact of change much easier and automatic.
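The scope-and-impact idea can be illustrated with a toy dependency graph. Assuming a hypothetical CMDB that records which assets depend on which, impact analysis is just a graph traversal (the asset names below are invented):

```python
from collections import deque

# Toy CMDB relationships: asset -> assets that depend on it directly.
# Illustrative only; a real CMDB schema would be far richer.
DEPENDENTS = {
    "pdu-2":   ["rack-12"],
    "rack-12": ["db01", "web03"],
    "db01":    ["order-app"],
    "web03":   ["order-app"],
}

def impact_of_change(asset):
    """Return every asset transitively affected by changing `asset`."""
    seen, queue = set(), deque([asset])
    while queue:
        node = queue.popleft()
        for dep in DEPENDENTS.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen
```

Here a change ticket against "pdu-2" would automatically surface the rack, both servers and the application they host -- the kind of answer that takes hours to assemble by hand without a CMDB.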
Most current asset management tools rely on manual data entry. With this limitation, the inventory is only as good as the last person who may (or may not) have remembered to update it. There are tools with "auto-discovery" features that work well enough for distributed platforms, but always seem to fall short on the mainframe. I think the biggest ROI we've seen is from a roll-your-own (RYO) tool that helps us predict how much our software costs may increase for a hardware upgrade.
The ideal asset management tool would be able to roll through a system, distributed or mainframe, build a list of assets and understand the relationships between them. The information in the asset inventory should be exportable to a CMDB for a disciplined change management process. It would be nice to include lifecycle management that flags any asset that may become obsolete. In addition, I’d like to see a means of detecting assets that have overlapping functionality so product owners can get rid of redundant tools, along with some sort of predictive cost projections based on configuration changes.
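The lifecycle-flagging idea is simple to prototype against an inventory that records vendor end-of-support dates. This is a minimal sketch with invented records; `eos` is assumed to hold the end-of-support date:

```python
from datetime import date

# Illustrative inventory records; `eos` is the vendor end-of-support date.
ASSETS = [
    {"tag": "A-0001", "model": "X4170", "eos": date(2012, 6, 30)},
    {"tag": "A-0002", "model": "R710",  "eos": date(2015, 1, 1)},
]

def flag_obsolete(assets, today, warn_days=180):
    """Return tags of assets whose end-of-support date falls within
    `warn_days` of `today` (or has already passed)."""
    return [a["tag"] for a in assets
            if (a["eos"] - today).days <= warn_days]
```

Run periodically, a report like this gives product owners lead time to budget replacements instead of discovering obsolescence during an outage.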
Michael Coté, analyst, RedMonk
Asset management is just as important as ever. The need to know what you have and track it for management, expense and other legal and regulation reasons has not gone away. The only slight change is that you might be tracking people's own assets -- if they're bringing in personal devices like iPhones and iPads [into corporate networks] -- to guard against liability or compliance issues.
The better asset management systems I see act as the more useful and economically viable CMDB -- the huge database of all the "stuff" IT has to care about. The better ones do as much automatic discovery and updating as possible, keeping things relatively simple. I continue to like the approach Spiceworks takes, which essentially treats asset management as the core functionality that everything else builds from.
The problem with [a lot of] tools is that they're "boring" systems that no one wants to use. But they get used as a foundation to make important IT decisions. Every piece of IT management software known to man has promised to automate things like discovery and updating, but it's still a difficult task. And the old challenge of integrating with the rest of your tools and process [still] exists.
Today's challenges also include profiling cloud-based services (e.g., storage or Salesforce.com accounts) and getting the automation code to "discover" and update those services. [In the cloud services sector], the value [of asset management tools] is pretty high, as you'll want to keep track of how many cloud resources you're consuming and (hopefully) cut them down if they're unused. Just as we have over-capacity buying on-premises, over-capacity buying in the cloud will be (and probably is) running rampant.
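Spotting that over-capacity is a reporting problem once usage data is in hand. The sketch below assumes hypothetical usage records (not any real cloud provider's API) and flags resources that look reclaimable:

```python
# Illustrative usage records -- not drawn from any real provider API.
# `attached` is False for orphaned resources such as detached volumes.
USAGE = [
    {"resource": "vm-17",  "avg_cpu_pct": 1.2,  "attached": True},
    {"resource": "vol-42", "avg_cpu_pct": None, "attached": False},
    {"resource": "vm-03",  "avg_cpu_pct": 64.0, "attached": True},
]

def reclaim_candidates(usage, idle_cpu_pct=5.0):
    """Flag resources that look unused: anything detached, plus
    instances averaging below the idle CPU threshold."""
    candidates = []
    for rec in usage:
        if not rec["attached"]:
            candidates.append(rec["resource"])
        elif rec["avg_cpu_pct"] is not None and rec["avg_cpu_pct"] < idle_cpu_pct:
            candidates.append(rec["resource"])
    return candidates
```

Even a crude threshold like this turns a monthly cloud bill from a surprise into a to-do list of resources to shut down.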
What I'd really like to see more of is aggregating data about assets among users -- sort of like automated reviews on Amazon -- collected data about assets across thousands of data centers and aggregated analysis on it. For example, how many times does this particular network router need to be fixed? What are the typical network speeds that a given Internet provider gives you and 50 of your peers in the region? For some reason, companies don't share this sort of detail, making it ultra-valuable (and expensive) to those who get it. Establishing systems to pool these “benchmarks” would be really helpful for IT buyers and managers.
The truth is out there
Asset management tools are a huge part of any modern enterprise data center -- a cornerstone of systems management and perhaps the only practical way to keep a handle on hardware and software in production. But data center asset management cannot be taken lightly. The tools are challenging, expensive and can be improved upon. Organizations should approach asset management the same way as any other major technology -- with skeptical anticipation and an extensive evaluation period.
Stephen J. Bigelow, a senior technology editor in the Data Center and Virtualization Media Group at TechTarget, has more than 15 years of technical writing experience in the PC/technology industry. He holds a bachelor of science in electrical engineering, along with CompTIA A+, Network+, Security+ and Server+ certifications, and has written hundreds of articles and more than 15 feature books on computer troubleshooting, including Bigelow's PC Hardware Desk Reference and Bigelow's PC Hardware Annoyances. Write to him at [email protected]