Amid a sea of DevOps monitoring tools and data, the true challenge is to know the right questions to ask.
As DevOps matures, creating and administering application delivery pipelines remains the primary focus. However, an equally crucial part of establishing a DevOps culture is providing fast feedback to developers on how their applications perform in production.
There's no shortage of tools that provide feedback -- from metrics monitoring utilities, such as Prometheus, New Relic, Datadog and Sysdig, to a multitude of log monitoring tools, including products from Sumo Logic, Loggly and Splunk, among many others.
But what users ultimately need, according to DevOps experts, is to extract useful information from the data supplied by DevOps monitoring tools.
"It can be a bit of a challenge going from a customer request to how a developer needs to fix something," said Chris Moyer, vice president of technology with ACI Information Group, a web content aggregator based in New York, and a TechTarget contributor.
One problem is that only a small percentage of customers actually complain or give any kind of feedback. Another is that there can be a big gap between those scattered customer complaints and the vast array of monitoring data available to the DevOps organization.
"The big thing is, with developers, you need something actionable, not something very generic like, 'The site is slow,'" Moyer added. "We're in a distributed system, and that could be for any number of reasons -- tracking down exactly what's going on can be complicated."
'An art and a science'
New Relic and Datadog are among the more popular metrics-based tools for DevOps monitoring. Each lets an organization create customized metrics to suit its specific applications. That flexibility can be a double-edged sword, however.
"It solves the tools part of the problem," said Elliot Murphy, CEO of Kindly Ops LLC, a managed DevOps service based in Portland, Maine. While his customers' choice of logging tools varies, he said they unanimously prefer Datadog to its competitors, mostly because of ease of use and out-of-the-box integrations with cloud service providers, such as Amazon, and some Linux apps, such as Celery, RabbitMQ and Postgres.
But all that flexibility "accelerates you to the business part of the problem," Murphy said, which is "a really hard conversation about, 'What's a meaningful question to ask about our users' behavior or about the performance of the business process?'"
This is more of a thought problem than one that can be solved simply by having the right tool.
"That's a whole art and a science in and of itself," Murphy said. "OK, you got people to your website and you served it to them really fast -- now what?"
For many companies, the most meaningful metric for the success of the business is user engagement, but the best indicators of user engagement can vary widely by company. For example, at a file-sharing service, the best predictor of user engagement might be whether users upload files. A company processing prescription refills will have different questions to ask about user engagement than an e-commerce company trying to get users to buy novelty items on a website.
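Engagement metrics like these are typically emitted from application code as custom counters. As a minimal sketch, the snippet below sends a counter in the DogStatsD text format (the protocol the Datadog agent listens for) over UDP using only the Python standard library; the metric name `files.uploaded`, the tag, and the agent address are hypothetical, with the port set to the DogStatsD default of 8125.

```python
import socket

def emit_counter(metric, value=1, tags=None, host="127.0.0.1", port=8125):
    """Send a counter in DogStatsD text format over UDP (fire-and-forget).

    Metric and tag names here are illustrative; host/port assume the
    agent's default local address, which may differ per deployment.
    """
    payload = f"{metric}:{value}|c"
    if tags:
        payload += "|#" + ",".join(tags)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        # UDP is connectionless, so this never blocks the request path
        sock.sendto(payload.encode("utf-8"), (host, port))
    return payload

# e.g., a file-sharing service counting uploads as an engagement signal:
# emit_counter("files.uploaded", tags=["plan:free"])
```

Because the datagram is fire-and-forget, instrumentation like this adds negligible latency to the request path, which is why the StatsD pattern is common for business-level counters as well as infrastructure metrics.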
Murphy also cautioned against monitoring for the sake of monitoring. Application performance monitoring, for example, is best used when engineers are actively working to improve the performance of an application, rather than for day-to-day troubleshooting.
"If I'm not having anybody actually work on changing the performance of the application, then I don't need that microscope," Murphy said.
Better monitoring mousetraps
In the meantime, new companies that claim a better monitoring approach are popping up.
OpsDataStore Inc., which came out of stealth last month, aims to consolidate monitoring data into a single, unified database from which it creates a topology map of each individual transaction that takes place in the infrastructure. The company's product then makes that data available to big data visualization tools, such as Tableau, and business intelligence utilities, such as Qlik.
Customers could just hook up Tableau or Qlik to existing monitoring data repositories, but without the unified database, it can be difficult to correlate data across data stores, said OpsDataStore's CEO Bernd Harzog.
Another company that looks to apply data science to DevOps monitoring data, OpsClarity Inc., also recently issued its first product, which aims to combine big data analyses, such as anomaly detection, with traditional IT root-cause analysis. OpsClarity will compete with established vendors that also offer big data analysis on monitoring data, such as AppDynamics, Dynatrace and AppNeta. OpsDataStore, meanwhile, has chosen to partner with these APM vendors.
Existing log monitoring services, such as Splunk's platform, already apply data science techniques to their data stores as well.