Monitoring tools flood operations teams with numbers and stats that can overwhelm even the most seasoned IT professional. In these cases, the data collection process drowns out what that data actually communicates about an IT environment.
Reactive vs. proactive monitoring
Reactive data comes from monitoring tools that collect real-time statistics and values. This data ranges from CPU or memory usage to service availability. Reactive monitoring tools and dashboards are critical when the operations staff needs quick insight into a large environment to locate issues and decide how to address them. Tools such as Quest and SolarWinds enable ops teams to evaluate, and react to, workload performance in near-real time. These tools become the heartbeat of an IT environment, with 24/7 monitoring.
While reactive monitoring tools are fairly easy to implement and maintain, they're most useful after an issue has actually occurred. They capture real-time data and often store that data to help IT teams spot trends -- but this does not qualify as proactive monitoring. Instead, reactive tools that support historical trend analysis sometimes serve as data repositories for more proactive tools.
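At its core, a reactive check is just a threshold test against live metrics. The sketch below illustrates the idea; the sampled values and the 90% CPU threshold are illustrative assumptions, not from any particular product.

```python
CPU_THRESHOLD = 90.0  # percent; an illustrative alert threshold

def check_cpu(samples, threshold=CPU_THRESHOLD):
    """Return an alert message for each sample that breaches the threshold."""
    alerts = []
    for ts, value in samples:
        if value > threshold:
            alerts.append(f"{ts}: CPU at {value:.1f}% exceeds {threshold:.0f}%")
    return alerts

# Simulated real-time samples: (timestamp, cpu_percent)
samples = [("10:00", 42.0), ("10:01", 95.5), ("10:02", 88.0)]
for alert in check_cpu(samples):
    print(alert)  # → 10:01: CPU at 95.5% exceeds 90%
```

Note that the alert only fires once the spike has already happened -- which is exactly why this style of check is reactive.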
Proactive monitoring combines reactive monitoring tools with other capabilities to help IT teams prevent issues before they occur. For example, proactive monitoring could enable ops teams to spot and address a memory leak before it causes an application or server to crash.
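The memory-leak case can be sketched simply: instead of alerting when memory is already exhausted, fit a trend to recent samples and estimate how long until capacity runs out. This is a minimal illustration of the idea, assuming hourly memory samples and a hypothetical 4 GB ceiling; real proactive tools use far more sophisticated models.

```python
def hours_until_exhaustion(samples, capacity_mb):
    """Least-squares slope over (hour, used_mb) samples; None if usage isn't growing."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in samples) / \
            sum((x - mean_x) ** 2 for x, _ in samples)
    if slope <= 0:
        return None  # usage is flat or shrinking -- no leak suspected
    current = samples[-1][1]
    return (capacity_mb - current) / slope

# Memory climbing ~100 MB per hour toward a 4096 MB ceiling
samples = [(0, 1000), (1, 1100), (2, 1200), (3, 1300)]
print(hours_until_exhaustion(samples, 4096))  # → 27.96
```

With roughly 28 hours of headroom predicted, the ops team can restart or patch the leaking service on its own schedule, rather than after a crash.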
The difference between reactive and proactive monitoring tools is that the former must be triggered by an event, or issue, before they prompt the IT team to take action. Proactive monitoring tools, on the other hand, enable teams to uncover data abnormalities that push them to act without a triggering event. As a result, the proactive model is ideal for modern data centers and IT environments that can't afford any downtime.
The challenge of being proactive
Even though proactive monitoring tools let IT teams address issues before they affect the customer, several elements discourage more companies from using them.
A true proactive monitoring tool gathers data from many sources to create a complete picture of the IT environment; data from only one or two sources isn't enough to avert problems. This poses the biggest challenge with these kinds of monitoring tools: They require high volumes of data. Simple performance stats won't suffice. Instead, these tools demand data from a range of sources, such as application and OS logs, network traffic and customer experience. While this might seem excessive, the more data points and sources that are accessible, the more accurate proactive monitoring tools will be.
Data collection isn't the only challenge with proactive monitoring tools -- that data must be correlated and collated into something that an admin can understand. This complex process typically involves machine learning and AI to analyze and find relationships in data, which means it doesn't tend to occur in real time. In a large IT environment, proactive monitoring tools might take hours or even days to complete an analysis. To support these complex analytics processes, proactive monitoring tools also require more resources -- as well as a bigger budget to support those resources -- compared to reactive monitoring tools. As a result, they are often hosted in the cloud and delivered as a SaaS application, rather than in-house.
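One of the simplest forms of the analysis described above is statistical anomaly detection over collected metric history: flag any value that sits far outside the norm, with no fixed threshold configured in advance. The sketch below uses a basic z-score test with illustrative latency numbers; production tools apply much richer ML models across many data sources.

```python
import statistics

def anomalies(history, z_threshold=3.0):
    """Return values more than z_threshold standard deviations from the mean."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return []  # no variation, nothing to flag
    return [v for v in history if abs(v - mean) / stdev > z_threshold]

# Collected request latencies (ms) with one abnormal spike
latency_ms = [20, 21, 19, 22, 20, 21, 20, 19, 22, 21, 20, 21, 19, 20, 300]
print(anomalies(latency_ms))  # → [300]
```

The spike is flagged because it deviates from the learned baseline, not because anyone pre-set a "300 ms" alert -- the defining trait of the proactive approach.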
Several proactive monitoring tools, such as SolarWinds' Loggly and LogicMonitor, are available in a cloud-based, rather than on-premises, version to more efficiently scale and make use of underlying resources. Other examples include Dynatrace and Datadog, though those are geared specifically toward application monitoring.
Mind the gaps
Each of these tools has a place in the enterprise, both in terms of function and cost. Most companies need reactive monitoring to prevent monitoring gaps and continually track their environments. These tools are often less expensive and can be built in-house. Conversely, proactive monitoring tools run less often, as they focus on larger trends and abnormalities, but typically cost more, since they're often hosted off-site by a third party.
Ultimately, IT teams can use reactive monitoring to keep the lights on day to day and also use proactive monitoring to anticipate what's to come.