One of the biggest problems you are likely to encounter when benchmarking a system is that your benchmarking data will never be 100% accurate. Benchmarking tools are applications, and like any other application, they consume system resources as they run. The data presented to you is therefore skewed by the benchmarking tool's own overhead: the more resources the benchmarking software consumes, the less accurate its data. In other words, minimizing the benchmarking software's use of system resources improves the accuracy of the benchmarking data.
So what can you do to reduce your benchmarking software's overhead? Sometimes, nothing: every benchmarking tool is different, and some tools allow a higher degree of control than others. In most cases, though, you will likely find that a few simple adjustments can improve benchmarking accuracy.
Benchmark performance and sampling frequency
One of the most effective techniques for reducing the load placed on a system by benchmarking software is to adjust the sampling frequency. Most benchmarking software does not report performance data in real time. Instead, it checks various aspects of system performance on a periodic basis. Imagine, for example, what would happen to your system's performance if your benchmarking software polled the CPU for performance data every few milliseconds. In this type of situation, the benchmarking software would likely place a huge burden on the CPU. The end result would probably be greatly diminished performance and a benchmarking report indicating that the system's CPU is completely inadequate for the server's workload, when in reality the server's normal workload is nowhere near the reported level.
While this is an extreme example, it shows how benchmarking data can be skewed by the demands that the benchmarking software places on the system. You can decrease the load on the system by sampling less frequently. For example, if your benchmarking software polled the CPU for performance data once every 20 seconds, the effect would hardly be noticeable -- a stark contrast to what would likely happen if the CPU were polled every few milliseconds.
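A rough back-of-the-envelope model makes the point concrete. The per-sample cost below (1 ms of CPU time per poll) is an assumed, illustrative figure, not a measurement from any particular tool; the sampler's CPU overhead is simply the cost of one sample divided by the interval between samples:

```python
def sampler_cpu_fraction(cost_per_sample_s: float, interval_s: float) -> float:
    """Fraction of one CPU consumed by a sampler that spends
    cost_per_sample_s of CPU time on each sample, taken every interval_s."""
    return cost_per_sample_s / interval_s

# Assume each poll costs 1 ms of CPU time (an illustrative figure).
cost = 0.001

print(sampler_cpu_fraction(cost, 0.005))  # every 5 ms: about 20% of a CPU
print(sampler_cpu_fraction(cost, 20.0))   # every 20 s: a negligible fraction
```

The same per-sample cost that is invisible at a 20-second interval becomes a fifth of a CPU core at a 5-millisecond interval.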
But the choice of sampling frequency implies an important tradeoff. Lowering the sampling frequency decreases the load that the benchmarking software places on the system, which in turn reduces the skew in each measurement. It is important to realize, however, that the reported data only reflects how the system was performing at the exact moments that the samples were captured.
This is a very important distinction because system performance tends to fluctuate. For example, on a healthy system, the average CPU utilization should remain below about 80%. Even so, it is normal for the CPU utilization to occasionally spike to 100%. With that in mind, imagine that you adjusted your sampling frequency to examine CPU performance once an hour, and that the polling just happened to occur during one of the usage spikes. If that happened, the report would indicate that the CPU's usage was much higher than it really is on average.
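This spike-aliasing effect is easy to demonstrate with a synthetic trace. The numbers below are invented for illustration: a steady 50% baseline with a one-minute 100% spike at the end of each hour, polled hourly at exactly the wrong moments:

```python
def build_trace(hours=3, base=50, spike=100,
                spike_minutes=frozenset({59, 119, 179})):
    """Synthetic minute-by-minute CPU utilization: a steady baseline
    with a brief 100% spike at the end of each hour."""
    return [spike if m in spike_minutes else base
            for m in range(hours * 60)]

def sample(trace, every, offset=0):
    """Poll the trace every `every` minutes, starting at `offset`."""
    return trace[offset::every]

trace = build_trace()
true_avg = sum(trace) / len(trace)        # just under 51% in reality

# Hourly polling that happens to land on each spike:
unlucky = sample(trace, every=60, offset=59)
reported = sum(unlucky) / len(unlucky)    # 100% -- wildly misleading
```

Three unlucky samples report a CPU pegged at 100%, even though the true average over the same period is barely above the 50% baseline.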
Determining the optimal sampling frequency is something of an art form. If you sample performance data too often, you can place such a load on the system that your results become completely inaccurate. On the other hand, if you don't sample the performance data often enough, the data may be accurate, but it may not reflect the system's true performance. The key to adjusting the sampling frequency is to collect a data set large enough to be representative without overwhelming the system. Because every benchmarking tool is different, you will probably have to experiment with the tool to find the optimal sampling frequency. The software's publisher may also provide recommendations that you can follow.
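One way to experiment without touching a production box is to replay a recorded utilization trace offline and compare each candidate interval's sample mean against the true mean. The trace below is synthetic (a seeded random walk around 55% utilization, purely for illustration), but the same sweep works on real logged data:

```python
import random

random.seed(42)  # deterministic synthetic trace for this sketch
trace = [min(100.0, max(0.0, random.gauss(55, 15)))
         for _ in range(24 * 3600)]        # one second-by-second "day"
true_mean = sum(trace) / len(trace)

for interval in (1, 10, 60, 600, 3600):    # candidate intervals, in seconds
    samples = trace[::interval]
    err = abs(sum(samples) / len(samples) - true_mean)
    print(f"every {interval:>4}s: {len(samples):>6} samples, "
          f"mean error {err:.3f} points")
```

The sweep shows where accuracy stops improving: once the error plateaus, a longer interval buys lower overhead at essentially no cost in representativeness.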
Benchmark performance and excessive counters
Another thing that you can do to minimize the benchmarking software's impact on system performance is to limit your benchmark data collection to what is really important. For example, the Windows Performance Monitor contains thousands of different counters, each of which reflects one individual aspect of system performance. As such, it might be tempting to monitor all of the available counters so that you can get a complete picture of what is going on with the system.
For every counter that you use, however, the load on the system increases. Monitoring two or three counters may not place a noticeable load on the system, but you could completely overwhelm the server if you try to monitor all of the available counters.
So how do you know which counters to monitor? The answer is easier than you might expect. Most of the counters that Microsoft provides have very little value in day-to-day performance monitoring; such counters exist primarily for troubleshooting situations. Therefore, your best bet is to stick to the basics (processor, memory, disk and network counters), since those will be the most relevant to day-to-day benchmarking.
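Conceptually, a lean monitor only pays for the counters you register. The sketch below is a hypothetical monitor class, not any real tool's API; the counter names follow the PerfMon path convention, and the read functions are stand-ins for real OS queries:

```python
class CounterMonitor:
    """Hypothetical sketch: poll only the counters you register, so
    per-poll cost stays proportional to the handful you care about
    rather than the thousands that exist."""
    def __init__(self):
        self._counters = {}

    def add(self, name, read_fn):
        self._counters[name] = read_fn

    def poll(self):
        return {name: fn() for name, fn in self._counters.items()}

mon = CounterMonitor()
# A "stick to the basics" set, named in PerfMon path style. The
# lambdas return fixed stand-in values; a real tool would query the OS.
mon.add(r"\Processor(_Total)\% Processor Time", lambda: 42.0)
mon.add(r"\Memory\Pages/sec", lambda: 3.1)
mon.add(r"\PhysicalDisk(_Total)\Avg. Disk Queue Length", lambda: 0.2)
mon.add(r"\Network Interface(*)\Bytes Total/sec", lambda: 1.5e6)

snapshot = mon.poll()   # four readings per poll instead of thousands
```

Each poll touches only four values, so the monitor's cost stays flat no matter how many counters the OS could expose.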
Granted, my example focused specifically on the Windows Performance Monitor, but some third-party benchmarking tools use Performance Monitor counters under the hood. Even if the product you use does not, the underlying principle still holds: monitor fewer aspects of system performance and you will get better results.
Benchmark performance and data storage
Although I have seen a few benchmarking tools that only report the current performance data, most of the tools on the market log the performance data as it is collected. That way, you can analyze performance data over long periods of time to spot server performance trends. If the benchmarking product that you are using logs performance data, then you may be able to improve the accuracy of your data by carefully deciding where the data should be stored.
Writing each sample across the network to a central repository consumes network bandwidth and CPU cycles on the monitored server, whereas logging to a local disk adds far less overhead. Of course, it is a lot easier to analyze benchmarking data if you have centrally collected it on a single server rather than storing each server's performance data locally. Because of this, you might have to make some decisions as to what is more important to you -- accuracy or convenience.
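If you do opt for central collection, one compromise between accuracy and convenience is to buffer samples locally and ship them in batches, so the monitored server pays the network cost once per batch rather than once per sample. This is a sketch of the idea, not any product's feature; `ship_fn` stands in for whatever actually sends data to the central server:

```python
class BufferedShipper:
    """Sketch: accumulate samples in memory and hand them to ship_fn
    in batches, cutting the number of network sends the monitored
    server has to make."""
    def __init__(self, ship_fn, batch_size=100):
        self._ship = ship_fn
        self._batch_size = batch_size
        self._buffer = []

    def record(self, sample):
        self._buffer.append(sample)
        if len(self._buffer) >= self._batch_size:
            self.flush()

    def flush(self):
        if self._buffer:
            self._ship(list(self._buffer))   # one send for the whole batch
            self._buffer.clear()

# Stand-in "network": shipped batches just land in a list.
shipped = []
shipper = BufferedShipper(shipped.append, batch_size=10)
for i in range(25):
    shipper.record({"t": i, "cpu": 50})
shipper.flush()
# 25 samples travel in only 3 sends: two full batches plus the final flush.
```

Batching does mean a crash can lose the unflushed tail of the buffer, which is the usual price of trading per-sample durability for lower overhead.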
Think about benchmark overhead
Unfortunately, benchmarking data will never be 100% accurate. The benchmarking tool's overhead always causes data to be skewed to at least some degree. However, you can minimize the skew by minimizing the load that your benchmarking software places on the system.