One of the most serious problems encountered with benchmarks is the integrity of their numbers. You’ve probably heard that statistics can lie, and the same is true of benchmarks. For benchmarks to provide you with reliable results, you must take some precautions:
- Note the complete system configuration When you run a benchmark and record a result, be sure to note the entire system configuration (i.e., CPU, RAM, cache, OS version, etc.).
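One way to make this habit automatic is to capture the configuration in the same file as the benchmark numbers. A minimal sketch using only Python's standard `platform` module (the field names here are illustrative, not a standard schema):

```python
import json
import platform

def capture_config():
    """Record the host configuration so the result can be interpreted later."""
    return {
        "machine": platform.machine(),        # e.g. x86_64
        "processor": platform.processor(),    # CPU description (may be empty on some OSes)
        "os": platform.system(),              # e.g. Linux, Windows
        "os_version": platform.version(),
        "python": platform.python_version(),  # the benchmark's own runtime version
    }

# Save the configuration next to the benchmark numbers, not in a separate note.
record = {"config": capture_config(), "results": {}}
print(json.dumps(record, indent=2))
```

Storing the configuration and the scores in one record means a result can never become separated from the system that produced it.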
- Run the same benchmark on every system Benchmarks are still software, and the way the benchmark code is written can affect the results it produces on a given computer. Two different versions of the same benchmark will often yield two different results. When you use benchmarks to compare systems, be sure to use the same program and version number.
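A simple safeguard is to tag every result with the benchmark's name and version, and refuse to compare mismatched runs. A sketch of the idea (the benchmark name and version strings are hypothetical):

```python
BENCHMARK_NAME = "disk_seq_read"   # hypothetical benchmark identifier
BENCHMARK_VERSION = "2.1"          # bump whenever the benchmark code changes

def tag_result(score):
    """Attach the benchmark name and version to a score."""
    return {"benchmark": BENCHMARK_NAME, "version": BENCHMARK_VERSION, "score": score}

def comparable(a, b):
    """Two results may be compared only if both name and version match."""
    return a["benchmark"] == b["benchmark"] and a["version"] == b["version"]

r1 = tag_result(412.0)
r2 = {"benchmark": "disk_seq_read", "version": "2.0", "score": 455.0}
print(comparable(r1, r2))  # prints False: different versions, so no comparison
```

The check costs nothing and prevents the most common cross-version mistake from slipping into a report.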
- Minimize hardware differences between hardware platforms A computer is an assembly of many interdependent sub-assemblies (i.e., motherboard, drive controllers, drives, CPU, etc.), but when a benchmark is run to compare a difference between systems, that difference can be masked by other elements in the system. For example, suppose you’re using a benchmark to test the hard-drive data transfer on two systems. Different hard drives and drive controllers will yield different results (that’s expected). However, even if you’re using identical drives and controllers, other differences between the systems (such as BIOS versions, TSRs, OS differences, or motherboard chipsets) can also skew the results.
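Before comparing numbers from two systems, it helps to diff their recorded configurations and flag any difference beyond the component under test. A minimal sketch, assuming each system's configuration is recorded as a simple dictionary (the field names and values are illustrative):

```python
def config_diff(cfg_a, cfg_b, expected=("hard_drive",)):
    """Return config keys that differ beyond the component under test.
    Any unexpected difference may be masking (or inflating) the result."""
    keys = set(cfg_a) | set(cfg_b)
    return sorted(k for k in keys
                  if cfg_a.get(k) != cfg_b.get(k) and k not in expected)

# Hypothetical configurations for two test systems.
a = {"hard_drive": "Model X", "bios": "1.04", "chipset": "440BX"}
b = {"hard_drive": "Model Y", "bios": "1.10", "chipset": "440BX"}
print(config_diff(a, b))  # prints ['bios']: a difference that could skew the comparison
```

An empty list means the systems differ only in the component you intended to compare; anything else is a confounding variable to note alongside the results.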
- Run the benchmarks under the same load The results generated by a benchmark do not guarantee the same level of performance under “real-world” applications. This was one of the flaws of early computer benchmarking—small, tightly written benchmark code produced artificially high scores, yet the system still performed poorly when real applications were used. Use benchmarks that make use of (or simulate) actual programs, or otherwise simulate your true workload.
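The contrast between a tight synthetic loop and a workload-style benchmark can be sketched with a small timing harness; both workloads below are hypothetical stand-ins for whatever your system actually runs:

```python
import time

def benchmark(workload, repeats=5):
    """Run the workload several times and keep the best wall-clock time,
    a common convention for reducing interference from background load."""
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    return min(timings)

# A tight synthetic loop can post impressive numbers while telling you little...
def synthetic():
    total = 0
    for i in range(100_000):
        total += i

# ...so prefer a workload that mimics what the system actually does, e.g.
# building, parsing, and summarizing records the way a real application would.
def realistic():
    rows = ["user,%d,%d" % (i, i * 3) for i in range(5_000)]
    parsed = [line.split(",") for line in rows]
    sum(int(cols[2]) for cols in parsed)

print("synthetic: %.4fs  realistic: %.4fs"
      % (benchmark(synthetic), benchmark(realistic)))
```

Whichever workload you choose, run it under the same background load on every system being compared.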