Benchmarks can be run in virtually any type of system environment, including batch and online job streams, and with users linked to the system directly or through telecommunications. Common benchmarks measure the speed of the central processor executing a typical mix of instructions in a set of programs, as well as its handling of multiple job streams in a multiprogramming environment. The same benchmark run on several different computers makes apparent any speed and performance differences attributable to the central processor. Benchmarks can also be centered around the expected language mix of the programs that will be run, a mix of different types of programs, or applications with widely varying input and output volumes and requirements. Response time for sending data to and receiving data from terminals is an additional benchmark for comparing systems.
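To make the procedure concrete, the following is a minimal benchmark harness sketched in Python; the workloads, sizes, and function names are illustrative assumptions rather than any vendor's actual suite. It times a toy central-processor instruction mix and a toy input/output job, so running the same script on several machines exposes their relative processor and I/O speeds.

    import os
    import time

    def cpu_mix(iterations=1_000_000):
        """A toy 'instruction mix': integer, floating-point, and branch work."""
        total = 0.0
        for i in range(iterations):
            if i % 2:                 # branch
                total += i * 1.5      # floating-point multiply and add
            else:
                total -= i // 3       # integer divide and subtract
        return total

    def io_mix(path="bench_scratch.dat", records=10_000):
        """A toy I/O workload: write, then read back, fixed-size records."""
        record = b"x" * 512
        with open(path, "wb") as f:
            for _ in range(records):
                f.write(record)
        with open(path, "rb") as f:
            while f.read(512):
                pass
        os.remove(path)

    def run_benchmark(name, job):
        """Time one workload and report elapsed wall-clock seconds."""
        start = time.perf_counter()
        job()
        elapsed = time.perf_counter() - start
        print(f"{name:10s} {elapsed:8.3f} s")

    if __name__ == "__main__":
        # The same two workloads, timed on several machines, make
        # processor and I/O speed differences directly comparable.
        run_benchmark("cpu mix", cpu_mix)
        run_benchmark("i/o mix", io_mix)

Because each workload isolates one resource, a machine that is fast on the CPU mix but slow on the I/O mix reveals that imbalance directly in the two timings.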
Sometimes, rather than running actual benchmark jobs on computer systems, systems simulators are used to determine performance differences. In commercial systems simulators, the workload of a system is defined in terms of, say, how many input and output operations there are, how many instructions a computation executes, and the order in which work is processed. These specifications are fed into a simulator that stores data about the characteristics of particular equipment (such as instruction speed, channel capacity, and read/write times). The simulator in turn processes the workload data against the operating characteristics and prepares a report of the expected results as if the actual computer had been used. The system characteristics can then be changed to mimic another model of computer, and a new set of performance data produced for comparison. The time and expense of running actual benchmark programs on a computer are of concern to analyst and vendor alike; thus, the use of commercial simulators is an attractive alternative.
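The sketch below shows the shape of such a simulator in miniature, written in Python; the machine figures, workload numbers, and names are illustrative assumptions, and the model deliberately ignores the queuing, channel overlap, and multiprogramming effects a commercial simulator would capture. A workload specification is evaluated against stored equipment characteristics, and swapping in a different machine model produces a new set of performance estimates for comparison.

    from dataclasses import dataclass

    @dataclass
    class Machine:
        """Operating characteristics of a candidate computer (illustrative)."""
        name: str
        instructions_per_sec: float   # processor instruction speed
        io_ops_per_sec: float         # combined channel / read-write rate

    @dataclass
    class Workload:
        """Workload specification fed to the simulator (illustrative)."""
        name: str
        instructions: float           # instructions executed per job
        io_operations: float          # input/output operations per job
        jobs: int                     # number of jobs in the stream

    def simulate(workload: Workload, machine: Machine) -> float:
        """Estimate elapsed seconds for a workload on a machine.

        A deliberately crude model: CPU time and I/O time are simply
        summed per job, with no overlap or contention.
        """
        cpu_time = workload.instructions / machine.instructions_per_sec
        io_time = workload.io_operations / machine.io_ops_per_sec
        return workload.jobs * (cpu_time + io_time)

    if __name__ == "__main__":
        payroll = Workload("payroll", instructions=5e6,
                           io_operations=2e4, jobs=100)
        # Changing the machine characteristics "mimics" another model
        # of computer and yields new performance data for comparison.
        for machine in (
            Machine("Model A", instructions_per_sec=2e6, io_ops_per_sec=500),
            Machine("Model B", instructions_per_sec=5e6, io_ops_per_sec=400),
        ):
            t = simulate(payroll, machine)
            print(f"{machine.name}: estimated {t:,.1f} s for {payroll.name}")

Defining the workload separately from the machine is the essential design point: the same job description can be "run" against any number of candidate computers without ever loading a program onto them.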