Instructions per second (IPS) is a measure of a computer's processor speed. For CISC computers, different instructions take different amounts of time, so the measured value depends on the instruction mix; even when comparing processors in the same family, the IPS measurement can be problematic. Many reported IPS values have represented "peak" execution rates on artificial instruction sequences with few branches and no cache contention, whereas realistic workloads typically lead to significantly lower IPS values. The memory hierarchy also greatly affects processor performance, an issue barely considered in IPS calculations. Because of these problems, synthetic benchmarks such as Dhrystone are now generally used to estimate computer performance in commonly used applications, and raw IPS has fallen into disuse.
The term is commonly used in association with a metric prefix, as in thousand (kilo) instructions per second (kIPS), million instructions per second (MIPS), and billion instructions per second (GIPS).
Computing
IPS can be calculated using this equation:

IPS = sockets × (cores per socket) × (clock rate) × (instructions per cycle)
However, the instructions/cycle measurement depends on the instruction sequence, the data and external factors.
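As a rough illustration, the equation above can be evaluated directly. The figures below are hypothetical examples, not measurements of any real processor:

```python
def ips(sockets: int, cores_per_socket: int, clock_hz: float, ipc: float) -> float:
    """Peak instructions per second for a multi-core system,
    per the equation above: sockets x cores x clock x IPC."""
    return sockets * cores_per_socket * clock_hz * ipc

# Hypothetical system: 1 socket, 4 cores, 3 GHz clock, 2 instructions per cycle
peak = ips(sockets=1, cores_per_socket=4, clock_hz=3.0e9, ipc=2.0)
print(f"{peak / 1e6:.0f} MIPS")  # prints "24000 MIPS"
```

Note that this yields a peak figure only; as the text explains, the effective IPC term varies with the instruction sequence, the data, and external factors.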
Instructions per cycle
See "Instructions per cycle" (IPC) for values for various processors.
Thousand instructions per second (TIPS/KIPS)
Before standard benchmarks were available, the average speed rating of computers was based on calculations for a mix of instructions, with the results given in thousand instructions per second (kIPS). The most famous was the Gibson Mix, produced by Jack Clark Gibson of IBM for scientific applications. Other ratings, such as the ADP mix, which does not include floating-point operations, were produced for commercial applications. The thousand instructions per second (kIPS) unit is rarely used today, as most current microprocessors can execute at least a million instructions per second.
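A mix rating of this kind weights the execution time of each instruction class by its assumed frequency in the workload. The sketch below uses made-up weights and timings, not the actual Gibson Mix figures:

```python
# Hypothetical instruction mix: class -> (weight, execution time in microseconds).
# Weights and timings are illustrative only, not the real Gibson Mix values.
mix = {
    "load/store":    (0.31, 2.0),
    "fixed add/sub": (0.26, 1.5),
    "branch":        (0.22, 1.0),
    "other":         (0.21, 3.0),
}

# Weighted average time per instruction, in microseconds
avg_time_us = sum(weight * time_us for weight, time_us in mix.values())

# One instruction per avg_time_us microseconds -> thousand instructions per second
kips = 1_000.0 / avg_time_us
print(f"{kips:.0f} kIPS")  # prints "538 kIPS"
```

The rating is thus a single weighted-harmonic figure for a whole machine, which is exactly why it was superseded by benchmarks that run real instruction streams.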
Millions of instructions per second (MIPS)
The speed of a given CPU depends on many factors, such as the type of instructions being executed, the execution order and the presence of branch instructions (problematic in CPU pipelines). CPU instruction rates are different from clock frequencies, usually reported in Hz, as each instruction may require several clock cycles to complete or the processor may be capable of executing multiple independent instructions simultaneously. MIPS can be useful when comparing performance between processors made with similar architecture (e.g. Microchip branded microcontrollers), but they are difficult to compare between differing CPU architectures. This led to the term "Meaningless Indices of Performance" being popular amongst technical people by the mid-1980s.
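The distinction between clock rate and instruction rate can be made concrete with a short sketch. The clock rates and CPI values below are illustrative assumptions:

```python
def mips(clock_hz: float, cycles_per_instruction: float) -> float:
    """Instruction rate in MIPS, given clock rate and average
    cycles per instruction (CPI)."""
    return clock_hz / cycles_per_instruction / 1e6

# The same hypothetical 100 MHz clock yields very different MIPS figures:
print(mips(100e6, 4.0))  # prints 25.0  -> 25 MIPS (4 cycles per instruction)
print(mips(100e6, 0.5))  # prints 200.0 -> 200 MIPS (superscalar, 2 instructions/cycle)
```

A CPI below 1 corresponds to a superscalar processor retiring multiple instructions per cycle, which is why clock frequency alone does not determine the instruction rate.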
For this reason, MIPS has become not a measure of instruction execution speed, but task performance speed compared to a reference. In the late 1970s, minicomputer performance was compared using VAX MIPS, where computers were measured on a task and their performance rated against the VAX 11/780 that was marketed as a 1 MIPS machine. (The measure was also known as the VAX Unit of Performance or VUP.) This was chosen because the 11/780 was roughly equivalent in performance to an IBM System/370 model 158-3, which was commonly accepted in the computing industry as running at 1 MIPS.
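The relative-to-reference scheme described above amounts to scaling a reference rating by the ratio of run times on the same task. The task times below are hypothetical:

```python
def vax_mips(ref_time_s: float, test_time_s: float, ref_rating: float = 1.0) -> float:
    """Rate a machine against a reference rated at ref_rating MIPS
    (the VAX-11/780 convention), using run times on the same task."""
    return ref_rating * ref_time_s / test_time_s

# Hypothetical task: reference VAX-11/780 takes 60 s, machine under test takes 15 s
print(vax_mips(60.0, 15.0))  # prints 4.0 -> "4 VAX MIPS" (4 VUPs)
```

The machine under test never has its instructions counted at all; the "MIPS" figure is purely a throughput ratio against the 1 MIPS reference.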
Many minicomputer performance claims were based on the Fortran version of the Whetstone benchmark, giving Millions of Whetstone Instructions Per Second (MWIPS). The VAX 11/780 with FPA (1977) runs at 1.02 MWIPS.
Effective MIPS speeds are highly dependent on the programming language used. The Whetstone Report has a table showing MWIPS speeds of PCs via early interpreters and compilers up to modern languages. The first PC compiler was for BASIC (1982) when a 4.8 MHz 8088/87 CPU obtained 0.01 MWIPS. Results on a 2.4 GHz Intel Core 2 Duo (1 CPU 2007) vary from 9.7 MWIPS using BASIC Interpreter, 59 MWIPS via BASIC Compiler, 347 MWIPS using 1987 Fortran, 1,534 MWIPS through HTML/Java to 2,403 MWIPS using a modern C/C++ compiler.
For most early 8-bit and 16-bit microprocessors, performance was measured in thousand instructions per second (1,000 kIPS = 1 MIPS).
zMIPS refers to the MIPS measure used internally by IBM to rate its mainframe servers (zSeries, IBM System z9, and IBM System z10).
Weighted million operations per second (WMOPS) is a similar measurement, used for audio codecs.
See also
- TOP500
- FLOPS - floating-point operations per second
- SUPS
- Benchmark (computing)
- BogoMips (measurement of CPU speed made by the Linux kernel)
- Instructions per cycle
- Cycles per instruction
- Dhrystone (benchmark) - DMIPS integer benchmark
- Whetstone (benchmark) - floating-point benchmark
- Million service units (MSU)
- Orders of magnitude (computing)
- Performance per watt