Hyper-Threading is Intel’s implementation of simultaneous multithreading on Pentium 4 processors. It allows multiple threads to execute concurrently on a single physical processor. Hyper-Threading was first announced in the fall of 2001 and made available in early 2002, and it has since become widespread on desktop PCs. According to Intel, Hyper-Threading can increase performance by up to 30% over an identical processor without it.
Originally codenamed ‘Jackson’, Hyper-Threading was first announced at the annual Intel Developer Forum in 2001. Intel was not, however, the first company to develop simultaneous multithreading. In 1999, at the Microprocessor Forum in San Jose, CA, Compaq announced that it had achieved just that with its Alpha EV8 processor. Unfortunately, the project was terminated prematurely and the processor never shipped. Intel revived and refined the technology, introducing it in its Xeon line of processors in 2002. In November 2002, Hyper-Threading came to the desktop PC market: the 3.06-gigahertz (GHz) Pentium 4 was the first desktop processor to support it.
To understand Hyper-Threading, you must first understand the basics of how a processor works. A diagram of a very basic CPU can be found in Appendix A. As an example, consider a program that adds 7 and 10 and stores the result in the accumulator.
MVI A, 07h
ADI 0Ah
HLT
The program is stored in RAM; in hexadecimal, it is 3E, 07, C6, 0A, 76. First, the CPU fetches the first byte, 3E, and places it in the instruction register. The instruction decoder recognizes it as a ‘move immediate’ instruction, so the next byte, 07, is moved into the accumulator. Each time a byte is fetched from RAM, the program counter is incremented to point to the next location. Next, the CPU fetches C6, which decodes as an ‘add immediate’ instruction. The control logic tells the arithmetic logic unit (ALU) to add the next byte to whatever is in the accumulator, so 0A is fetched and added to the 07 already there, giving 11 hexadecimal, or 17 in base 10. Finally, 76 is fetched, a ‘halt’ instruction that tells the processor to stop.
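The fetch-decode-execute cycle described above is easy to simulate. The sketch below (in Python, standing in for the hypothetical accumulator machine; the opcode values are the ones from the example) walks the same five bytes:

```python
# Minimal fetch-decode-execute simulation of the example program.
# Opcodes follow the encoding used in the example:
#   0x3E = MVI A (move immediate into accumulator)
#   0xC6 = ADI   (add immediate to accumulator)
#   0x76 = HLT   (halt)

def run(ram):
    pc = 0    # program counter
    acc = 0   # accumulator
    while True:
        opcode = ram[pc]        # fetch the next instruction byte
        pc += 1                 # increment the program counter
        if opcode == 0x3E:      # MVI A: load the next byte into the accumulator
            acc = ram[pc]
            pc += 1
        elif opcode == 0xC6:    # ADI: add the next byte to the accumulator
            acc = (acc + ram[pc]) & 0xFF
            pc += 1
        elif opcode == 0x76:    # HLT: stop and return the accumulator
            return acc
        else:
            raise ValueError(f"unknown opcode {opcode:#x}")

program = [0x3E, 0x07, 0xC6, 0x0A, 0x76]   # MVI A,07h / ADI 0Ah / HLT
print(hex(run(program)))                    # 0x11 hexadecimal = 17 decimal
```

Running the program returns 11 hexadecimal, matching the hand trace above.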
Modern processors are much more complicated and have many more registers than the one used in the above example. A diagram of an Intel Pentium 4 processor can be found in Appendix B, where you can see the differences between a basic CPU and a modern one.
The diagram on the right represents a single-threaded processor. The colored boxes in RAM signify different threads waiting to be executed. The ‘front end’ section of the CPU is where instructions are fetched, decoded, and re-ordered; the ‘execution core’ is where they are executed.
With this type of processor, only one thread can execute at a time, represented by the red blocks in the CPU.
Also, notice the empty blocks. These represent slots where the CPU was unable to do any useful work, called pipeline bubbles. Bubbles occur for many reasons, such as an instruction waiting on data from memory or a thread that is not ready to execute. These empty slots are not recoverable; the wasted cycles remain lost through the execution of the process.
Single-threaded SMP
One solution for speeding up execution is to use multiple processors: for each additional processor, another thread can execute at the same time. This is called symmetric multiprocessing (SMP).
In the diagram above, each CPU has access to RAM and is executing a different thread. The biggest problem with this solution is the number of empty boxes, or pipeline bubbles, that remain. Adding more CPUs increases performance, but it does not improve efficiency.
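The software side of SMP is easy to demonstrate: the operating system can schedule independent workers onto different processors. The sketch below (Python; the prime-counting workload and the chunk sizes are my own illustrative choices, not from the text) splits a CPU-bound job across worker processes that the OS may place on separate CPUs:

```python
from multiprocessing import Pool

def count_primes(bounds):
    """Trial-division prime count over [lo, hi) -- a CPU-bound task."""
    lo, hi = bounds
    total = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

if __name__ == "__main__":
    # Split one big job into chunks; the OS may run each worker
    # on a different processor (physical or logical).
    chunks = [(0, 5000), (5000, 10000), (10000, 15000), (15000, 20000)]
    with Pool() as pool:
        results = pool.map(count_primes, chunks)
    print(sum(results))  # total primes below 20000
```

Each worker runs independently, so with four CPUs all four chunks can execute at once; the bubbles inside each individual pipeline, however, are untouched.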
To help alleviate the problem of pipeline bubbles, a CPU must be able to execute more than one thread at once; such a CPU is said to be multithreaded. One method of doing this is called super-threading.
Super-threaded CPU
The diagram on the right illustrates this technique. First, notice that there are fewer pipeline bubbles; right away this improves the efficiency of the processor. Also, notice the arrows to the left of the diagram, which emphasize how the processor can mix instructions from different threads. In any given clock cycle, each pipeline stage may hold instructions from only one thread, but different stages can hold instructions from different threads. This allows multiple threads to make progress with each CPU clock cycle.
Hyper-Threading, or simultaneous multithreading (SMT), takes this idea even further: it allows instructions from different threads to occupy the same pipeline stage in the same clock cycle. This minimizes the number of bubbles and maximizes CPU efficiency.
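The difference between these three designs can be made concrete with a toy scheduling model (Python; this is my own simplified simulation, not a cycle-accurate model of any real CPU). Each thread is a list of ‘issue groups’: the number of instructions it has ready in a cycle, with 0 meaning a stall. A 4-wide CPU then fills its issue slots under each policy, and unfilled slots are the pipeline bubbles:

```python
WIDTH = 4  # issue slots available per clock cycle

def run_single(threads, width=WIDTH):
    """Single-threaded CPU: run each thread to completion in turn."""
    cycles = bubbles = 0
    for thread in threads:
        for ready in thread:                 # 0 = stall cycle
            cycles += 1
            bubbles += width - min(ready, width)
    return cycles, bubbles

def run_super(threads, width=WIDTH):
    """Super-threading: only ONE thread may issue per cycle, but the
    CPU can pick a thread that is not currently stalled."""
    queues = [list(t) for t in threads]
    cycles = bubbles = 0
    while any(queues):
        cycles += 1
        stalled = set()
        for i, q in enumerate(queues):       # stalled threads wait this cycle out
            if q and q[0] == 0:
                q.pop(0)
                stalled.add(i)
        issued = 0
        for i, q in enumerate(queues):       # first ready thread gets the cycle
            if i not in stalled and q:
                issued = min(q.pop(0), width)
                break
        bubbles += width - issued
    return cycles, bubbles

def run_smt(threads, width=WIDTH):
    """Simultaneous multithreading: instructions from ALL ready threads
    may share the issue slots of a single cycle."""
    queues = [list(t) for t in threads]
    cycles = bubbles = 0
    while any(queues):
        cycles += 1
        stalled = set()
        for i, q in enumerate(queues):
            if q and q[0] == 0:
                q.pop(0)
                stalled.add(i)
        slots = width
        for i, q in enumerate(queues):       # pack slots from every ready thread
            if slots == 0:
                break
            if i in stalled or not q:
                continue
            take = min(q[0], slots)
            q[0] -= take
            if q[0] == 0:
                q.pop(0)
            slots -= take
        bubbles += slots
    return cycles, bubbles

# Two identical threads: issue 2 instructions, stall, repeat (16 total).
workload = [[2, 0, 2, 0, 2, 0, 2], [2, 0, 2, 0, 2, 0, 2]]
for name, policy in [("single", run_single), ("super", run_super), ("SMT", run_smt)]:
    cycles, bubbles = policy(workload)
    print(f"{name:>6}: {cycles} cycles, {bubbles} bubbles")
```

In this toy workload the single-threaded CPU wastes 40 of its 56 issue slots as bubbles; super-threading cuts that to 16 by switching threads during stalls, and SMT cuts it further to 12 by letting both threads share the slots of a single cycle.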
This is the biggest strength of Hyper-Threading: it allows one CPU to do the work of two with greater efficiency. To achieve this, a hyper-threaded CPU is divided into two logical CPUs. Each logical CPU has its own architectural state, which includes the general-purpose registers, control registers, the program counter, the advanced programmable interrupt controller (APIC), and some machine state registers. Other resources, such as caches, control logic, and buses, are shared by the two logical processors. Because the architectural state is duplicated, the operating system sees two processors.
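Software can observe this duplication directly, because the processor count the OS reports is a count of logical processors. A quick check in Python (`os.cpu_count()` is part of the standard library; the specific counts in the comments assume a single-core, hyper-threaded Pentium 4):

```python
import os

# The OS schedules work onto LOGICAL processors. On a single-core
# Pentium 4 with Hyper-Threading enabled this reports 2; with it
# disabled, 1. On modern multi-core chips it reports the total number
# of hardware threads (cores x threads per core).
print("logical processors:", os.cpu_count())
```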
The operating system can then schedule processes on both logical processors as if they were two physical processors. This can increase performance by up to thirty percent, according to Intel. Many people have tested Hyper-Threading on their own and drawn their own conclusions; I, too, have run my own tests.
For the tests, I used my current PC; complete specifications of the test computer can be found in Appendix C. To perform the tests, I used PCMark 2004 v1.2. First, I restarted the PC and disabled Hyper-Threading in the BIOS configuration. After the PC started up, I stopped all processes that run automatically on startup, leaving a total of 23 Windows XP processes running. I then ran the testing software. The same procedure was used for testing with Hyper-Threading enabled. Both tests were performed twice, on different days.
After the first round of testing, there was an overall improvement of 12.2% with Hyper-Threading enabled, including a 16.8% improvement in the CPU category, according to PCMark. The second round of testing showed even better results: a 13% overall improvement with Hyper-Threading and an 18.5% improvement in the CPU category. Complete results can be found in Appendix D. While 13% is good, it is clearly not the 30% that Intel claims. Perhaps the biggest performance improvement comes when a user multitasks: some testers report increases of up to 47% when running two applications at once, such as a virus scanner and a video encoder.
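The percentages above are straightforward to derive from raw benchmark scores. A small helper shows the arithmetic (Python; the example scores are hypothetical, not the actual PCMark results from Appendix D):

```python
def improvement(baseline, score):
    """Percent improvement of a benchmark score over a baseline run."""
    return (score - baseline) / baseline * 100

# Hypothetical scores: 4000 with Hyper-Threading off, 4520 with it on.
print(round(improvement(4000, 4520), 1))  # a 13.0% overall improvement
```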
Since Hyper-Threading became available in 2002, it has become increasingly popular among home PC users, who benefit most from the performance it adds. Since it was incorporated into the Pentium 4 line, the product family has grown to include processors from 2.8 GHz up to 3.8 GHz.