Its advantages are listed as improved support for multi-threaded code (allowing multiple threads to run simultaneously), improved reaction and response times, and an increased number of users a server can support.
Hyper-Threading works by duplicating certain sections of the processor – those that store the architectural state – but not duplicating the main execution resources. This allows a Hyper-Threading equipped processor to pretend to be two "logical" processors to the host operating system, allowing the operating system to schedule two threads or processes simultaneously. When execution resources in a non-Hyper-Threading capable processor would go idle because the processor is stalled, a Hyper-Threading equipped processor may use those execution resources to execute the other scheduled task. (Reasons for the processor to stall include a cache miss, a branch misprediction and waiting for results of previous instructions before the current one can be executed.)
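A processor advertises this capability through its CPUID instruction. As a minimal sketch, assuming an x86 machine and a GCC or Clang compiler that provides the <cpuid.h> helper, the following program reads the HTT feature flag (CPUID leaf 1, EDX bit 28); note that the flag only reports the capability, not whether Hyper-Threading is actually enabled.

    /* Minimal sketch: query the CPUID HTT flag (leaf 1, EDX bit 28) using
     * the GCC/Clang <cpuid.h> helper.  The flag only indicates that the
     * package can report multiple logical processors; whether
     * Hyper-Threading is enabled depends on the BIOS and operating system. */
    #include <stdio.h>
    #include <cpuid.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
            fprintf(stderr, "CPUID leaf 1 not supported\n");
            return 1;
        }

        /* Bit 28 of EDX is the HTT (multi-threading) feature flag. */
        printf("HTT flag: %s\n", (edx & (1u << 28)) ? "set" : "clear");
        return 0;
    }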
This innovation is transparent to operating systems and programs. All that is required to take advantage of Hyper-Threading is SMP (symmetric multiprocessing) support in the operating system, as the logical processors appear as standard separate processors.
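To illustrate how little a program has to do, the following minimal sketch (assuming a Linux/glibc system, compiled with -pthread) simply asks the operating system how many processors are online, a count that includes the logical processors, and starts one worker thread per processor.

    /* Minimal POSIX sketch: the logical processors show up as ordinary CPUs,
     * so an SMP-aware program just counts them and starts one worker thread
     * per processor.  Assumes a Linux/glibc system; compile with -pthread. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <pthread.h>

    static void *worker(void *arg)
    {
        long id = (long)arg;
        printf("worker %ld running\n", id);
        return NULL;
    }

    int main(void)
    {
        long ncpus = sysconf(_SC_NPROCESSORS_ONLN);  /* logical CPUs online */
        pthread_t *threads = malloc(ncpus * sizeof *threads);

        printf("operating system sees %ld processors\n", ncpus);
        for (long i = 0; i < ncpus; i++)
            pthread_create(&threads[i], NULL, worker, (void *)i);
        for (long i = 0; i < ncpus; i++)
            pthread_join(threads[i], NULL);

        free(threads);
        return 0;
    }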
However, it is possible to optimise operating system behaviour on Hyper-Threading capable systems, such as the Linux techniques discussed in Kernel Traffic (http://kt.zork.net/kernel-traffic/kt20020902_182#21). (One such optimisation concerns a dual-processor system where both processors are capable of Hyper-Threading. The cost of moving a process from one logical processor to its sibling on the same physical processor is almost nothing, because the two logical processors share the same caches; moving it to the other physical processor, by contrast, discards that cache affinity, so there are significant reasons to keep processes on the same physical processor.)
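The scheduler normally handles this automatically, but the idea can also be sketched from user space. The sketch below (a minimal illustration, assuming a Linux/glibc system) pins the calling process to two logical CPUs with sched_setaffinity; the CPU numbers 0 and 2 are purely an example, since which logical CPUs are siblings on one physical package has to be read from /sys/devices/system/cpu/cpuN/topology/ on the actual machine.

    /* Minimal Linux sketch of setting processor affinity: pin the current
     * process to two logical CPUs.  CPU numbers 0 and 2 are example values
     * only; the sibling relationship must be checked under
     * /sys/devices/system/cpu/cpuN/topology/ on the target machine. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <sched.h>

    int main(void)
    {
        cpu_set_t mask;

        CPU_ZERO(&mask);
        CPU_SET(0, &mask);   /* first logical CPU of the chosen package */
        CPU_SET(2, &mask);   /* its sibling (example numbering only) */

        /* pid 0 means "the calling process". */
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("process bound to logical CPUs 0 and 2\n");
        return 0;
    }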
According to Intel, the first implementation only used an additional 5% of the die area over the "normal" processor.