Fully-Buffered DIMMs

Traditional DIMM architectures use a stub-bus topology with parallel branches (stubs) that connect to
a shared memory bus (Figure 13). Each DIMM connects to the data bus using a set of pin connectors.
In order for the electrical signals from the memory controller to reach the DIMM bus-pin connections at
the same time, all the traces have to be the same length. This can result in circuitous traces on the
motherboard between the memory controller and memory slots. Both the latency resulting from
complex routing of traces and signal degradation at the bus-pin connections cause the error rate to
increase as the bus speed increases.
Figure 13. Stub-bus topology. An impedance discontinuity is created at each stub-bus connection.
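As a rough illustration of why trace-length matching matters, the sketch below (Python) estimates the
timing skew that a length mismatch introduces between two traces and compares it to the bit time at
several data rates. The propagation velocity (about half the speed of light on FR-4) and the 25 mm
mismatch are assumed example values, not figures from this brief.

    # Illustrative estimate of the timing skew caused by mismatched trace lengths.
    # The propagation velocity (~0.5c on FR-4) and the 25 mm mismatch are assumed
    # example values, not figures from this brief.

    C = 3.0e8                  # speed of light in vacuum, m/s
    V_PROP = 0.5 * C           # assumed signal velocity on FR-4 traces

    def skew_ps(mismatch_mm: float) -> float:
        """Timing skew in picoseconds caused by a trace-length mismatch."""
        return (mismatch_mm / 1000.0) / V_PROP * 1e12

    def bit_time_ps(data_rate_mtps: float) -> float:
        """Bit time in picoseconds for a given data rate in megatransfers per second."""
        return 1e6 / data_rate_mtps

    if __name__ == "__main__":
        mismatch = 25.0                      # assumed 25 mm length difference
        for rate in (100, 400, 800):         # PC100, DDR-400, DDR2-800 data rates
            print(f"{rate:>3} MT/s: bit time {bit_time_ps(rate):7.1f} ps, "
                  f"skew from {mismatch:.0f} mm mismatch = {skew_ps(mismatch):5.1f} ps")

Under these assumptions, the skew is a small fraction of the 10 ns bit time at 100 MT/s but consumes
more than a tenth of the bit time at 800 MT/s, which is why length matching, and the circuitous routing
it forces, becomes harder as speeds rise.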
Each stub-bus connection creates an impedance discontinuity that negatively affects signal integrity. In
addition, each DIMM places an electrical load on the bus, and these loads accumulate as DIMMs are
added. Together, these factors decrease the number of DIMMs per channel that can be supported as the
bus speed increases. For example, Figure 14 shows the number of loads supported per channel at data
rates ranging from PC100 to DDR3-1600. Note that the number of supported loads drops from eight to
two as data rates increase to DDR2-800.
Figure 14. Maximum number of loads per channel based on DRAM data rate.
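To make the capacity consequence concrete, the short sketch below multiplies the supported loads per
channel by an assumed DIMM size. Only the end points, eight loads at PC100 and two loads at DDR2-800,
come from the text above; the 2 GB DIMM size is a hypothetical value used purely for illustration.

    # Capacity per channel at the two load limits cited above. The eight-load and
    # two-load figures come from the text; the 2 GB DIMM size is an assumed value.

    ASSUMED_DIMM_GB = 2  # hypothetical per-DIMM capacity

    loads_per_channel = {
        "PC100":    8,   # eight loads supported per channel (from the text)
        "DDR2-800": 2,   # drops to two loads per channel (from the text)
    }

    for bus, loads in loads_per_channel.items():
        print(f"{bus:>8}: {loads} DIMMs x {ASSUMED_DIMM_GB} GB = "
              f"{loads * ASSUMED_DIMM_GB} GB per channel")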
Increasing the number of channels to compensate for the drop in capacity per channel was not a
viable option because of the added cost and board complexity. That left system designers with two
options: limit memory capacity so that fewer errors occur at higher speeds, or use slower bus speeds
and increase the DRAM density. For future generations of high-performance servers, neither option was
acceptable.
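The trade-off behind those two options can be sketched with back-of-the-envelope numbers: keep the
high data rate and accept a lightly populated channel, or run the bus slower and populate more, denser
DIMMs. The DIMM counts, DIMM sizes, and 8-byte channel width below are illustrative assumptions, not
specifications from this brief.

    # Back-of-the-envelope comparison of the two options described above.
    # All DIMM counts, DIMM sizes, and the 8-byte channel width are assumptions.

    BYTES_PER_TRANSFER = 8  # assumed 64-bit data channel

    def channel_summary(name: str, data_rate_mtps: int, dimms: int, dimm_gb: int) -> None:
        capacity_gb = dimms * dimm_gb
        bandwidth_gbs = data_rate_mtps * BYTES_PER_TRANSFER / 1000.0
        print(f"{name}: {capacity_gb:2d} GB per channel, "
              f"{bandwidth_gbs:.1f} GB/s peak bandwidth")

    # Option 1: keep the high data rate but limit the channel to two DIMMs.
    channel_summary("Fast bus, limited capacity", 800, dimms=2, dimm_gb=2)

    # Option 2: slow the bus down and use more, denser DIMMs per channel.
    channel_summary("Slow bus, higher capacity ", 400, dimms=4, dimm_gb=4)

Under these assumed numbers, the designer trades roughly half the per-channel bandwidth for four times
the capacity, which is the kind of compromise the preceding paragraph describes as unacceptable for
future high-performance servers.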