
Fully-Buffered DIMMs

Traditional DIMM architectures use a stub-bus topology with parallel branches (stubs) that connect to
a shared memory bus (Figure 13). Each DIMM connects to the data bus using a set of pin connectors.
In order for the electrical signals from the memory controller to reach the DIMM bus-pin connections at
the same time, all the traces have to be the same length. This can result in circuitous traces on the
motherboard between the memory controller and the memory slots. Both the latency introduced by this
complex trace routing and the signal degradation at the bus-pin connections cause the error rate to
rise as the bus speed increases.
Figure 13. Stub-bus topology.
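
To see why trace matching becomes harder at higher speeds, consider a rough illustration (not from the original brief): a signal on FR-4 board material propagates at roughly 170 ps per inch, so a fixed trace-length mismatch consumes a growing fraction of the bit time as the data rate rises. The short Python sketch below uses that assumed propagation delay; the data rates are the ones discussed in this section.

# Illustrative sketch: skew from a trace-length mismatch as a fraction of
# the bit time. The 170 ps/inch propagation delay is an assumed typical
# value for FR-4 traces, not a figure from the HP brief.
PROPAGATION_DELAY_PS_PER_INCH = 170

def skew_fraction(data_rate_mtps: float, mismatch_inches: float) -> float:
    """Fraction of the bit time consumed by trace-length skew."""
    bit_time_ps = 1e6 / data_rate_mtps   # e.g. 800 MT/s -> 1250 ps
    skew_ps = mismatch_inches * PROPAGATION_DELAY_PS_PER_INCH
    return skew_ps / bit_time_ps

for rate in (100, 400, 800, 1600):       # PC100 through DDR3-1600
    print(f"{rate:>4} MT/s: a 1-inch mismatch consumes "
          f"{skew_fraction(rate, 1.0):.0%} of the bit time")

At PC100 rates the skew is under 2 percent of the bit time; at DDR3-1600 rates it is over a quarter, which is why signal timing dominates the design at higher bus speeds.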
Each stub-bus connection creates an impedance discontinuity that degrades signal integrity. In
addition, each DIMM places an electrical load on the bus, and these loads accumulate as DIMMs
are added. Together, these factors reduce the number of DIMMs per channel that can be supported as
the bus speed increases. For example, Figure 14 shows the number of loads supported per channel at
data rates ranging from PC100 to DDR3-1600. Note that the number of supported loads drops from
eight to two as data rates increase to DDR2-800.
Figure 14. Maximum number of loads per channel based on DRAM data rate.
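
The following is a minimal sketch of the capacity tradeoff Figure 14 illustrates. The eight-load and two-load endpoints come from the text above; the intermediate DDR-400 value and the 4 GB DIMM size are illustrative assumptions only.

# DRAM data rate -> supported DIMM loads per channel. The endpoints are
# from the text; the DDR-400 entry and the DIMM size are assumptions.
LOADS_PER_CHANNEL = {
    "PC100":    8,
    "DDR-400":  4,   # assumed intermediate value
    "DDR2-800": 2,
}
DIMM_SIZE_GB = 4     # assumed DIMM capacity

for rate, loads in LOADS_PER_CHANNEL.items():
    capacity = loads * DIMM_SIZE_GB
    print(f"{rate:>8}: {loads} DIMMs/channel -> {capacity} GB per channel")

At a fixed DIMM size, each halving of the supported loads halves the memory a channel can hold, which is the capacity drop that drove the design choices described next.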
Increasing the number of channels to compensate for the drop in capacity per channel was not a
viable option because of the added cost and board complexity. System designers were left with two
options: limit memory capacity so that the bus could run at higher speeds with an acceptable error
rate, or keep bus speeds low and increase DRAM density. For future generations of high-performance
servers, neither option was acceptable.