The programming model for the InfiniBand transport assumes that an application accesses at least one Send queue and one Receive queue to initiate I/O. The transport layer supports four types of data transfers for the Send queue:
Send/Receive – Typical operation in which one node sends a message and another node receives the message.
RDMA Write – Operation in which one node writes data directly into a memory buffer of a remote node.
RDMA Read – Operation in which one node reads data directly from a memory buffer of a remote node.
RDMA Atomics – Operation that atomically updates a memory location from the HCA's perspective.
The only operation available for the Receive queue is the Post Receive Buffer operation, which identifies a buffer into which a remote node may place data using a Send or RDMA Write data transfer.
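As a concrete illustration, the minimal C sketch below shows how these queue operations surface in the OpenFabrics libibverbs API. The queue pair qp, memory region mr, buffer buf, and the remote address and rkey are assumed to have been created and exchanged beforehand; they are illustrative placeholders, not something defined by this guide.

    /* Minimal libibverbs sketch of the queue operations described above.
     * Assumes an already-connected queue pair (qp), a registered memory
     * region (mr) covering buf, and a remote buffer address/rkey exchanged
     * out of band. Error handling is reduced to return codes. */
    #include <infiniband/verbs.h>
    #include <stddef.h>
    #include <stdint.h>

    static int post_example(struct ibv_qp *qp, struct ibv_mr *mr, void *buf,
                            size_t len, uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)buf,
            .length = (uint32_t)len,
            .lkey   = mr->lkey,
        };

        /* Receive queue: Post Receive Buffer -- names a buffer that an
         * incoming message may be placed into. */
        struct ibv_recv_wr rwr = { .wr_id = 1, .sg_list = &sge, .num_sge = 1 };
        struct ibv_recv_wr *bad_rwr;
        if (ibv_post_recv(qp, &rwr, &bad_rwr))
            return -1;

        /* Send queue: RDMA Write -- writes buf directly into remote memory.
         * Other Send-queue operations use the opcodes IBV_WR_SEND,
         * IBV_WR_RDMA_READ, and (for RDMA Atomics)
         * IBV_WR_ATOMIC_FETCH_AND_ADD / IBV_WR_ATOMIC_CMP_AND_SWP. */
        struct ibv_send_wr swr = {
            .wr_id      = 2,
            .sg_list    = &sge,
            .num_sge    = 1,
            .opcode     = IBV_WR_RDMA_WRITE,
            .send_flags = IBV_SEND_SIGNALED,
        };
        swr.wr.rdma.remote_addr = remote_addr; /* target buffer on remote node */
        swr.wr.rdma.rkey        = rkey;
        struct ibv_send_wr *bad_swr;
        return ibv_post_send(qp, &swr, &bad_swr) ? -1 : 0;
    }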
Scale-out clusters built on InfiniBand and HP technology
In the past few years, scale-out cluster computing has become a mainstream architecture for high-performance computing (HPC). As the technology matures and becomes more affordable, scale-out clusters are being adopted in a broader market beyond HPC; the HP Oracle Database Machine is one example. The industry trend is toward using space- and power-efficient blade systems as building blocks for scale-out solutions. HP BladeSystem c-Class solutions offer significant savings in power, cooling, and data center floor space without compromising performance.
The c7000 enclosure supports up to 16 half-height or 8 full-height server blades and includes rear mounting bays for management and interconnect components. Each server blade includes mezzanine connectors for I/O options such as the HP 4x QDR IB mezzanine card. HP c-Class server blades are available in two form factors and several server-node configurations to meet various density goals. To meet extreme density goals, the half-height HP BL2x220c server blade includes two server nodes. Each node can support two quad-core Intel Xeon 5400-series processors and a slot for a mezzanine board, providing a maximum of 32 nodes and 256 cores per c7000 enclosure.
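For reference, the arithmetic behind those maximums can be checked with the short sketch below; the constants mirror the figures in the text and assume a fully populated enclosure of BL2x220c blades.

    /* Back-of-the-envelope density check for the figures quoted above.
     * Constants are taken from the text, not from an HP sizing tool. */
    #include <stdio.h>

    int main(void)
    {
        const int blades_per_enclosure = 16; /* half-height bays in a c7000 */
        const int nodes_per_blade      = 2;  /* BL2x220c packs two server nodes */
        const int sockets_per_node     = 2;  /* quad-core Xeon 5400 series */
        const int cores_per_socket     = 4;

        int nodes = blades_per_enclosure * nodes_per_blade;
        int cores = nodes * sockets_per_node * cores_per_socket;
        printf("nodes per enclosure: %d\n", nodes);  /* 32 */
        printf("cores per enclosure: %d\n", cores);  /* 256 */
        return 0;
    }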
NOTE:
The DDR HCA mezzanine card should be installed in a PCIe x8 connector for maximum InfiniBand performance. The QDR HCA mezzanine card is supported on ProLiant G6 blades with PCIe x8 Gen 2 mezzanine connectors.
Figure 9 shows a full-bandwidth fat-tree configuration of HP BladeSystem c-Class components providing 576 nodes in a cluster. Each c7000 enclosure includes an HP 4x QDR IB Switch, which provides 16 downlinks for server blade connections and 16 QSFP uplinks for fabric connectivity.
Spine-level fabric connectivity is provided through sixteen 36-port Voltaire 4036 QDR InfiniBand Switches². The Voltaire 36-port switches provide 40-Gbps (per port) performance and offer fabric management capabilities.
² Qualified, marketed, and supported by HP.
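A rough sizing sketch of this topology follows, using only the port counts quoted above. The mapping of exactly one uplink from each enclosure switch to each spine switch is an assumption about the figure, not a statement from it.

    /* Sizing sketch for the full-bandwidth fat tree of Figure 9.
     * Port counts come from the text; the one-uplink-per-spine layout
     * is an assumption. */
    #include <stdio.h>

    int main(void)
    {
        const int downlinks_per_leaf = 16; /* server-facing ports per c7000 switch */
        const int uplinks_per_leaf   = 16; /* QSFP fabric ports per c7000 switch */
        const int spine_ports        = 36; /* Voltaire 4036 port count */

        /* Full bandwidth: uplinks match downlinks, one uplink to each spine. */
        int spines     = uplinks_per_leaf;                /* 16 spine switches */
        int enclosures = spine_ports;                     /* 36 leaves, one per spine port */
        int nodes      = enclosures * downlinks_per_leaf; /* 36 * 16 = 576 */

        printf("spine switches: %d\n", spines);
        printf("enclosures:     %d\n", enclosures);
        printf("cluster nodes:  %d\n", nodes);
        return 0;
    }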