Figure 9. HP BladeSystem c-Class 576-node cluster configuration
[Figure: 36 HP c7000 enclosures (numbered 1 through 36), each holding 16 HP BL280c G6 server blades with 4x QDR HCAs and an HP 4x QDR IB Interconnect Switch, connected by 16 uplinks per enclosure to 16 external 36-port QDR IB switches]

Total nodes: 576 (1 per blade)
Total processor cores: 4608 (2 Nehalem processors per node, 4 cores per processor)
Memory: 28 TB w/ 4 GB DIMMs (48 GB per node) or 55 TB w/ 8 GB DIMMs (96 GB per node)
Storage: 2 NHP SATA or SAS drives per node
Interconnect: 1:1 full bandwidth (non-blocking), 3 switch hops maximum, fabric redundancy
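The totals above follow directly from the per-node values. The short sketch below (plain C, not from the HP document) derives them; the 16-uplinks-per-enclosure figure used for the non-blocking check is an assumption based on the 16 blades per enclosure shown in the figure.

/* Sketch: derive the Figure 9 totals from per-node values.
 * Constants mirror the table above; nothing here is measured. */
#include <stdio.h>

int main(void)
{
    const int enclosures       = 36;   /* HP c7000 enclosures          */
    const int blades_per_encl  = 16;   /* HP BL280c G6 per enclosure   */
    const int sockets_per_node = 2;    /* Nehalem processors per node  */
    const int cores_per_socket = 4;

    const int nodes = enclosures * blades_per_encl;                  /* 576  */
    const int cores = nodes * sockets_per_node * cores_per_socket;   /* 4608 */

    /* Memory totals, quoted in decimal TB as in the table. */
    const double tb_48gb = nodes * 48.0 / 1000.0;   /* 27.6 -> ~28 TB */
    const double tb_96gb = nodes * 96.0 / 1000.0;   /* 55.3 -> ~55 TB */

    printf("nodes: %d, cores: %d\n", nodes, cores);
    printf("memory: %.0f TB (48 GB/node) or %.0f TB (96 GB/node)\n",
           tb_48gb, tb_96gb);

    /* 1:1 (non-blocking) check for each enclosure switch:
     * assumes 16 uplinks to match the 16 server-facing ports. */
    const int uplinks_per_encl = 16;
    printf("non-blocking (1:1): %s\n",
           uplinks_per_encl == blades_per_encl ? "yes" : "no");
    return 0;
}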
The HP Unified Cluster Portfolio includes a range of hardware, software, and services that give
customers a choice of pre-tested, pre-configured systems for simplified implementation, fast
deployment, and standardized support.
HP solutions optimized for HPC:
• HP Cluster Platforms – flexible, factory integrated/tested systems built around specific platforms, backed by HP warranty and support, and built to uniform, worldwide specifications
• HP Scalable File Share (HP SFS) – high-bandwidth, scalable HP storage appliance for Linux clusters
• HP Financial Services Industry (FSI) solutions – defined solution stacks and configurations for real-time market data systems

HP and partner solutions optimized for scale-out database applications:
• HP Oracle Exadata Storage
• HP Oracle Database Machine
• HP BladeSystem for Oracle Optimized Warehouse (OOW)
HP Cluster Platforms are built around specific hardware and software platforms and offer a choice of
interconnects. For example, the HP Cluster Platform CL3000BL uses HP BL2x220c G5, BL280c
G6, and BL460c blade servers as compute nodes with a choice of GbE or InfiniBand
interconnects. No longer limited to Linux or HP-UX environments, HPC clustering is now supported
on Microsoft Windows Compute Cluster Server 2003, with native support for HP-MPI.
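HP-MPI implements the standard MPI interface, so applications written against MPI run unchanged across these cluster platforms. As a generic illustration only (not taken from the HP documentation), a minimal MPI program in C looks like the following; the mpicc/mpirun commands in the comment are the usual MPI toolchain convention and may differ per installation.

/* Minimal MPI "hello" in standard C; HP-MPI and other MPI
 * implementations accept this unchanged.  Typical build/run
 * (commands vary by installation):
 *     mpicc hello.c -o hello && mpirun -np 4 ./hello
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank = 0, size = 0;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime  */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank    */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total ranks in the job */

    printf("rank %d of %d\n", rank, size);

    MPI_Finalize();                         /* shut down cleanly      */
    return 0;
}

Each rank would map to one of the cluster's processor cores, with inter-node traffic carried over the GbE or InfiniBand fabric chosen for the platform.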