Figure 9. HP BladeSystem c-Class 576-node cluster configuration
[Diagram: 36 HP c7000 enclosures (each with 16 HP BL280c G6 server blades w/4x QDR HCAs and an HP 4x QDR IB interconnect switch) cabled to 16 external 36-port QDR IB switches]

Total nodes: 576 (1 per blade)
Total processor cores: 4,608 (2 Nehalem processors per node, 4 cores per processor)
Memory: 28 TB w/4 GB DIMMs (48 GB per node) or 55 TB w/8 GB DIMMs (96 GB per node)
Storage: 2 NHP SATA or SAS drives per node
Interconnect: 1:1 full bandwidth (non-blocking), 3 switch hops maximum, fabric redundancy
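To make the figure's numbers concrete, the short sketch below (an illustrative addition, not part of the original guide; the constant names are assumptions of mine) recomputes the table from the topology: 36 enclosures of 16 blades give 576 nodes, and the 16 uplinks per leaf switch exactly fill 16 external 36-port spine switches, which is what makes the fabric non-blocking at 1:1.

# Illustrative sketch (not from the HP guide): recompute Figure 9's totals.
# Constant names are my own, not HP terminology.

ENCLOSURES = 36        # HP c7000 enclosures, each acting as one leaf
BLADES_PER_ENCL = 16   # BL280c G6 blades = downlinks on each leaf switch
UPLINKS_PER_LEAF = 16  # QDR uplinks on each HP 4x QDR IB interconnect switch
SPINES = 16            # external 36-port QDR IB switches
SPINE_PORTS = 36

nodes = ENCLOSURES * BLADES_PER_ENCL   # 576 (1 per blade)
cores = nodes * 2 * 4                  # 4608: 2 Nehalem sockets x 4 cores each
mem_48 = nodes * 48                    # GB with 4 GB DIMMs (48 GB per node)
mem_96 = nodes * 96                    # GB with 8 GB DIMMs (96 GB per node)

# 1:1 (non-blocking): each leaf has as many uplinks as blades, and the
# spine tier has exactly enough ports to terminate every uplink.
assert UPLINKS_PER_LEAF == BLADES_PER_ENCL
assert ENCLOSURES * UPLINKS_PER_LEAF == SPINES * SPINE_PORTS   # 576 == 576

print(f"nodes={nodes}, cores={cores}")
print(f"memory: {mem_48/1000:.1f} TB or {mem_96/1000:.1f} TB")  # ~28 or ~55 TB
# Longest path: blade -> leaf -> spine -> leaf -> blade = 3 switch hops.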
The HP Unified Cluster Portfolio includes a range of hardware, software, and services that provide customers with a choice of pre-tested, pre-configured systems for simplified implementation, fast deployment, and standardized support.
HP solutions optimized for HPC:
HP Cluster Platforms – flexible, factory-integrated/tested systems built around specific platforms, backed by HP warranty and support, and built to uniform, worldwide specifications
HP Scalable File Share (HP SFS) – high-bandwidth, scalable HP storage appliance for Linux clusters
HP Financial Services Industry (FSI) solutions – defined solution stacks and configurations for real-time market data systems
HP and partner solutions optimized for scale-out database applications:
HP Oracle Exadata Storage
HP Oracle Database Machine
HP BladeSystem for Oracle Optimized Warehouse (OOW)
HP Cluster Platforms are built around specific hardware and software platforms and offer a choice of interconnects. For example, the HP Cluster Platform CL3000BL uses the HP BL2x220c G5, BL280c G6, and BL460c blade servers as compute nodes with a choice of GbE or InfiniBand interconnects. No longer unique to Linux or HP-UX environments, HPC clustering is now supported through Microsoft Windows HPC Server 2008, with native support for HP-MPI.
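HP-MPI implements the standard MPI interface, so a program written against MPI runs unchanged across these interconnects. As a hedged illustration (mine, not from the guide), here is a minimal hello-world using the mpi4py binding, kept in the same language as the earlier sketch; with HP-MPI itself the equivalent C program would be built with mpicc and launched with mpirun.

# Minimal MPI hello-world sketch (illustrative; not taken from the HP guide).
from mpi4py import MPI   # mpi4py binds to whatever MPI library is installed

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's rank within the job
size = comm.Get_size()   # total ranks, e.g. one per core across the cluster

print(f"Hello from rank {rank} of {size}")

Launched with, for example, mpirun -np 576 hello.py under an MPI-enabled Python, this would start one rank per node of the Figure 9 cluster.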