HP 226824-001 - ProLiant - ML750 Introduction Guide

Abstract

Remote Direct Memory Access (RDMA) is a data exchange technology that improves network
performance by streamlining data processing operations. This technology brief describes how RDMA
can be applied to the two most common network interconnects, Ethernet and InfiniBand, to provide
efficient throughput in the data center.

Introduction

Advances in computing and storage technologies are placing a considerable burden on the data
center's network infrastructure. As network speeds increase and greater amounts of data are moved,
more processing power is required to handle data communication.
A typical data center today uses a variety of disparate interconnects for server-to-server and
server-to-storage links. The use of multiple system and peripheral bus interconnects decreases
compatibility, interoperability, and management efficiency, and it drives up the cost of the equipment,
software, training, and personnel needed to operate and maintain them. To increase efficiency and
lower costs, the data center network infrastructure must be transformed into a unified, flexible,
high-speed fabric.
Unified high-speed infrastructures require a high-bandwidth, low-latency fabric that can move data
efficiently and securely between servers, storage, and applications. Evolving fabric interconnects and
associated technologies provide more efficient and scalable computing and data transport within the
data center by reducing the overhead burden on processors and memory. More efficient
communication protocols and technologies, some of which run over existing infrastructures, free
processors for more useful work and improve infrastructure utilization. In addition, the ability of fabric
interconnects to converge data center functions onto fewer interconnects, possibly even a single
industry-standard interconnect, offers significant benefits.
Remote direct memory access (RDMA) is a data exchange technology that promises to accomplish
these goals and make iWARP (a protocol that specifies RDMA over TCP/IP) a reality. Applying RDMA
to switched-fabric infrastructures such as InfiniBand™ (IB) can enhance the performance of clustered
systems handling large data transfers.
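
To make the mechanism concrete, the sketch below is an illustration only, not part of this brief: it assumes the open-source libibverbs API, an already-connected InfiniBand or iWARP queue pair, and that the peer's buffer address and rkey have been exchanged out of band. It shows how an application posts a one-sided RDMA write so that the network adapter, rather than the host processor, moves the data directly between registered application buffers.

```c
/* Minimal sketch of a one-sided RDMA write with libibverbs.
 * Assumptions (not taken from this brief): "qp" is already connected,
 * "local_mr" was registered with ibv_reg_mr(), and the peer's buffer
 * address and rkey were exchanged out of band (e.g., over a socket). */
#include <infiniband/verbs.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

int post_rdma_write(struct ibv_qp *qp, struct ibv_cq *cq,
                    struct ibv_mr *local_mr, void *local_buf, size_t len,
                    uint64_t remote_addr, uint32_t rkey)
{
    /* Describe the local, pre-registered buffer. */
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = (uint32_t)len,
        .lkey   = local_mr->lkey,
    };

    /* One-sided write: the adapter places the data directly into the
     * remote application's registered memory; the remote CPU does not
     * touch the transfer. */
    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.wr_id               = 1;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.opcode              = IBV_WR_RDMA_WRITE;
    wr.send_flags          = IBV_SEND_SIGNALED;    /* request a completion */
    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    if (ibv_post_send(qp, &wr, &bad_wr))
        return -1;

    /* Poll the completion queue until the write finishes. */
    struct ibv_wc wc;
    int n;
    while ((n = ibv_poll_cq(cq, 1, &wc)) == 0)
        ;
    return (n < 0 || wc.status != IBV_WC_SUCCESS) ? -1 : 0;
}
```

Connection establishment, memory registration, and the exchange of the remote address and rkey are deliberately omitted here; in practice they are handled once at setup (for example with the rdma_cm library), after which data moves without further host-side copies.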

Limitations of TCP/IP

Transmission Control Protocol and Internet Protocol (TCP/IP) make up the suite of protocols that
drives the Internet. Every computer connected to the Internet uses these protocols to send and receive
information. Information is transmitted in fixed data formats (packets) so that heterogeneous systems
can communicate. The TCP/IP protocol stack was developed as an internetworking language that lets
all types of computers transfer data across different physical media. The suite includes over 70,000
software instructions that provide the necessary reliability mechanisms: error detection and
correction, sequencing, recovery, and other communications features.
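
As one concrete illustration of the work these mechanisms imply for every packet (not taken from this brief), the Internet checksum defined in RFC 1071 is among the values a receiving host must verify: a 16-bit one's-complement sum computed over the IP header, and again over each TCP segment. A minimal sketch of the calculation, assuming the bytes arrive in network (big-endian) order:

```c
/* RFC 1071 Internet checksum: one's-complement sum of 16-bit words.
 * Shown only to illustrate one of the per-packet operations a host-based
 * TCP/IP stack performs; real stacks also handle sequencing, ACKs,
 * retransmission, and congestion control on the same host processor. */
#include <stddef.h>
#include <stdint.h>

uint16_t internet_checksum(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t sum = 0;

    while (len > 1) {                      /* add 16-bit words */
        sum += ((uint32_t)p[0] << 8) | p[1];
        p   += 2;
        len -= 2;
    }
    if (len == 1)                          /* pad a trailing odd byte */
        sum += (uint32_t)p[0] << 8;

    while (sum >> 16)                      /* fold carries back into 16 bits */
        sum = (sum & 0xFFFF) + (sum >> 16);

    return (uint16_t)~sum;                 /* one's complement of the sum */
}
```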
Computers implement the TCP/IP protocol stack to process outgoing and incoming data packets.
Today, TCP/IP stacks are usually implemented in operating system software and packets are handled
by the main (host) processor. As a result, protocol processing of incoming and outgoing network
traffic consumes processor cycles—cycles that could otherwise be used for business and other
productivity applications. The processing work and associated time delays may also reduce the ability
of applications to scale across multiple servers. As network speeds move beyond 1 gigabit per
second (Gb/s) and larger amounts of data are transmitted, processors become burdened by TCP/IP
protocol processing and data movement.
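
For contrast with the RDMA sketch earlier, the hypothetical example below shows the conventional receive path this paragraph describes, using standard BSD sockets: the kernel's TCP/IP stack processes every packet on the host processor, and recv() then copies the payload from kernel socket buffers into the application's buffer, consuming CPU cycles and memory bus bandwidth on every transfer.

```c
/* Conventional sockets receive loop (POSIX/BSD sockets). Each recv() call
 * returns data that the kernel's TCP/IP stack has already processed on the
 * host CPU, copying it from kernel socket buffers into user memory. */
#include <sys/types.h>
#include <sys/socket.h>
#include <stdio.h>

#define BUF_SIZE 65536

/* Drain a connected TCP socket; returns the total number of bytes read. */
long drain_socket(int sockfd)
{
    char    buf[BUF_SIZE];
    long    total = 0;
    ssize_t n;

    /* Every iteration involves interrupt handling, protocol processing
     * (checksums, sequencing, ACK generation), and at least one
     * kernel-to-user copy, all performed by the host processor. */
    while ((n = recv(sockfd, buf, sizeof(buf), 0)) > 0)
        total += n;

    if (n < 0)
        perror("recv");
    return total;
}
```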
The burden of protocol stack processing is compounded by a finite amount of memory bus
bandwidth. Incoming network data consumes memory bus bandwidth because each data packet is
copied across the memory bus several times as it moves between the network interface, kernel
buffers, and application memory.