
Abstract

Remote Direct Memory Access (RDMA) is a data exchange technology that improves network
performance by streamlining data processing operations. This technology brief describes how RDMA
can be applied to the two most common network interconnects, Ethernet and InfiniBand, to provide
efficient throughput in the data center.

Introduction

Advances in computing and storage technologies are placing a considerable burden on the data
center's network infrastructure. As network speeds increase and greater amounts of data are moved,
more processing power is required to handle data communication.
A typical data center today uses a variety of disparate interconnects for server-to-server and
server-to-storage links. The use of multiple system and peripheral bus interconnects decreases
compatibility, interoperability, and management efficiency and drives up the cost of equipment,
software, training, and the personnel needed to operate and maintain them. To increase efficiency
and lower costs, data center network infrastructure must be transformed into a unified, flexible,
high-speed fabric.
Unified high-speed infrastructures require a high-bandwidth, low-latency fabric that can move data
efficiently and securely between servers, storage, and applications. Evolving fabric interconnects and
associated technologies provide more efficient and scalable computing and data transport within the
data center by reducing the overhead burden on processors and memory. More efficient
communication protocols and technologies, some of which run over existing infrastructures, free
processors for more useful work and improve infrastructure utilization. In addition, the ability of fabric
interconnects to converge functions in the data center over fewer, or possibly even one, industry-
standard interconnect presents significant benefits.
Remote direct memory access (RDMA) is a data exchange technology that promises to accomplish
these goals through iWARP, a protocol that specifies RDMA over TCP/IP. Applying RDMA to
switched-fabric interconnects such as InfiniBand™ (IB) can likewise enhance the performance of
clustered systems handling large data transfers.
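
To make the mechanism concrete, the following minimal C sketch (an illustrative example using the
open-source libibverbs API, not code from this brief) shows the step that distinguishes RDMA from
conventional networking: an application registers a buffer with the network adapter, after which an
authorized remote peer can read or write that memory directly, without per-packet involvement of
the host processor.

    /* Minimal RDMA memory-registration sketch using libibverbs
     * (illustrative only; compile with: gcc rdma_reg.c -libverbs) */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        struct ibv_device **devs = ibv_get_device_list(NULL);
        if (!devs || !devs[0]) {
            fprintf(stderr, "no RDMA-capable device found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);   /* protection domain */

        /* Register an application buffer with the adapter. Once
         * registered, an authorized remote peer can target this memory
         * with RDMA read/write operations; the adapter moves the data
         * without copies performed by the host CPU. */
        size_t len = 4096;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);

        /* The peer needs only the buffer address and mr->rkey to
         * address this memory in an RDMA work request. */
        printf("registered %zu bytes, rkey=0x%x\n", len, mr->rkey);

        ibv_dereg_mr(mr);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        free(buf);
        return 0;
    }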

Limitations of TCP/IP

Transmission Control Protocol/Internet Protocol (TCP/IP) is the suite of protocols that drives the
Internet. Every computer connected to the Internet uses these protocols to send and receive
information. Information is transmitted in fixed data formats (packets), so that heterogeneous systems
can communicate. The TCP/IP protocol stack was developed to be an internetworking language
for all types of computers to transfer data across different physical media. The TCP/IP protocol
suite comprises over 70,000 software instructions that provide the necessary reliability mechanisms:
error detection and correction, sequencing, recovery, and other communications features.
Computers rely on the TCP/IP protocol stack to process outgoing and incoming data packets.
Today, the stack is usually implemented in operating system software, so packets are handled
by the main (host) processor. As a result, protocol processing of incoming and outgoing network
traffic consumes processor cycles—cycles that could otherwise be used for business and other
productivity applications. The processing work and associated time delays may also reduce the ability
of applications to scale across multiple servers. As network speeds move beyond 1 gigabit per
second (Gb/s) and larger amounts of data are transmitted, processors become burdened by TCP/IP
protocol processing and data movement.
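
By way of contrast, the following C fragment (a generic Berkeley-sockets example, not taken from
this brief; the address and port are placeholders) illustrates the conventional path just described:
every send() call crosses into the operating system, which copies the application buffer into kernel
memory and performs TCP/IP protocol processing for each segment on the host processor.

    /* Conventional TCP send path: each call crosses into the kernel,
     * which copies the buffer and performs TCP/IP protocol processing
     * (checksums, segmentation, sequencing) on the host CPU. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);

        struct sockaddr_in peer = { 0 };
        peer.sin_family = AF_INET;
        peer.sin_port   = htons(9000);                    /* placeholder port */
        inet_pton(AF_INET, "192.0.2.10", &peer.sin_addr); /* placeholder host */

        if (connect(s, (struct sockaddr *)&peer, sizeof peer) < 0) {
            perror("connect");
            return 1;
        }

        char buf[64 * 1024];
        memset(buf, 'x', sizeof buf);

        /* Each iteration: a user-to-kernel copy, then TCP segmentation,
         * checksumming, and header generation -- all on the host CPU.
         * RDMA-capable adapters move this work off the processor. */
        for (int i = 0; i < 1000; i++) {
            if (send(s, buf, sizeof buf, 0) < 0) {
                perror("send");
                break;
            }
        }

        close(s);
        return 0;
    }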
The burden of protocol stack processing is compounded by a finite amount of memory bus
bandwidth. Incoming network data consumes the memory bus bandwidth because each data packet