must be transferred in and out of memory several times (Figure 1): received data is written to the
device driver buffer, copied into an operating system (OS) buffer, and then copied into application
memory space.
Figure 1. Typical flow of network data in the receiving host (diagram components: network interface, chipset, CPU, and memory)
NOTE: The actual number of memory copies varies depending on the OS (example: Linux uses 2).
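To make this copy path concrete, the following minimal sketch shows a conventional receive loop written against the standard sockets API in C. It is an illustration rather than material from this paper, and the port number and buffer size are arbitrary; error handling is omitted. By the time recv() returns, the data has already been transferred from the adapter into driver buffers and copied into OS socket buffers, and recv() itself performs the final copy into application memory.

```c
/* Illustrative sketch only: a conventional sockets receive loop.
 * Each recv() implies at least one copy from OS socket buffers into
 * the application buffer, in addition to the NIC-to-driver transfer
 * shown in Figure 1. Error handling is omitted for brevity. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <unistd.h>

int main(void)
{
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { 0 };
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(5001);      /* arbitrary example port */

    bind(listener, (struct sockaddr *)&addr, sizeof(addr));
    listen(listener, 1);

    int conn = accept(listener, NULL, NULL);

    char app_buf[64 * 1024];                 /* application memory space */
    ssize_t n;
    /* Data has already been placed in driver buffers and copied into
     * OS buffers; recv() copies it once more into app_buf. */
    while ((n = recv(conn, app_buf, sizeof(app_buf), 0)) > 0) {
        /* process n bytes ... */
    }

    close(conn);
    close(listener);
    return 0;
}
```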
These copy operations add latency, consume memory bus bandwidth, and require host processor
(CPU) intervention. In fact, the TCP/IP protocol overhead associated with 1 Gb of Ethernet traffic can
increase system processor utilization by 20 to 30 percent. Consequently, software overhead for
10 Gb Ethernet operation has the potential to overwhelm system processors. An InfiniBand network that uses TCP operations for compatibility suffers from the same processing overhead problems as Ethernet networks.

RDMA solution

Inherent processor overhead and constrained memory bandwidth are performance obstacles for
networks that use TCP, whether out of necessity (Ethernet) or compatibility (InfiniBand).
For Ethernet, the use of a TCP/IP offload engine (TOE) and RDMA can diminish these obstacles. A
network interface card (NIC) with a TOE assumes TCP/IP processing duties, freeing the host
processor for other tasks. The capability of a TOE is defined by its hardware design, the OS
programming interface, and the application being run.
RDMA technology was developed to move data from the memory of one computer directly into the
memory of another computer with minimal involvement from their processors. The RDMA protocol
includes information that allows a system to place transferred data directly into its final memory
destination without additional or interim data copies. This "zero copy" or "direct data placement"
(DDP) capability provides the most efficient network communication possible between systems.
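The schematic fragment below illustrates direct data placement using the verbs programming interface (libibverbs). This is an illustrative assumption rather than material from this paper; queue-pair setup, completion handling, and the out-of-band exchange of the peer's buffer address and rkey are omitted. The adapter writes the registered local buffer straight into the peer's registered memory, with no interim copies and no remote CPU involvement.

```c
/* Schematic fragment only (assumes the libibverbs API; not from the
 * original paper). Queue-pair creation, completion polling, and the
 * exchange of remote_addr/rkey are omitted. */
#include <infiniband/verbs.h>
#include <stdint.h>
#include <string.h>

/* Post a zero-copy RDMA WRITE: the adapter places 'len' bytes from the
 * local registered buffer directly into the peer's registered buffer at
 * remote_addr/rkey, bypassing OS buffers on both sides. */
int rdma_write_example(struct ibv_pd *pd, struct ibv_qp *qp,
                       void *buf, size_t len,
                       uint64_t remote_addr, uint32_t rkey)
{
    /* Pin and register the local buffer so the adapter can DMA from it.
     * Deregistration is omitted in this sketch. */
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len, IBV_ACCESS_LOCAL_WRITE);
    if (!mr)
        return -1;

    struct ibv_sge sge = {
        .addr   = (uintptr_t)buf,
        .length = (uint32_t)len,
        .lkey   = mr->lkey,
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_WRITE;  /* direct data placement */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED;
    wr.wr.rdma.remote_addr = remote_addr;        /* peer's registered memory */
    wr.wr.rdma.rkey        = rkey;               /* peer's access key */

    return ibv_post_send(qp, &wr, &bad_wr);      /* 0 on success */
}
```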
Since the intent of both a TOE and RDMA is to relieve host processors of network overhead, they are
sometimes confused with each other. However, the TOE is primarily a hardware solution that
specifically takes responsibility for TCP/IP operations, while RDMA is a protocol solution that operates
at the upper layers of the network communication stack. Consequently, TOEs and RDMA can work
together: a TOE can provide localized connectivity with a device while RDMA enhances the data
throughput with a more efficient protocol.
For InfiniBand, RDMA operations provide an even greater performance benefit since InfiniBand
architecture was designed with RDMA as a core capability (no TOE needed).
RDMA provides a faster path for applications to transmit messages between network devices and is
applicable to both Ethernet and InfiniBand. Both interconnects can support new and existing
network standards such as Sockets Direct Protocol (SDP), iSCSI Extensions for RDMA (iSER), Network
File System (NFS), Direct Access File System (DAFS), and Message Passing Interface (MPI).
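As an illustration of this compatibility, the short MPI sketch below (assumed standard MPI calls; not taken from this paper) moves a message with ordinary MPI_Send and MPI_Recv. Whether the transfer runs over TCP on Ethernet, RDMA over Ethernet, or native InfiniBand is decided by the MPI library and the fabric beneath it, so the application code does not change.

```c
/* Minimal sketch (standard MPI API; not from the original paper).
 * The application codes to the MPI interface; the interconnect and
 * any RDMA acceleration are handled by the MPI library underneath. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double payload[1024] = { 0 };

    if (rank == 0) {
        /* Rank 0 sends; an RDMA-capable transport can place the buffer
         * directly into rank 1's memory without intermediate copies. */
        MPI_Send(payload, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(payload, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received message\n");
    }

    MPI_Finalize();
    return 0;
}
```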