HP 376227-B21 - ProLiant InfiniBand 4x Fabric Copper Switch Implementation Guide

Overview

This whitepaper explains how to implement an Oracle Application Server (AS) J2EE solution using HP
ProLiant servers, HP StorageWorks storage, and HP InfiniBand™ options for the database connection. The
whitepaper is intended for system administrators, system architects, and systems integrators who are
considering the advantages of HP InfiniBand-based systems in an Oracle AS J2EE environment.

Oracle Application Server overview

Oracle AS is a fully integrated application server, including a J2EE-compatible container, an
enterprise portal, wireless support, business intelligence, single sign-on using an LDAP-compliant
directory, and much more. It is the integrated nature of Oracle AS that distinguishes it in the
marketplace. Oracle strives not only to provide a container for J2EE applications, but also to enable
J2EE applications to be managed more easily, deployed as web services, accessed via a single sign-on,
made more secure, and tied more closely to business-level activities.
Typically, Oracle AS is installed on its own server or set of servers, and these servers access an
Oracle database instance on another server across a network. The most common network connection
between the Oracle AS servers and the Oracle database server is TCP/IP over Ethernet (a minimal
connect-descriptor sketch follows the component list below).
A typical Oracle Application Server system has the following components:
- One or more application servers
- Oracle database instance
- Public local area network (LAN)
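
As an illustration of the typical TCP/IP path, the sketch below shows a minimal Oracle Net connect
descriptor that the application servers could use to reach the database. The alias, host name, port,
and service name are hypothetical placeholders, not values taken from this guide.

    # tnsnames.ora on the Oracle AS machines (hypothetical names)
    ORCL =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = orcl))
      )

A J2EE data source on the application server would then reference the ORCL alias (or the equivalent
JDBC URL) to reach the database over the public LAN.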

Advantages of InfiniBand in an Oracle Application Server environment

The most common method of communication between application server machines and database
server machines today is TCP/IP over Ethernet. TCP/IP and Ethernet both have limitations. The TCP/IP
processing on the application server and database server machines can result in substantial CPU
utilization on both machines with typical latencies of 50 to 100 microseconds (µs). Additionally,
observed Gigabit Ethernet throughput is limited to approximately 0.8 gigabits per second (Gbps) on
a single network interface card (NIC).
InfiniBand, a standards-based alternative to Ethernet, overcomes the performance limitations of
TCP/IP-based solutions. The key advantages of InfiniBand over TCP/IP and Ethernet are InfiniBand's
high throughput, low CPU utilization, and ultra-low latency. The current InfiniBand bandwidth of
10 Gbps¹ provides observed performance of over 8 Gbps (the 10 Gbps signaling rate carries 8 Gbps of
data after 8b/10b encoding). CPU utilization of only 1-3% with latency of less than 5 µs is possible.
InfiniBand supports its own protocol, the Sockets Direct Protocol (SDP). SDP is the standard wire
protocol of the InfiniBand Architecture (IBA) for supporting stream sockets (SOCK_STREAM) networking
over IBA.
Oracle supports two different versions of the SDP protocol. The first is native SDP support, where
the Oracle product implements SDP directly. The second, called Transparent SDP, is where the Oracle
product is configured to use TCP and the SDP protocol provider converts TCP connections to SDP
without any configuration changes at the Oracle level (sketches of both approaches follow below).
Native SDP provides support for asynchronous I/O, whereas Transparent SDP supports only
synchronous I/O.
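
With native SDP, Oracle Net names SDP directly as the transport in the connect descriptor. The sketch
below shows the idea; the alias, host, port, and service name are hypothetical, and the exact protocol
keyword and supported releases should be checked against the Oracle Net documentation for the version
in use.

    # tnsnames.ora entry using native SDP over InfiniBand (hypothetical names)
    ORCL_SDP =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = SDP)(HOST = dbserver-ib.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = orcl))
      )

With Transparent SDP, the Oracle configuration keeps its TCP descriptor and the SDP provider
intercepts the stream sockets underneath. On Linux this is commonly done by preloading the provider's
library; the library path, matching rules, and startup command below are assumptions that vary by SDP
provider and version.

    # Transparent SDP sketch (Linux): preload the SDP provider library so the
    # process's TCP stream sockets are converted to SDP transparently.
    # Library path and any libsdp.conf matching rules vary by provider/version.
    LD_PRELOAD=/usr/lib/libsdp.so <oracle-as-startup-command>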
Because it supports asynchronous I/O, native SDP supports one of the most important features of
InfiniBand: remote direct memory access (RDMA). RDMA is a communications technique that allows
data to be transmitted from the memory of one computer to the memory of another computer without
involving either computer's operating system.
¹ InfiniBand 10 Gbps bandwidth is represented as 4x, or four lanes of traffic, each at 2.5 Gbps.