Cisco Nexus 1000V Deployment Guide
Some Cisco UCS functions are similar to those offered by the Cisco Nexus 1000V Switches, but with a different
set of applications and design scenarios. Cisco UCS offers the capability to present adapters to physical and
virtual machines directly. This solution is a hardware-based Cisco Data Center Virtual Machine Fabric Extender
(VM-FEX) solution, whereas the Cisco Nexus 1000V is a software-based VN-Link solution. This document does
not discuss the differences between the two solutions. This section discusses how to deploy the Cisco Nexus
1000V in a Cisco UCS blade server environment, including best practices for configuring the Cisco Nexus 1000V
for Cisco UCS. It also explains how some of the advanced features of both Cisco UCS and the Cisco Nexus
1000V facilitate the recommended deployment of the solution.
Cisco Virtual Interface Card
The Cisco UCS M81KR Virtual Interface Card (VIC), VIC 1240, and VIC 1280 are virtualization-optimized Fibre
Channel over Ethernet (FCoE) mezzanine cards designed for use with Cisco UCS B-Series Blade Servers. A VIC
is a dual-port 10 Gigabit Ethernet mezzanine card that supports up to 128 Peripheral Component Interconnect
Express (PCIe) standards-compliant virtual interfaces that can be dynamically configured so that both their
interface type (NIC or host bus adapter [HBA]) and identity (MAC address and worldwide name [WWN]) are
established using just-in-time provisioning. In addition, the Cisco VIC supports Cisco VN-Link technology, which
adds server-virtualization intelligence to the network. Each card has a pair of 10 Gigabit Ethernet connections to
the Cisco UCS backplane that support the IEEE 802.1 Data Center Bridging (DCB) function to facilitate I/O
unification within these adapters. On each adapter type, one of these backplane ports is connected through
10GBASE-KR to the A-side I/O module; then that connection goes to the A-side fabric interconnect. The other
connection is 10GBASE-KR to the B-side I/O module; that connection goes to the B-side fabric interconnect.
The Cisco UCS 6100 Series Fabric Interconnects operate in two discrete modes with respect to flows in Cisco
UCS. The first is assumed to be more common and is called end-host mode; the second is switched mode, in
which the fabric interconnect acts as a normal Ethernet bridge device. Discussion of the differences between
these modes is beyond the scope of this document; however, the Cisco Nexus 1000V Switches on the server
blades will operate regardless of the mode of the fabric interconnects. With respect to a Microsoft environment
running the Cisco Nexus 1000V Switch for Microsoft Hyper-V, the preferred solution is end-host mode to help
ensure predictable traffic flows.
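As a rough illustration of the recommendation above, the uplink mode can be inspected and changed from the UCS Manager CLI along the following lines. This is a hedged sketch; exact command paths and prompts vary by UCS Manager release, and changing the mode reboots the fabric interconnect.

```
UCS-A# scope eth-uplink
UCS-A /eth-uplink # show detail        ! displays the current Ethernet mode
UCS-A /eth-uplink # set mode end-host  ! recommended mode for predictable flows
UCS-A /eth-uplink # commit-buffer      ! applies the change; the FI will reboot
```

In end-host mode the fabric interconnect pins server-side vNICs to uplinks rather than running spanning tree toward the upstream network, which is why traffic paths remain deterministic.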
Service Profile Design
Service profiles in Cisco UCS allow the administrator to create a consistent configuration across multiple Cisco
UCS server blades. The service profile definition includes server identity information such as LAN and SAN
addressing, I/O configurations, firmware versions, boot order, network VLAN, physical ports, and quality-of-service
(QoS) policies. For more information about how to configure service profiles, please refer to the Cisco UCS
documentation.
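Service profiles are most often built in the UCS Manager GUI, but the equivalent CLI flow gives a feel for what a profile contains. The sketch below is hypothetical: the profile name, vNIC name, and MAC pool name are assumptions, and command syntax may differ slightly between releases.

```
UCS-A# scope org /
UCS-A /org # create service-profile HyperV-Host-1 instance
UCS-A /org/service-profile # create vnic eth-mgmt fabric a
UCS-A /org/service-profile/vnic # set identity mac-pool MAC-Pool-A  ! assumed pool name
UCS-A /org/service-profile/vnic # exit
UCS-A /org/service-profile # commit-buffer
```

Because identity (MAC addressing, WWNs) lives in the profile rather than on the hardware, the same profile can be associated with a different blade and the server retains its network identity.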
Using service profiles, the administrator can apply security and QoS policies across multiple vEth interfaces. A server running Microsoft Windows Server 2012 on a Cisco UCS B-Series blade sees these vNICs as physical adapters connected to the server.
When the Cisco Nexus 1000V is deployed on Cisco UCS B-Series blades, the recommended topology is the one
shown in Figure 31. In this topology, the Cisco Nexus 1000V Logical Switch sees traffic only from the workload
virtual machines. Because the infrastructure traffic goes directly to the vNICs, you should create separate service
profiles for these two types of interfaces.
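On the Cisco Nexus 1000V side, the separation described above is typically expressed with port profiles: an Ethernet (uplink) profile bound to the vNICs that carry workload traffic, and a vEthernet profile applied to the virtual machines. The following is a minimal sketch; the profile names and VLAN IDs are assumptions, and the Hyper-V edition additionally publishes these profiles to SCVMM through network segments.

```
! Uplink profile bound to the vNICs carrying VM traffic
port-profile type ethernet VM-Uplink
  switchport mode trunk
  switchport trunk allowed vlan 100-110   ! assumed workload VLAN range
  no shutdown
  state enabled

! vEthernet profile applied to workload virtual machines
port-profile type vethernet VM-Data
  switchport mode access
  switchport access vlan 100              ! assumed workload VLAN
  no shutdown
  state enabled
```

Infrastructure traffic (management, live migration, storage) stays on vNICs defined in the service profile and never transits these port profiles, which keeps the Logical Switch's view limited to workload traffic as Figure 31 shows.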
© 2013 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public Information.
Page 37 of 48