(Figure 1). The VSM can run as a virtual machine on any Microsoft Hyper-V host or as a virtual service node on the Cisco Nexus 1010 and 1110. The VEM runs as a plug-in (extension) to the Microsoft Hyper-V switch in the hypervisor kernel, providing switching between virtual machines.
Cisco Nexus 1000V sees the VSMs and VEMs as modules. In the current release, a single VSM can manage up to 64 VEMs. The VSMs are always associated with slot numbers 1 and 2 in the virtual chassis. The VEMs are sequentially assigned to slots 3 through 66 based on the order in which their respective hosts were added to the Cisco Nexus 1000V Switch.
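For illustration, the slot assignments are visible in the output of the show module command on the VSM. The following is a representative sketch of that output; the values shown are illustrative, not taken from a real deployment:

    n1000v# show module
    Mod  Ports  Module-Type                       Model         Status
    ---  -----  --------------------------------  ------------  -----------
    1    0      Virtual Supervisor Module         Nexus1000V    active *
    2    0      Virtual Supervisor Module         Nexus1000V    ha-standby
    3    288    Virtual Ethernet Module           NA            ok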
For network administrators, the combination of the Cisco Nexus 1000V feature set and the capability to define a port profile using the same syntax as for existing physical Cisco switches helps ensure that consistent policy is enforced without the burden of having to manage individual virtual switch ports.
A VSEM is created by connecting to the VSM management IP address using the switch administrator credentials. In Figure 3, a Cisco Nexus 1000V VSM is being added as a VSEM by connecting to the switch management IP address of 10.10.1.10 using HTTP. A RunAs account called VSM Admin has been created with the switch administrator credentials.
Switch instance has been created. When Cisco Nexus 1000V is used with Microsoft SCVMM, a Logical Switch that uses the Cisco Nexus 1000V as a forwarding extension is created on Microsoft SCVMM. This Logical Switch is then instantiated on all Microsoft Hyper-V hosts on which virtual networking needs to be managed with Cisco Nexus 1000V (Figure 4).
Uplink profiles and port classifications are explained in the next sections of this document. Note: When a Cisco Nexus 1000V Logical Switch is created on Microsoft SCVMM, only one extension is used: the Cisco Nexus 1000V forwarding extension.
When the Cisco Nexus 1000V is used to manage the virtual access layer on Microsoft Hyper-V servers, the VSM administrator creates port profiles and network segments. The Microsoft SCVMM administrator uses the port profile created on the Cisco Nexus 1000V Switch for Microsoft Hyper-V to create a port classification.
In Figure 7, the Cisco Nexus 1000V administrator has defined a simple port profile called RestrictedProfile that applies an access control list (ACL) network policy.
Figure 7. Simple Port Profile Defined on the Cisco Nexus 1000V VSM
The Microsoft SCVMM administrator uses RestrictedProfile when creating a port classification. In Figure 8, the administrator is creating a port classification, also called RestrictedProfile, with only one port profile: the RestrictedProfile port profile defined on the VSM.
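For reference, a port profile of this kind might look as follows on the VSM. This is a minimal sketch; the ACL name and rules are illustrative assumptions, not the exact configuration shown in Figure 7:

    ip access-list RestrictedACL
      permit tcp any any eq 443
      deny ip any any
    port-profile type vethernet RestrictedProfile
      ip port access-group RestrictedACL in
      no shutdown
      state enabled
      publish port-profile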
Port classifications are similar to port groups defined in VMware vCenter for VMware ESX environments. However, in VMware vCenter, creation of a port profile on the Cisco Nexus 1000V results in the automatic creation of a port group, whereas in Microsoft SCVMM, the user has to manually create a port classification. The extra step is needed because a port classification can represent network policies from more than one provider.
When the Cisco Nexus 1000V is used to manage the virtual access layer on Microsoft Hyper-V, Logical Networks and Network Sites are created from the VSM. Network sites are referred to as network segment pools on the VSM because they are a collection of VLAN and IP subnets: that is, network segments. Figure 10 shows an example of how a Logical Network and network segment pool (Network Site) are created on Microsoft SCVMM.
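On the VSM, the corresponding objects are created with the network segmentation manager (NSM) commands. A minimal sketch, using the illustrative names DMZ, DMZ-SFO, and DMZ-NY (the same pool names reappear in the uplink network example later in this document):

    nsm logical network DMZ
    nsm network segment pool DMZ-SFO
      member-of logical network DMZ
    nsm network segment pool DMZ-NY
      member-of logical network DMZ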
An IP address from the pool is used as the static IP address on the virtual machine. When the Cisco Nexus 1000V is used to configure the Microsoft Hyper-V virtual network, the VSM administrator must define IP pools for a network segment.
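A sketch of an IP pool definition on the VSM follows; the template name and address range are assumptions for illustration:

    nsm ip pool template VMNetworkPool
      ip address 10.10.2.2 10.10.2.100
      network 10.10.2.0 255.255.255.0
      default-router 10.10.2.1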
Unlike a traditional Cisco switch, in which the management plane is integrated into the hardware, on the Cisco Nexus 1000V the VSM is deployed either as a virtual machine on a Microsoft Hyper-V server or as a virtual service blade (VSB) on the Cisco Nexus 1010 or 1110 appliance (Figure 15).
Some customers like to keep network management traffic in a network separate from the host management network. By default, the Cisco Nexus 1000V uses the management interface on the VSM to communicate with the VEM. However, this communication can be moved to the control interface by configuring server virtualization switch (SVS) mode to use the control interface.
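A minimal sketch of this change, assuming an existing SVS domain (the domain ID is illustrative):

    svs-domain
      domain id 100
      svs mode L3 interface control0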
Each instance of the Cisco Nexus 1000V is typically composed of two VSMs (in a high-availability pair) and one or more VEMs. The maximum number of VEMs supported by a VSM is 64.
Some customers prefer to move the Microsoft Hyper-V host management interface behind a Microsoft virtual switch and share the physical interface with other virtual machines. In this scenario, no special Cisco Nexus 1000V configuration is needed to enable VSM-to-VEM communication (Figure 20).
Host management traffic is typically untagged; therefore, when moving the management vNIC behind the Cisco Nexus 1000V, configure the management VLAN as the native VLAN on the uplink profile. Failure to do so may lead to loss of connectivity to the host.
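A sketch of an uplink Ethernet port profile that carries the management VLAN untagged; the profile name and VLAN ID are assumptions:

    port-profile type ethernet MgmtUplink
      switchport mode trunk
      switchport trunk native vlan 10
      no shutdown
      state enabled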
It is highly recommended that the user add only the management pNIC to the Cisco Nexus 1000V while moving the management NIC behind the VEM. Other pNICs can be added to the Cisco Nexus 1000V after the module successfully attaches to the VSM.
Cisco Nexus 1000V Switch Installation
Installation of the Cisco Nexus 1000V Switch is beyond the scope of this document. Figure 22 shows the Cisco Nexus 1000V installation steps at a high level for conceptual completeness. For guidance and detailed instructions about installation, please refer to the Cisco Nexus 1000V installation guide.
MAC address dynamically, through the pNICs in the server. Each VEM maintains a separate MAC address table. Thus, a single Cisco Nexus 1000V Switch may learn a given MAC address multiple times: as often as once per VEM. For example, one VEM may be hosting a virtual machine, and the virtual machine’s MAC address will be statically learned on the VEM.
Every ingress packet on a physical Ethernet interface is inspected to help ensure that the destination MAC address is internal to the VEM. If the source MAC address is internal to the VEM, the Cisco Nexus 1000V Switch will drop the packet. If the destination MAC address is external, the switch will drop the packet, preventing a loop back to the physical network.
Microsoft Hyper-V host. An Ethernet, or Eth, interface is represented in standard Cisco interface notation (EthX/Y) using the Cisco NX-OS naming convention “Eth” rather than a speed such as “Gig” or “Fast,” as is the custom in Cisco IOS Software. These Eth interfaces are module specific and are designed to be fairly static within the environment.
A port profile is a collection of interface-level configuration commands that are combined to create a complete network policy. The port profile concept is new, but the configurations in port profiles use the same Cisco syntax that is used to manage switch ports on traditional switches. The VSM administrator:...
Eth Port Profile Example
Uplink port profiles are applied to a pNIC when a Microsoft Hyper-V host is first added to the Cisco Nexus 1000V Switch. The Microsoft SCVMM administrator is presented with a dialog box in which the administrator selects the pNICs to be associated with the VEM and the specific uplink port profiles to be associated with those pNICs.
The network segment command is a new command introduced in the Cisco Nexus 1000V Switch for Microsoft Hyper-V. Network segments are used to create Layer 2 networks on the VSM. The first release of the Cisco Nexus 1000V Switch for Microsoft Hyper-V supports only VLAN-based network segments; other segmentation technologies are not supported.
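A sketch of a VLAN-based network segment, reusing the illustrative names from the earlier examples (the segment name and VLAN ID are assumptions):

    nsm network segment VLAN100-Web
      member-of network segment pool DMZ-SFO
      switchport access vlan 100
      ip pool import template VMNetworkPool
      publish network segment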
When a virtual machine is deployed using a Cisco Nexus 1000V port classification and virtual machine network, a dynamic port profile is created on the Cisco Nexus 1000V VSM and is applied to the virtual switch port on which the virtual machine is deployed. These dynamic port profiles are shared by all virtual machines that have the same virtual machine network and port classification.
In addition to migrating the policy, the Cisco Nexus 1000V Switches move the virtual machine’s network state, such as the port counters and flow statistics. Virtual machines participating in traffic monitoring activities, such as Cisco NetFlow or Encapsulated Remote Switched Port Analyzer (ERSPAN), can continue these activities uninterrupted by Microsoft live migration operations.
Infrastructure hosts typically run Microsoft Active Directory servers, DNS servers, SQL servers, and Microsoft SCVMM and other Microsoft System Center roles. The Cisco Nexus 1000V VSM virtual machine should also be deployed on an infrastructure host or cluster. The Cisco Nexus 1000V Logical Switch (VEM) is not created on the infrastructure hosts; instead, the native Microsoft Hyper-V switch is used.
Data Virtual Machine Cluster
The Cisco Nexus 1000V Logical Switch must be created only on Microsoft Hyper-V hosts that run workload virtual machines. As shown earlier in Figure 29, the Cisco Nexus 1000V Logical Switch (VEM) is not created on infrastructure hosts; it is created only on workload hosts.
Microsoft Hyper-V switch. The workload virtual machines are deployed on the Cisco Nexus 1000V Logical Switch. The Logical Switch will have at least two adapters connected as the switch uplinks. vPC host mode (explained in detail later in this document) is the recommended configuration for the Cisco Nexus 1000V uplinks to help ensure the high availability of the workload virtual machines.
Some Cisco UCS functions are similar to those offered by the Cisco Nexus 1000V Switches, but with a different set of applications and design scenarios. Cisco UCS offers the capability to present adapters to physical and virtual machines directly. This solution is a hardware-based Cisco Data Center Virtual Machine Fabric Extender (VM-FEX) solution, whereas the Cisco Nexus 1000V is a software-based VN-Link solution.
Cisco Nexus 1000V Switch uplinks. This configuration helps ensure that the uplinks are bound to a team. When a member link in the team fails, the Cisco Nexus 1000V VEM helps ensure that traffic from workload virtual machines fails over to one of the remaining links.
NIC failover configuration required in the OS, hypervisor, or virtual machine. The Cisco VIC adapters (the Cisco UCS M81KR VIC, VIC 1240, and VIC 1280) enable a fabric failover capability in which loss of connectivity on a path in use causes traffic to be remapped through a redundant path within Cisco UCS.
Another distinguishing feature of Cisco UCS is the capability of the VIC to perform CoS-based queuing in hardware. CoS is a value marked on Ethernet frames to indicate their priority in the network. The Cisco UCS VIC has eight traffic queues, which use CoS values of 0 through 7. The VIC also allows the network administrator to specify a minimum bandwidth that must be reserved for each CoS during congestion.
Cisco Nexus 1000V QoS configuration guide.
Upstream Switch Connectivity
The Cisco Nexus 1000V can be connected to any upstream switch, from Cisco or another vendor, that supports standards-based Ethernet; no additional capability is required on the upstream switch for the Cisco Nexus 1000V to function properly.
This clustering is transparent to the Cisco Nexus 1000V. When the upstream switches are clustered, the Cisco Nexus 1000V Switch should be configured to use LACP with one port profile, using all the available links. This configuration will make more bandwidth available for the virtual machines and accelerate Live Migration.
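A sketch of an uplink Ethernet port profile that bundles all member links with LACP; the profile name is an assumption:

    port-profile type ethernet Uplink-LACP
      channel-group auto mode active
      no shutdown
      state enabled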
Most access-layer switches do not support clustering technology, yet most Cisco Nexus 1000V designs require PortChannels to span multiple switches. The Cisco Nexus 1000V therefore offers several ways to connect to upstream switches that cannot be clustered. To enable this spanning of switches, it provides a PortChannel-like method, vPC host mode, that does not require configuration of a PortChannel upstream.
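A sketch of vPC host mode with MAC pinning on an uplink Ethernet port profile (the profile name is an assumption); no PortChannel configuration is needed on the upstream switches:

    port-profile type ethernet Uplink-MacPin
      channel-group auto mode on mac-pinning
      no shutdown
      state enabled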
However, this approach does not prevent the Cisco Nexus 1000V Switch from constructing a PortChannel on its side, providing the required redundancy in the data center in the event of a failure. If a failure occurs, the Cisco Nexus 1000V Switch will send a gratuitous ARP packet to alert the upstream switch that the MAC address of the VEM learned on the previous link will now be learned on a different link, enabling failover in less than a second.
PortChannel. These algorithms can be divided into two categories: source-based hashing and flow-based hashing. The load-balancing algorithm can be specified per VEM, so one VEM can implement flow-based hashing, taking advantage of the better load sharing that mode offers, while another VEM that is not connected to a clustered upstream switch can use MAC pinning and thus source-based hashing.
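As an illustration, and with the caveat that the exact syntax can vary by release, per-module load balancing might be configured as follows (the module numbers are assumptions):

    port-channel load-balance ethernet source-dest-ip-port module 3
    port-channel load-balance ethernet source-mac module 4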
Microsoft SCVMM console, Microsoft SCVMM can set the IP address and the default gateway on the virtual machines. When Cisco Nexus 1000V is used to manage virtual networking on Microsoft Hyper-V, the network administrator must define the IP-pool range to be used when virtual machines are deployed on a VLAN-based virtual machine network.
Create an uplink network. The network uplink command is new in the Cisco Nexus 1000V Switch for Microsoft Hyper-V. Each uplink network configured on the VSM is available as an uplink port profile to the Microsoft SCVMM administrator. The example below creates an uplink network that uses the Ethernet profile UplinkProfile and allows the network segment pools DMZ-SFO and DMZ-NY.
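A sketch of this configuration, assuming the UplinkProfile Ethernet profile and the DMZ-SFO and DMZ-NY segment pools defined earlier (the uplink network name is an assumption):

    nsm network uplink DMZ-Uplink
      import port-profile UplinkProfile
      allow network segment pool DMZ-SFO
      allow network segment pool DMZ-NY
      publish network uplink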
The comprehensive feature set of the Cisco Nexus 1000V allows the networking team to troubleshoot more rapidly any problems in the server virtualization environment, increasing the uptime of virtual machines and protecting the applications that propel the data center.