Hyper-V features a hypervisor-based architecture that is shown in Figure 7. The hypervisor virtualizes processors and memory and provides mechanisms for the virtualization stack in the root partition to manage child partitions (VMs) and expose services such as I/O devices to the VMs. The root partition owns and has direct access to the physical I/O devices. The virtualization stack in the root partition provides a memory manager for VMs, management APIs, and virtualized I/O devices. It also implements emulated devices such as Integrated Device Electronics (IDE) and PS/2 but supports synthetic devices for increased performance and reduced overhead.
[Figure 7 depicts the root partition (I/O stack, drivers, VSPs) and one or more child partitions (I/O stack, VSCs) running above the hypervisor, which manages devices, processors, and memory; the partitions communicate over VMBus and shared memory, and OS kernel enlightenments apply to Windows Server 2008 and later guests.]
Figure 7. Hyper-V Hypervisor-Based Architecture
The synthetic I/O architecture consists of VSPs in the root partition and VSCs in the child partition. Each service is exposed as a device over VMBus, which acts as an I/O bus and enables high-performance communication between VMs that use mechanisms such as shared memory. Plug and Play enumerates these devices, including VMBus, and loads the appropriate device drivers (VSCs). Services other than I/O are also exposed through this architecture.
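Whether a guest is actually using the synthetic path can be checked from inside a Windows guest by looking for the VSC driver services. The sketch below is a minimal illustration, assuming a Windows guest with the integration components installed; the service names (vmbus, netvsc, storvsc) and the use of sc.exe are assumptions chosen for the example, not something this guide specifies.

```python
import subprocess

# VSC driver services that back the synthetic (VMBus) device path in a
# Windows guest. These service names are assumptions for illustration.
VSC_SERVICES = ["vmbus", "netvsc", "storvsc"]

def service_state(name: str) -> str:
    """Return the STATE line reported by 'sc query <name>', or 'NOT FOUND'."""
    result = subprocess.run(["sc", "query", name],
                            capture_output=True, text=True)
    for line in result.stdout.splitlines():
        if "STATE" in line:
            return line.strip()
    return "NOT FOUND"

if __name__ == "__main__":
    for svc in VSC_SERVICES:
        print(f"{svc:8} -> {service_state(svc)}")
```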
Windows Server 2008 and Windows Server 2008 R2 feature enlightenments to the operating system to optimize its behavior when it is running in VMs. The benefits include reducing the cost of memory virtualization, improving multiprocessor scalability, and decreasing the background CPU usage of the guest operating system.
Server Configuration
This section describes best practices for selecting hardware for virtualization servers and installing and setting up Windows Server 2008 R2 for the Hyper-V server role.
Hardware Selection
The hardware considerations for Hyper-V servers generally resemble those of non-virtualized servers, but Hyper-V servers can exhibit increased CPU usage, consume more memory, and need larger I/O bandwidth because of server consolidation. For more information, refer to “Choosing and Tuning Server Hardware” earlier in this guide.
Processors.
Hyper-V in Windows Server 2008 R2 presents the logical processors as one or more virtual processors to each active virtual machine. You can achieve additional run-time efficiency by using processors that support Second Level Address Translation (SLAT) technologies such as EPT or NPT.
Hyper-V in Windows Server 2008 R2 adds support for deep CPU idle states, timer coalescing, core parking, and guest idle state. These features improve energy efficiency compared with previous versions of Hyper-V.
Cache.
Hyper-V can benefit from larger processor caches, especially for loads that have a large working set in memory and in VM configurations in which the ratio of virtual processors to logical processors is high.
Memory.
The physical server requires sufficient memory for the root and child partitions. Hyper-V first allocates the memory for child partitions, which should be sized based on the needs of the expected load for each VM. Having additional memory available allows the root to efficiently perform I/Os on behalf of the VMs and operations such as a VM snapshot.
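As a rough capacity check, the memory needed on the host is the sum of the memory assigned to the child partitions plus a per-VM overhead and a reserve for the root partition. The sketch below only illustrates that arithmetic; the overhead and reserve figures are assumptions chosen for the example, not values from this guide.

```python
# Rough memory-sizing sketch for a Hyper-V host. The per-VM overhead and
# the root-partition reserve below are illustrative assumptions, not
# figures taken from this guide.
PER_VM_OVERHEAD_MB = 64     # assumed virtualization overhead per VM
ROOT_RESERVE_MB = 1024      # assumed reserve for the root partition

def required_host_memory_mb(vm_memory_mb):
    """Return the total physical memory (MB) a host should have for the
    given list of per-VM memory sizes (MB)."""
    vm_total = sum(vm_memory_mb)
    overhead = PER_VM_OVERHEAD_MB * len(vm_memory_mb)
    return vm_total + overhead + ROOT_RESERVE_MB

# Example: four VMs sized based on their expected loads.
vms = [4096, 4096, 2048, 8192]
print(f"Suggested minimum host memory: {required_host_memory_mb(vms)} MB")
```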
Networking.
If the expected loads are network intensive, the virtualization server can benefit from having multiple network adapters or multiport network adapters. Each network adapter is assigned to its own virtual switch, allowing each virtual switch to service a subset of virtual machines. When hosting multiple VMs, using multiple network adapters allows for distribution of the network traffic among the adapters for better overall performance.
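As an illustration of spreading traffic across adapters, the following sketch assigns VMs to virtual switches (one per physical network adapter) so that expected traffic is balanced. It is only a toy greedy assignment; the VM names, traffic estimates, and switch names are invented, and the actual assignment is made through the Hyper-V management tools, not this code.

```python
import heapq

# Toy greedy balancer: assign each VM to the virtual switch (one per
# physical adapter) that currently carries the least expected traffic.
# VM names and Mbps estimates are invented for illustration.
vms = {"web1": 400, "web2": 350, "sql1": 900, "file1": 600, "app1": 250}
switches = ["vSwitch-NIC1", "vSwitch-NIC2"]

# Min-heap of (current load in Mbps, switch name).
load = [(0, sw) for sw in switches]
heapq.heapify(load)
assignment = {}

for vm, mbps in sorted(vms.items(), key=lambda kv: -kv[1]):
    current, sw = heapq.heappop(load)          # least-loaded switch
    assignment[vm] = sw
    heapq.heappush(load, (current + mbps, sw))

for vm, sw in assignment.items():
    print(f"{vm:6} -> {sw}")
```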
To reduce the CPU usage of network I/Os from VMs, Hyper-V can use hardware offloads such as Large Send Offload (LSOv1), TCPv4 checksum offload, Chimney, and VMQ.
For details on network hardware considerations, see “Performance Tuning for the Networking Subsystem” earlier in this guide.
Storage.
The storage hardware should have sufficient I/O bandwidth and capacity to meet current and future needs of the VMs that the physical server hosts. Consider these requirements when you select storage controllers and disks and choose the RAID configuration. Placing VMs with highly disk-intensive workloads on different physical disks will likely improve overall performance. For example, if four VMs share a single disk and actively use it, each VM can yield only 25 percent of the bandwidth of that disk. For details on storage hardware considerations and discussion on sizing and RAID selection, see “Performance Tuning for the Storage Subsystem” earlier in this guide.
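The 25 percent figure follows directly from dividing a disk's bandwidth evenly among the VMs that are actively using it. The short sketch below restates that arithmetic; the bandwidth number is an illustrative assumption.

```python
# Per-VM share of a physical disk's bandwidth when the disk is shared
# evenly by several actively I/O-bound VMs. Numbers are illustrative.
def per_vm_bandwidth(disk_mb_per_s: float, active_vms: int) -> float:
    return disk_mb_per_s / active_vms

disk_bw = 120.0   # assumed bandwidth of one physical disk, MB/s
for n in (1, 2, 4):
    share = per_vm_bandwidth(disk_bw, n)
    print(f"{n} active VMs sharing the disk -> {share:.0f} MB/s each "
          f"({100 / n:.0f}% of the disk)")
```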
Server Core Installation Option
Windows Server 2008 and Windows Server 2008 R2 feature the Server Core installation option. Server Core offers a minimal environment for hosting a select set of server roles, including Hyper-V. It has a smaller disk footprint, memory profile, and attack surface. Therefore, we highly recommend that Hyper-V virtualization servers use the Server Core installation option. Using Server Core in the root partition leaves additional memory for the VMs to use (approximately 80 MB for commit charge on 64-bit Windows).
Server Core offers a console window only when the user is logged on, but Hyper-V exposes management features through WMI so administrators can manage it remotely (for more information, see "Resources" later in this guide).
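For example, a Server Core host can be enumerated remotely with any WMI client. The sketch below uses the third-party Python wmi package against the root\virtualization namespace that Hyper-V uses in Windows Server 2008 R2; the package choice and the host name are assumptions for illustration.

```python
# Remote enumeration of VMs through the Hyper-V WMI provider
# (root\virtualization namespace in Windows Server 2008 R2).
# Requires the third-party 'wmi' and 'pywin32' packages; the host name
# below is a placeholder.
import wmi

HOST = "hyperv-host01"   # placeholder Server Core host name

conn = wmi.WMI(computer=HOST, namespace=r"root\virtualization")

# Msvm_ComputerSystem instances cover the host itself and each VM;
# the Caption property distinguishes the two.
for system in conn.Msvm_ComputerSystem():
    print(f"{system.ElementName:30} {system.Caption}")
```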
Dedicated Server Role
The root partition should be dedicated to the virtualization server role. Additional server roles can adversely affect the performance of the virtualization server, especially if they consume significant CPU, memory, or I/O bandwidth. Minimizing the server roles in the root partition has additional benefits such as reducing the attack surface and the frequency of updates.
System administrators should consider carefully what software is installed in the root partition because some software can adversely affect the overall performance of the virtualization server.
Hyper-V supports and has been tuned for a number of different guest operating systems. The number of virtual processors that are supported per guest depends on the guest operating system. See “Resources” later in this guide or the documentation provided with Hyper-V for a list of the supported guest operating systems and the number of virtual processors supported for each operating system.
CPU Statistics
Hyper-V publishes performance counters to help characterize the behavior of the virtualization server and break out the resource usage. The standard set of tools for viewing performance counters in Windows includes Performance Monitor (Perfmon.exe) and Logman.exe, which can display and log the Hyper-V performance counters. The names of the relevant counter objects are prefixed with “Hyper-V.”
You should always measure the CPU usage of the physical system by using the Hyper-V Hypervisor Logical Processor performance counters. The CPU utilization counters that Task Manager and Performance Monitor report in the root and child partitions do not accurately capture the CPU usage. Use the following performance counters to monitor performance:
\Hyper-V Hypervisor Logical Processor(*)\% Total Run Time
The counter represents the total non-idle time of the logical processor(s).
\Hyper-V Hypervisor Logical Processor(*)\% Guest Run Time
The counter represents the time spent executing cycles within a guest or within the host.
\Hyper-V Hypervisor Logical Processor(*)\% Hypervisor Run Time
The counter represents the time spent executing within the hypervisor.
\Hyper-V Hypervisor Root Virtual Processor(*)\*
The counters measure the CPU usage of the root partition.
\Hyper-V Hypervisor Virtual Processor(*)\*
The counters measure the CPU usage of guest partitions.
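These counters can be sampled from the command line with the in-box typeperf.exe tool (or logged with Logman.exe). The sketch below wraps a short typeperf run from Python purely as an illustration; the sample interval and count are arbitrary choices.

```python
import subprocess

# Sample the Hyper-V logical-processor counters with the in-box
# typeperf.exe tool. Interval and sample count are arbitrary choices.
counters = [
    r"\Hyper-V Hypervisor Logical Processor(*)\% Total Run Time",
    r"\Hyper-V Hypervisor Logical Processor(*)\% Guest Run Time",
    r"\Hyper-V Hypervisor Logical Processor(*)\% Hypervisor Run Time",
]

# -si: seconds between samples, -sc: number of samples to collect.
result = subprocess.run(["typeperf", *counters, "-si", "2", "-sc", "5"],
                        capture_output=True, text=True)
print(result.stdout)
```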