Friday 4 January 2013

hyper-v studies

Installing and configuring host and parent settings

Before adding the hyper-v role

Before adding the Hyper-V role you need to spec your VM hosts: how many VMs will you need, and how much RAM, CPU and storage will they require.

You can see a full list of supported VM guest OSes here; make sure your guests are on it:
http://technet.microsoft.com/en-us/library/cc794868(WS.10).aspx
Just because a guest OS isn't on the list doesn't mean it won't work, but if you have an issue and need to call Microsoft they will not support it.

Hyper-v integration services
Always install the Hyper-V integration services where possible. FYI: when you upgrade the kernel on Linux systems you need to update the Hyper-V integration services as well. I have a blog post about doing this.

BIOS settings for the Hyper-V hosts
You need to turn on the following options in the BIOS on the Hyper-V hosts. They are sometimes enabled by default but should still be checked.

Hardware assisted virtualization
AMD: AMD-V
Intel: VT

Hardware-enforced data execution prevention (DEP)
AMD: NX (no-execute bit)
Intel: XD (execute disable bit)

You can get Itanium CPUs to work with virtualization, but it's pointless: you would need a third-party hypervisor and the platform is end of life.
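
If you want a rough check from within Windows rather than rebooting into the BIOS, the Sysinternals Coreinfo tool (a separate free download, not part of the OS) plus WMI will do; a small sketch:

REM Coreinfo: -v dumps virtualization-related features (VT-x/AMD-V, SLAT)
coreinfo -v
REM DEP/NX availability as reported by Windows
wmic OS get DataExecutionPrevention_Available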

Adding the hyper-v role

A role is a primary function of the server; features enhance the roles on the server. Hyper-V is a role, failover clustering is a feature. To add the Hyper-V role:

GUI:
Use Server Manager -> Add Roles -> add the Hyper-V role

Command line:
DISM /online /enable-feature /featurename:Microsoft-Hyper-V
start /w ocsetup Microsoft-Hyper-V

Powershell:
Import-Module ServerManager
Add-WindowsFeature Hyper-V
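
To confirm the role took after the required reboot, a quick check from the same ServerManager module:

Get-WindowsFeature Hyper-V   # the Hyper-V row shows whether the role is installed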

Hyper-v server
Hyper-V Server is a special OS just for Hyper-V hosts. It doesn't include other roles, but it does include the failover clustering feature. Hyper-V Server is free, but you don't get any licenses for the guest OSes; with Hyper-V on Windows Server Datacenter edition you get unlimited guest OS licenses. The Hyper-V role is turned on by default. There is a text-based interface (sconfig) on Hyper-V Server which you should use to set up remote management, then manage the server from your desktop.

Enabling hyper-v by SCVMM
As soon as you add a host to SCVMM it will check that the Hyper-V role is enabled; if not, it will enable it.

Securing the hyper-v hosts
Isolate the host and guest networks
Use Server Core (smaller attack surface, fewer patches)
Server Core can also be used for web servers in the DMZ etc.
BitLocker doesn't work with clustered disks, so keep your hosts in a physically secure environment

Enable remote management

Remoting with Hyper-V Manager
Right click -> Connect to Server

Remoting with RDP
start -> run -> mstsc
Connect to hosts or guest VMs

Remote Server Admin Tools
http://www.microsoft.com/en-us/download/details.aspx?id=7887
Once installed you can add Hyper-V, failover clustering and more to Administrative Tools

Remoting with RD connection manager
http://www.microsoft.com/en-us/download/details.aspx?id=21101
You can build a tree of hosts -> guests
Gives you a little thumbnail of each console; you can click on one to connect

Deploying the VMM agent with SCVMM
Default ports are 80 (management) and 443 (data)
Some users change these defaults for security reasons
DMZ servers need a local install
DMZ service is given a randomized name

Configuring firewall rules
Installing Hyper-V usually configures the firewall rules automatically
netsh advfirewall show rule name=all dir=in > fwrules.txt
notepad fwrules.txt
On Server Core use the SConfig tool
Clustering the VMM library requires the 'Remote Volume Management' firewall rule group to be enabled on all nodes
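
For example, that rule group can be enabled from an elevated prompt on each node (the group name below is the English display name):

netsh advfirewall firewall set rule group="Remote Volume Management" new enable=yes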

VMM Firewall ports
WinRM: 80
SMB: 445
DCOM: 135
TDS: 1433 (Tabular Data Stream, SQL Server)
WCF: 8100 (VMM Self-Service Portal web server to VMM server)
HTTPS: 443
BITS: 443
VMRC: 5900
VMConnect/RDP to VM hosts: 2179
RDP to VMs: 3389
SFTP: 22

Configure virtual networks and VLAN security

Virtual network manager
There are three network types:
  • External (Everything, other physical servers on the network)
  • Internal (VMs on the same host and the VM host)
  • Private (Only VMs on the same host can communicate, good for testing)
You are only allowed one external network per physical NIC on the host.
Internal networks connect VMs on the same host; those VMs can also connect to the VM host.
Private networks let VMs on the same host communicate with each other only.

You can enable VLAN tagging. If you have 4 NICs you'll probably keep one for management, but you can completely isolate the other 3 networks with VLAN tagging.
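
If your hosts run Windows Server 2012 or later, the built-in Hyper-V PowerShell module can create the same network types and do the VLAN tagging; on 2008 R2 this is all done in Virtual Network Manager. A rough sketch, where the adapter, switch and VM names are just examples:

# External network bound to a physical NIC, shared with the management OS
New-VMSwitch -Name "External-LAN" -NetAdapterName "NIC2" -AllowManagementOS $true
# Internal and private networks
New-VMSwitch -Name "Internal" -SwitchType Internal
New-VMSwitch -Name "Private-Test" -SwitchType Private
# Put a VM's adapter on VLAN 20
Set-VMNetworkAdapterVlan -VMName "TestVM" -Access -VlanId 20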

Configuring MAC addresses
Each VM host has a pool of MAC addresses which it automatically assigns to VMs as they are created.
VMs are assigned a dynamic MAC address by default.
Within a host the addresses are checked for conflicts, but they are not checked between hosts that don't talk to each other, so a MAC conflict is rare but possible. If you are using SCVMM it will check for conflicts across all managed hosts.

VLANs can be used for security. VLAN tagging can only be used with external and internal networks. Most of the time you will use it with an external network and assign it to a physical NIC. It can be a good idea to rename your NICs so it's obvious what they are used for.

If you only have one NIC (or limited NICs) on your host you can tick 'Allow management operating system to share this network adapter'. This allows the host OS to do VLAN tagging and management on the single NIC.

I have seen issues with Linux servers where the MAC address is hard-coded in the guest's network configuration; when the VM is migrated to another host it is assigned a new dynamic MAC address, which breaks networking. To resolve this we needed to use static MACs.

Use static MAC addresses for DHCP
Use MAC address spoofing for NLB
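
On Windows Server 2012 or later both settings can be scripted with the Hyper-V module (on 2008 R2 they live in the VM's network adapter settings); the VM names and MAC below are just examples:

# Pin a static MAC (Microsoft range 00-15-5D) for the DHCP server VM
Set-VMNetworkAdapter -VMName "DHCP01" -StaticMacAddress "00155D010203"
# Allow MAC address spoofing for an NLB cluster member
Set-VMNetworkAdapter -VMName "NLB01" -MacAddressSpoofing On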

Configure storage

Planning for disks and storage
Hyper-V hosts can use:
Direct attached storage (DAS): disks are in the hosts
Storage area networks (SAN)
  • Required for failover clustering so all VM hosts can access a disk
  • Host clustering: Fibre Channel, FCoE, Serial Attached SCSI (SAS), iSCSI
  • Guest Clustering: iSCSI

VMs require storage for:
  • Virtual hard disk files (VHD)
  • Snapshots (AVHD)
  • Failover clustering
  • Application data files
Fibre Channel - fast and expensive; working with fibre cables/connectors can be painful
iSCSI - cheaper but not as fast; uses Ethernet; simple to set up

Hyper-V currently does not have a way (a virtualized HBA that supports Fibre Channel) to attach a VM directly to a Fibre Channel disk; there is no path from VM -> hypervisor -> storage protocol -> physical storage device. If you need shared storage inside a guest you have to use iSCSI.

iSCSI will never be as fast as Fibre Channel, but performance is decent for most small to medium implementations. Make sure you are using switches (and cables) that are fit for purpose: gigabit Ethernet with jumbo frame support. You may need to configure the NICs on your VM hosts to use jumbo frames.
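
A quick way to confirm jumbo frames are actually working end to end (the target IP is an example):

REM Show the MTU configured on each interface
netsh interface ipv4 show subinterfaces
REM Don't-fragment ping with an 8000-byte payload to the iSCSI target; if it fails, jumbo frames aren't working end to end
ping -f -l 8000 192.168.10.50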

VM storage
Default locations
VHDs: C:\Users\Public\Documents\Hyper-V\virtual hard disks
VMs (config files): C:\ProgramData\Microsoft\Windows\Hyper-V
  • Virtual Machines (XML file)
  • Snapshots (.avhd)
Considerations
Performance
Hard drive space
Security
Shared storage for failover clustering

Multipath I/O (MPIO)
  • Multiple read/write paths from the VM to the storage
  • Provides redundant failover and load balancing support for disks or LUNs
  • Supports bandwidth aggregation
  • Distribute I/O transactions across multiple adapters
  • It is a Windows Server feature which can be added (look for the latest updates/hotfixes)
Launch the console
start -> run -> mpiocpl
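
The feature itself can also be added from PowerShell on 2008 R2 (mpclaim.exe is the command-line counterpart to the mpiocpl console if you need to claim devices without the GUI); a small sketch:

Import-Module ServerManager
Add-WindowsFeature Multipath-IO   # reboot if prompted, then configure via mpiocpl or mpclaim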

MPIO Devices
Lists devices and allows adding/removing devices

Discover multi paths
Allows management of device instances and adding device IDs for Fibre Channel devices

DSM Install
Install/uninstall vendor or third-party device-specific modules (DSMs); a DSM could come from Dell etc. to allow MPIO to work with their storage array

Configuration Snapshot
Allows you to capture a snapshot of the current MPIO configuration

iSCSI
Cheap simple storage solution.
Support for failover clustering
Required for guest failover clustering
Uses the existing IP network.
Can be a storage array or DAS on a server by using the MS iSCSI target.
You can download the MS iSCSI target here:
http://www.microsoft.com/en-us/download/details.aspx?id=19867
Many of the skills from working with networking, Ethernet, switches etc. transfer to iSCSI. It's just the SCSI protocol over IP.

iSCSI target
This sits on the device (SAN or server) where the storage is. SANs have the target built in.

iSCSI Initiator
The initiator connects to the iSCSI target (the target must already be configured)
Should use a dedicated NIC (it's a good idea to rename NICs to iSCSI1 etc.)
Can connect to any iSCSI target

Setting up simple connections
iSCSI target
Set up the MS iSCSI target on a server
You will see the disks that are presented in the iSCSI Target console (Administrative Tools)

iSCSI Initiator
I find it's best not to use the Quick Connect button (the wrong NIC may be selected); use the Connect button instead and select the Microsoft iSCSI initiator, the NIC to use and the IP of the target.

  • Target: Create virtual disk
  • Initiator: Request access to disks
  • Target: Accept access request from initiators
  • Initiators: Refresh configuration to check connection
  • Initiators: Log in to the target (enable automatic reconnection)
  • Servers: Initialize, format and bring the disks online (Disk Management)
  • Now you can use these disks for your VMs or cluster
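On Server Core there is no Disk Management console, so the initialize/format step can be scripted with diskpart; a rough sketch assuming the new LUN shows up as disk 1 (the label and drive letter are just examples):

rem Save as prepare-disk.txt and run: diskpart /s prepare-disk.txt
select disk 1
online disk
attributes disk clear readonly
rem Use 'convert mbr' or 'convert gpt' first if the disk is brand new
create partition primary
format fs=ntfs quick label="VMStore"
assign letter=V
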
Executing iscsicli.exe commands
iscsicli is the CLI for iSCSI (needed on Server Core)
http://blogs.msdn.com/b/san/archive/2008/07/27/iscsi-initiator-command-line-reference-and-server-core-configuration.aspx
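
A minimal sketch of the same connection flow with iscsicli (the portal IP and target IQN are example values):

REM Register the target portal, see what it exposes, then log in
iscsicli QAddTargetPortal 192.168.10.50
iscsicli ListTargets
iscsicli QLoginTarget iqn.1991-05.com.microsoft:san01-vmstore-target
REM Use PersistentLoginTarget (see the reference above) to make the login survive reboots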


Configuring child/guest settings

Hyper-v manager
Hyper-V Manager can be found in Start -> Administrative Tools
You may need to install it via the RSAT tools if using a desktop OS.
You can connect to remote Hyper-V hosts: right click -> Connect to Server

Hyper-v settings
Can be accessed by right clicking on a host and selecting hyper-v settings
Default locations for virtual hard disks, virtual machine config files
NUMA Spanning
User related settings like keyboard/mouse and credentials

Virtual network manager
View the types of networks available
Add networks
View the MAC address pool

Type 2 hypervisor
First type of hypervisor that came out
Guest OS VMs -> Hypervisor -> Host OS -> Hardware
Examples: Virtual PC & Virtual Server, VMware Workstation, KVM

Type 1 hypervisor
Guest OS VMs -> Hypervisor -> hardware
Examples: Hyper-v, Xen, VMware ESX

When you enable the Hyper-V role on a server, the hypervisor inserts itself between the hardware and the OS. The kernel generally runs at ring 0 on the CPU; the Hyper-V hypervisor runs at ring -1.

Classes of type 1 hypervisor
Monolithic (VMware ESX)
Management OS and Guest OS's run above the hypervisor
Drivers run at the hypervisor level
VMware is responsible for maintaining the drivers
Hardware compatibility list is critical when deploying ESX servers

Microkernel (Hyper-V)
Management OS partition, Guest OS child partition, virtualization stack and drivers run above the hypervisor
The drivers are the standard drivers supported by Windows
Drivers run in the management (parent) partition rather than in the hypervisor
Guest OSes access them over the VMBus
Larger selection of drivers, easier to get updates

VM settings
Right click on a VM
BIOS
Memory
CPU
Hard drive
Ethernet
Com ports
Floppy disk drive
The boot drive must be an IDE hard drive
Other disks can be SCSI, but the boot drive must be IDE
Integration Services (SCSI controllers, synthetic NICs)
Management options (what to do when the VM is started/stopped)

Hard disk types
  • Fixed disk - If you create an 80GB disk it consumes 80GB of disk space on your storage. Can provide better performance for applications with high levels of disk activity.
  • Dynamically expanding disk - Grows as your data usage grows; if you create an 80GB disk it will initially use only about 256KB and will grow as your data usage on the virtual disk grows.
  • Differencing disk - Points at a disk you have already created and treats it as the base disk. The base disk becomes read-only and all changes go into the differencing disk (the same idea as snapshots). This can be useful if you create a base OS install and give each user their own differencing disk on top of the shared base, which also reduces the total file size. However, if there is an issue with the base disk it can affect many users.
The issue mentioned above is the reason why MS do not recommend using snapshots as a method for backing up production systems.
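
With the Hyper-V module in Windows Server 2012 and later (on 2008 R2 you would use the New Virtual Hard Disk wizard instead), creating each type looks roughly like this; the paths are just examples:

# Fixed: space is allocated up front
New-VHD -Path D:\VHDs\app01.vhd -SizeBytes 80GB -Fixed
# Dynamically expanding: grows with use
New-VHD -Path D:\VHDs\app02.vhd -SizeBytes 80GB -Dynamic
# Differencing: all changes land here; the parent is treated as the read-only base
New-VHD -Path D:\VHDs\app03-diff.vhd -ParentPath D:\VHDs\base-w2k8r2.vhd -Differencing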

Storage options for virtual machines
You can have 2 IDE controllers
Each controller can have 2 devices
That's a total of 4 IDE devices per VM




Managing and monitoring virtual environments

Ensuring high availability and recoverability

Performing migration

Configuring remote desktop Role Services Infrastructure

Active Directory domain controllers can be virtualised, but you cannot use snapshots on them, and it is advised that you keep some physical DCs as well. Make sure you have full backups including system state running, and that NTP is set up and working. There seems to be a wide difference of opinion here: some people say do not virtualise DCs, others do, and some mix virtual and physical.
Exchange can be virtualised, but only certain versions on certain OSes are supported by MS, so check that before attempting it. Also, unified communications is not supported virtualised.
