Crash course: Virtualization with KVM on Ubuntu Server
Learn how to get KVM running on Ubuntu Server, install multiple guests, manage storage, and migrate guests to new hosts
KVM, the Linux kernel hypervisor, is the up-and-coming enterprise virtualization contender. It's lean, mean, fast, and runs unmodified guest operating systems with ease. In this crash course we'll quickly get KVM up and running on Ubuntu Server, install multiple guests, manage storage, and migrate guests to new hosts.
KVM and Ubuntu Server
KVM, which stands for "kernel-based virtual machine", was first developed by Qumranet. Red Hat bought Qumranet in 2008 and made KVM the core of Red Hat Enterprise Virtualization for Servers. KVM is licensed under the GPL and is part of the mainline kernel, so any Linux distribution can support it. KVM is often classed as a Type 2 hypervisor because it runs inside an operating system, though since it is built into the kernel itself a good case can be made that it is Type 1. Other popular Type 2 hypervisors are VirtualBox and VMware Workstation. Type 1 hypervisors, such as Xen, IBM's z/VM, and VMware ESXi, run on the bare metal and don't need a host operating system. KVM supports pretty much any guest operating system: Linux, Mac OS X, Unix, Windows, and whatever else you have lying around.
Ubuntu Server, like KVM, is growing into an enterprise powerhouse. Ubuntu supports KVM on x86 and x86_64. Unlike Red Hat and Novell, the big two enterprise Linux vendors, you can download and test Ubuntu without having to register or wade through sales pitches. If you want training, commercial support, or online services like the Landscape systems manager or Ubuntu cloud services, they're there when you want them.
For this crash course you'll need an Ubuntu computer with an Intel VT or AMD-V CPU, because these include special extensions for native support of virtual machines. (See KVM's CPU support page for more information.) I'm using Ubuntu 11.04, Natty Narwhal, 64-bit for this article. I recommend using Ubuntu Server for your production KVM server, but for testing, any Ubuntu will do. Use this command to see if your x86 CPU has virtualization extensions:
$ egrep -o '(vmx|svm)' /proc/cpuinfo
vmx
vmx
This example shows a dual-core Intel CPU with virtualization support. You'll probably have to enable the virtualization extensions in your system BIOS; if they are disabled, KVM will not work.
You can run 32- or 64-bit guests on a 64-bit system, but you can run only 32-bit guests on a 32-bit system. Lots of memory is good, and so are multi-core CPUs. Ubuntu Server is frugal with system resources, which leaves more for your virtual guests. The minimum Ubuntu Server system requirements are a 300 MHz processor, 128 MB RAM, and 1 GB hard drive space. That is very minimal. For testing KVM, I recommend a minimum 2 GHz CPU, 2 GB RAM, and enough disk space for your guest operating systems plus data storage. Provisioning a production server is a bit imprecise. If you just add up the system requirements of all your guests, you'll have an over-provisioned machine, unless your guests run at full speed all the time. One of the benefits of virtual machines is using hardware more efficiently, because when one guest is idle another one is busy. If you under-provision and your server becomes overloaded, you can move guests to different hosts. So you have a lot of wiggle room, and don't need to get it perfect from the start.
Install these packages:
$ sudo apt-get install qemu-kvm libvirt-bin virt-manager bridge-utils
Then run this command to make sure your system is ready to run KVM:
$ kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
If anything is missing, it will tell you that KVM acceleration cannot be used. Run it with root privileges to get some hints for making it work, like this:
$ sudo kvm-ok
[sudo] password for carla:
INFO: /dev/kvm does not exist
HINT: sudo modprobe kvm_intel
INFO: Your CPU supports KVM extensions
INFO: KVM (vmx) is disabled by your BIOS
HINT: Enter your BIOS setup and enable Virtualization Technology (VT),
      and then hard poweroff/poweron your system
KVM acceleration can NOT be used
Oops. Like I said, make sure it is enabled in your system BIOS. Now add your user to the libvirtd group, then log out and log back in to activate your group membership. After that you can control KVM without permissions hassles. Run this command to verify that KVM is running:
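One way to add yourself to the group, assuming your release uses the group name libvirtd (newer Ubuntu releases renamed it to libvirt, so check with getent first):

```shell
# Confirm which libvirt group exists on your system
getent group libvirtd libvirt

# Add the current user to the libvirtd group (adjust the group
# name if your release uses "libvirt" instead)
sudo adduser $USER libvirtd
```

Remember that group changes only take effect in new login sessions.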
$ virsh -c qemu:///system list
 Id Name                 State
----------------------------------
Perfect! The Id, Name, and State columns are empty, as they should be. Now start the Virtual Machine Manager and connect to your KVM server with this command:
$ virt-manager -c qemu:///system kvmhost
kvmhost is my server name, so you must replace it with your server name. You will be rewarded with something like Figure 1 below.
There's not much going on yet because we haven't installed any guests. You can install new guest operating systems from CD/DVD, ISO images, netinstalls, and PXE boot. CD/DVD installations need an internal drive; USB drives don't work. Installations from ISO images are my favorite because they are fast, and you don't have to burn a disk. For network installations you'll need the URL of your installation server, and for PXE boot a TFTP/PXE boot server.
To install a new guest click the "Create a new virtual machine" button, and follow the screens. (To find an ISO image anywhere on your system, click the Browse button in screen 2, and then look at the bottom left of the "Locate ISO media volume" screen for the Browse Local button. Click this to open a filepicker.) On screen 5 be sure to check "Allocate entire disk now." This doesn't mean it will take over your whole disk, but will reserve all the space you allocated for the guest at once. If you don't select this, then KVM will allocate space as needed, up to the maximum allotted. Disk space is cheap these days, so it's not worth running the risk of data corruption from accidentally running out of room. Reserve the guest's full allotment from the start, and then you won't have to think about it anymore.
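If you prefer the command line, virt-install (from the virtinst package) can create the same kind of guest as the wizard. This is a sketch; the guest name, memory size, disk size, and ISO path are placeholders you must adjust for your own system:

```shell
# Create a guest named "suse1" with 1 GB RAM and a fully
# allocated 20 GB disk image, booting the installer from an ISO
sudo virt-install \
  --name suse1 \
  --ram 1024 \
  --vcpus 1 \
  --disk path=/var/lib/libvirt/images/suse1.img,size=20 \
  --cdrom /home/carla/isos/opensuse-dvd.iso
```

The installer console then opens in a virt-viewer window, just as it does when you use the Virtual Machine Manager wizard.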
The VM window may not be large enough to show the whole screen of your guest, so grab a corner with the cursor to drag it to fit. If the VM captures your mouse pointer press Ctrl+Alt to get it back. Figure 2 shows a normal OpenSUSE installation from an ISO image.
Figure 3 below shows three guest operating systems running at the same time with their consoles open, and the Virtual Machine Manager on top.
You can control each guest just as though it were installed on a separate machine, and start them up and shut them down as you like. Networking is enabled automatically, so your guests can access the Internet and your LAN. Each guest can be modified after installation by opening the guest's console, and then clicking on the blue information button (Figure 4). On this screen you can fine-tune CPU use and memory, view performance graphs, control boot options, set up peripherals, manage storage, and add new hardware devices. By default CD/DVD drives and USB storage devices are not accessible by the guests, so these have to be added manually.
The Virtual Machine Manager makes storage management easy and fast. Create additional storage pools by clicking Edit > Connection Details. This opens a screen with multiple tabs. The Storage tab shows your existing storage pools and lets you create new ones. Start in the left pane, and click the green cross to allocate a new block of storage (Figure 5). This can be a directory, block device, SCSI host adapter, network filesystem, LVM group, or an iSCSI target. Then you can divide this up however you like in the right-hand pane. Click the New Volume button, configure its size, and choose either the raw or qcow2 disk image format, because these work with all filesystems. raw is the default, and it is the fastest. qcow2 supports AES encryption, snapshots, and compression.
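The same work can be done with virsh. This sketch creates a directory-backed pool and a qcow2 volume inside it; the pool name, path, volume name, and size are placeholders:

```shell
# Define a directory-backed storage pool named "guest-pool"
virsh -c qemu:///system pool-define-as guest-pool dir --target /var/lib/libvirt/guest-pool

# Create the target directory, start the pool, and start it at boot
virsh -c qemu:///system pool-build guest-pool
virsh -c qemu:///system pool-start guest-pool
virsh -c qemu:///system pool-autostart guest-pool

# Carve a 20 GB qcow2 volume out of the new pool
virsh -c qemu:///system vol-create-as guest-pool vm1.qcow2 20G --format qcow2
```

Run virsh pool-list --all and vol-list guest-pool to confirm the results match what the Storage tab shows.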
You can migrate guests to a different host for load balancing, for software or hardware maintenance, or just because you can. There is one prerequisite for enabling migration: your hosts must use shared networked storage, such as NFS, Fibre Channel, or iSCSI. Whatever it is, both the source and destination hosts must already be using the same shared storage pool.
There are two types of migrations: offline and live. In an offline migration the guest is stopped, and then an image of the guest's memory is moved to the new host and restarted. In a live migration, KVM moves the guest's memory pages to the new host, monitors the old host for changes, and transfers these changes to the new host. When the pages are all copied and no changes occur for a configured period of time (10 milliseconds is the default), the guest is stopped on the old host and resumed on the new host. If the old host is busy, a live transfer can take a long time, or never complete; in that case stop it and do an offline migration instead. Only the contents of the guest's memory are moved; its disk storage is not moved.
Migrating a guest is then a few simple clicks. From the main Virtual Machine Manager console, right-click on the guest you want to move, then left-click Migrate. Check "migrate offline" if you want an offline move. The New Host dropdown menu will list all the available KVM hosts. Select the one you want to use, click the Migrate button, and you're done.
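The same migration can be triggered from the command line with virsh. In this sketch the guest name "suse1" and the destination host "kvmhost2" are placeholders; both hosts must already share the same storage pool:

```shell
# Live-migrate the running guest "suse1" to kvmhost2, tunneled over SSH
virsh -c qemu:///system migrate --live suse1 qemu+ssh://kvmhost2/system
```

Drop the --live option for an offline migration. Afterward, virsh list on the destination host should show the guest running there.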
Remote administration, CLI
Virtual Machine Manager supports remote administration. Install it on your favorite workstation or laptop, and then connect to your KVM server with this command:
$ virt-manager -c qemu+ssh://kvmhost/system
Replace kvmhost with your own server's hostname. This tunnels your session securely over SSH, so you'll need an SSH server running on your KVM server.
You may prefer to run your KVM server from the command line, which you can do. Consult the man pages for virt-manager, virsh, and qemu-kvm.
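A few everyday virsh commands to get you started; the guest name "suse1" is a placeholder for one of your own guests:

```shell
virsh -c qemu:///system list --all       # list all guests, running or not
virsh -c qemu:///system start suse1      # boot a guest
virsh -c qemu:///system shutdown suse1   # ask the guest to shut down cleanly
virsh -c qemu:///system dominfo suse1    # show CPU, memory, and state
```

You can also run virsh with no arguments to get an interactive shell, which saves retyping the connection URI.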
This concludes our crash course, but there's plenty more to learn such as security, the finer points of resource allocation, and best practices. Chapter 19 of the Ubuntu Server Guide is helpful, and the Red Hat Enterprise Linux 6 Virtualization Guide is the most thorough.