another comes from supporting the myriad of device drivers that are required, espe-
cially if different guest OSes are supported on the same VM system. The VM illu-
sion can be maintained by giving each VM generic versions of each type of I/O
device driver, and then leaving it to the VMM to handle real I/O.
The method for mapping a virtual I/O device to a physical one depends on the type of device. For example, physical disks are normally partitioned by the VMM to
create virtual disks for guest VMs, and the VMM maintains the mapping of vir-
tual tracks and sectors to the physical ones. Network interfaces are often shared
between VMs in very short time slices, and the job of the VMM is to keep track
of messages for the virtual network addresses to ensure that guest VMs receive
only messages intended for them.
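To make the disk example concrete, the following sketch shows one way a VMM might translate a guest's virtual sector numbers into physical ones when each virtual disk is a contiguous slice of the physical disk. The vdisk structure and vdisk_translate function are illustrative names only, not taken from any particular VMM.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical descriptor for one guest's virtual disk: a contiguous
   slice of the physical disk reserved by the VMM for that guest. */
struct vdisk {
    uint64_t phys_start;   /* first physical sector of the slice   */
    uint64_t num_sectors;  /* size of the virtual disk, in sectors */
};

/* Translate a guest-visible sector number into a physical sector.
   Returns 0 on success, -1 if the request falls outside the slice. */
static int vdisk_translate(const struct vdisk *vd,
                           uint64_t virt_sector, uint64_t *phys_sector)
{
    if (virt_sector >= vd->num_sectors)
        return -1;                  /* would touch another VM's sectors */
    *phys_sector = vd->phys_start + virt_sector;
    return 0;
}

int main(void)
{
    struct vdisk guest0 = { .phys_start = 1u << 20, .num_sectors = 1u << 21 };
    uint64_t phys;
    if (vdisk_translate(&guest0, 4096, &phys) == 0)
        printf("virtual sector 4096 -> physical sector %llu\n",
               (unsigned long long)phys);
    return 0;
}

The bounds check is what prevents one guest VM from reading or writing sectors that belong to another guest's virtual disk.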
An Example VMM: The Xen Virtual Machine
Early in the development of VMs, a number of inefficiencies became apparent.
For example, a guest OS manages its virtual-to-real page mapping, but this mapping is ignored by the VMM, which performs the actual mapping to physical pages. In other words, a significant amount of effort is wasted just to keep the guest OS happy. To reduce such inefficiencies, VMM developers decided that it might be worthwhile to let the guest OS be aware that it is running on a VM. For example, a guest OS could assume a real memory as large as its virtual memory, so that no memory management is required by the guest OS.
Allowing small modifications to the guest OS to simplify virtualization is
referred to as paravirtualization, and the open source Xen VMM is a good exam-
ple. The Xen VMM provides a guest OS with a virtual machine abstraction that is
similar to the physical hardware, but it drops many of the troublesome pieces. For
example, to avoid flushing the TLB, Xen maps itself into the upper 64 MB of the
address space of each VM. Xen allows the guest OS to allocate pages, checking only that the guest does not violate protection restrictions. To protect the guest OS from
the user programs in the VM, Xen takes advantage of the four protection levels
available in the 80x86. The Xen VMM runs at the highest privilege level (0), the
guest OS runs at the next level (1), and the applications run at the lowest privilege
level (3). Most OSes for the 80x86 keep everything at privilege levels 0 or 3.
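As a rough illustration of the kind of protection check just described, the sketch below rejects a guest-proposed mapping if the virtual address falls in the hypervisor's reserved upper 64 MB or if the machine page does not belong to the requesting guest. The structures, constants, and function names are invented for illustration; a real hypervisor performs such checks when it validates guest page-table updates, but the details differ.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative constants: a 32-bit address space whose top 64 MB is
   reserved for the hypervisor, as in the Xen layout described above. */
#define ADDR_SPACE_TOP   0x100000000ULL
#define HYPERVISOR_BASE  (ADDR_SPACE_TOP - (64ULL << 20))

/* Hypothetical per-guest record of the machine memory it owns. */
struct guest {
    uint64_t mach_base;    /* first machine address owned by this guest */
    uint64_t mach_limit;   /* one past the last machine address owned   */
};

/* Check a guest-proposed mapping: the virtual address must not fall in
   the hypervisor's reserved region, and the machine page must belong
   to the requesting guest. */
static bool mapping_is_allowed(const struct guest *g,
                               uint64_t virt_addr, uint64_t mach_addr)
{
    if (virt_addr >= HYPERVISOR_BASE)
        return false;      /* would overwrite the hypervisor's mapping */
    if (mach_addr < g->mach_base || mach_addr >= g->mach_limit)
        return false;      /* page belongs to another domain */
    return true;
}

int main(void)
{
    struct guest g = { .mach_base = 0x08000000, .mach_limit = 0x10000000 };
    printf("%d\n", mapping_is_allowed(&g, 0xFE000000, 0x08001000)); /* 0 */
    printf("%d\n", mapping_is_allowed(&g, 0x00400000, 0x08001000)); /* 1 */
    return 0;
}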
For this subsetting to work properly, Xen modifies the guest OS so that it does not use problematic portions of the architecture. For example, the port of Linux to Xen
changed about 3000 lines, or about 1% of the 80x86-specific code. These
changes, however, do not affect the application-binary interfaces of the guest OS.
To simplify the I/O challenge of VMs, Xen recently assigned a privileged virtual machine to each hardware I/O device. These special VMs are called driver
domains. (Xen calls its VMs “domains.”) Driver domains run the physical device
drivers, although interrupts are still handled by the VMM before being sent to the
appropriate driver domain. Regular VMs, called guest domains, run simple vir-
tual device drivers that must communicate with the physical device drivers in the
driver domains over a channel to access the physical I/O hardware. Data are sent
between guest and driver domains by page remapping.
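The split between the virtual (frontend) drivers in guest domains and the physical (backend) drivers in driver domains can be pictured as a request ring in shared memory plus page remapping for the data itself. The sketch below is a deliberately simplified single-producer, single-consumer version with invented names; the real Xen ring, grant-table, and event-channel interfaces differ in detail, and a real driver would also need memory barriers and explicit notifications.

#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 64   /* illustrative; a power of two keeps the math simple */

/* Hypothetical I/O request placed on the shared ring by a guest domain's
   frontend driver and consumed by the driver domain's backend driver. */
struct io_request {
    uint64_t sector;    /* virtual-disk sector to read or write */
    uint32_t page_ref;  /* reference to the guest page holding the data;
                           the driver domain remaps this page instead of
                           copying the data */
    uint8_t  write;     /* 1 = write, 0 = read */
};

/* A single-producer, single-consumer ring shared by the two domains. */
struct shared_ring {
    volatile uint32_t prod;            /* advanced by the guest domain  */
    volatile uint32_t cons;            /* advanced by the driver domain */
    struct io_request req[RING_SIZE];
};

/* Frontend (guest domain): enqueue a request if there is room. */
static int frontend_submit(struct shared_ring *r, struct io_request rq)
{
    if (r->prod - r->cons == RING_SIZE)
        return -1;                     /* ring is full */
    r->req[r->prod % RING_SIZE] = rq;
    r->prod++;                         /* a real driver would now notify the
                                          backend over an event channel */
    return 0;
}

/* Backend (driver domain): dequeue the next request, if any. */
static int backend_poll(struct shared_ring *r, struct io_request *out)
{
    if (r->cons == r->prod)
        return -1;                     /* ring is empty */
    *out = r->req[r->cons % RING_SIZE];
    r->cons++;
    return 0;
}

int main(void)
{
    /* In reality the ring lives in a page shared between two domains;
       here both ends run in one process just to exercise the code. */
    static struct shared_ring ring;
    struct io_request rq = { .sector = 4096, .page_ref = 7, .write = 0 }, got;
    frontend_submit(&ring, rq);
    if (backend_poll(&ring, &got) == 0)
        printf("backend saw request for sector %llu (page ref %u)\n",
               (unsigned long long)got.sector, got.page_ref);
    return 0;
}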