bling numbers on a note pad, or a computer running
a program. One of the drawbacks of the monkey method
is the time it takes to arrive at a solution [24.2]. The
human or manual method is also too slow for most prob-
lems of practical size today, which involve millions or
billions of records.
Only machines are capable of addressing the scale
of complexity of most enterprise computational prob-
lems today. In fact, their performance has progressed to
such an extent that, even with large computational tasks,
they are idle most of the time. This is one of the reasons
for the low utilization rates for servers in data centers
today. Meanwhile, this infrastructure represents a sunk
cost, whether fully utilized or not.
Modern microprocessor-based computers have so
much reserve capacity that they can be used to simulate
other computers. This is the essence of virtualization:
the use of computers to run programs that simulate
computers of the same or even different architectures.
In fact, machines today can be used to simulate many
computers, anywhere between 1 and 30 for practical
situations. If a certain machine shows a load factor of
5% when running a certain application, that machine
can easily run ten virtualized instances of the same
application.
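To make this consolidation arithmetic concrete, the short Python sketch below estimates how many instances fit on one host. The 80% utilization ceiling and the 10% per-instance hypervisor overhead are illustrative assumptions, not figures from this chapter; the 5% load factor is the one cited above.

    def max_instances(load_factor, target_utilization=0.80, overhead=0.10):
        # load_factor: fraction of the host consumed by one instance (0.05 = 5%)
        # target_utilization and overhead are assumed values, for illustration only
        per_instance = load_factor * (1.0 + overhead)
        return int(target_utilization / per_instance)

    print(max_instances(0.05))   # -> 14, comfortably above the ten instances cited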
Likewise, the three machines of a three-tier e-commerce application can run on a single physical machine, including a simulation of the network linking
the three machines.
Virtual computers, when compared with real physical computers, pass the equivalent of a Turing test far more easily than humans do when compared with machines.
There is essentially no difference between the result of a computation in a physical machine and the same computation in a virtual machine. The latter may take a little longer, but the results will be identical to the last bit. Whether
running in a physical or a virtualized host, an applica-
tion program goes through the same state transitions,
and eventually presents the same results.
As we saw, virtualization is the creation of substi-
tutes for real resources. These substitutes have the same
functions and external interfaces as their counterparts,
but differ in attributes, such as size, performance, and
cost. These substitutes are called virtual resources. Be-
cause the computational results are identical, users are
typically unaware of the substitution. As mentioned,
with virtualization we can make one physical resource
look like multiple virtual resources; we can also make
multiple physical resources into shared pools of virtual
resources, providing a convenient way of divvying up
a physical resource into multiple logical resources.
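A minimal sketch of this divvying-up, in Python with hypothetical class names chosen purely for illustration, pools the capacity of several physical hosts and carves it into logical (virtual) resources on demand:

    class PhysicalHost:
        def __init__(self, name, cpus, mem_gb):
            self.name, self.cpus, self.mem_gb = name, cpus, mem_gb

    class ResourcePool:
        # Aggregates several physical hosts and hands out logical slices.
        def __init__(self, hosts):
            self.free_cpus = sum(h.cpus for h in hosts)
            self.free_mem = sum(h.mem_gb for h in hosts)

        def allocate(self, vcpus, mem_gb):
            if vcpus > self.free_cpus or mem_gb > self.free_mem:
                raise RuntimeError("pool exhausted")
            self.free_cpus -= vcpus
            self.free_mem -= mem_gb
            return {"vcpus": vcpus, "mem_gb": mem_gb}   # a virtual resource

    # Two physical servers pooled, then divvied into three virtual machines.
    pool = ResourcePool([PhysicalHost("a", 16, 64), PhysicalHost("b", 16, 64)])
    vms = [pool.allocate(4, 8) for _ in range(3)]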
In fact, the concept of virtualization has been around
for a long time. Back in the mainframe days, we
used to have virtual processes, virtual devices, and
virtual memory [24.3–5]. We use virtual memory in
most operating systems today. With virtual memory,
computer software gains access to more memory than
is physically installed, via the background swapping
of data to disk storage. Similarly, virtualization con-
cepts can be applied to other IT infrastructure layers
including networks, storage, laptop or server hard-
ware, operating systems, and applications. Even the
notion of process is essentially an abstraction for a vir-
tual central processing unit (CPU) running a single
application.
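The mechanism behind virtual memory can be sketched in a few lines. The toy pager below (plain Python, not the implementation of any particular operating system) lets a program address more pages than there are physical frames by swapping the least recently used page out to a simulated disk on each fault:

    from collections import OrderedDict

    class ToyPager:
        # Demand paging with a least-recently-used eviction policy.
        def __init__(self, physical_frames):
            self.frames = OrderedDict()   # page number -> contents (resident set)
            self.swap = {}                # page number -> contents (on "disk")
            self.capacity = physical_frames

        def touch(self, page):
            if page in self.frames:                      # hit: refresh recency
                self.frames.move_to_end(page)
                return
            if len(self.frames) >= self.capacity:        # fault with no free frame
                victim, data = self.frames.popitem(last=False)
                self.swap[victim] = data                 # swap the LRU page out
            self.frames[page] = self.swap.pop(page, 0)   # swap in, or zero-fill

    pager = ToyPager(physical_frames=4)
    for p in [0, 1, 2, 3, 4, 0, 5]:    # the program touches six distinct pages
        pager.touch(p)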
Virtualization on x86 microprocessor-based sys-
tems is a more recent development in the long history
of virtualization. This entire sector owes its existence
to a single company, VMware; and in particular, to
founder Rosenblum [24.6], a professor of operating
systems at Stanford University. Rosenblum devised
an intricate series of software workarounds to over-
come certain intrinsic limitations of the x86 instruction
set architecture in the support of virtual machines.
These workarounds became the basis for VMware’s
early products. More recently, native support for vir-
tualization hypervisors and virtual machines has been
developed to improve the performance and stability of
virtualization. An example is Intel’s virtualization tech-
nology (VT-x) [24.7].
To look further into the impact of virtualization
on a particular platform, Fig. 24.2 illustrates a typi-
cal configuration of a single operating system (OS)
platform without virtual machines (VMs) and a config-
uration of multiple virtual machines with virtualization.
As indicated in the chart on the right, a new layer
of abstraction is added, the virtual machine monitor
(VMM), between physical resources and virtual re-
sources. A VMM presents each VM with its own virtual resources and maps virtual-machine operations onto physical resources. VMMs can be designed to be tightly
coupled with operating systems or can be agnostic to
operating systems. The latter approach provides cus-
tomers with the capability to implement an OS-neutral
management infrastructure.
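The sketch below, with hypothetical names and drastically simplified behavior, illustrates the VMM's role: it presents each VM with virtual CPUs mapped onto physical ones and handles traps when a guest attempts a privileged operation.

    class VMM:
        def __init__(self, physical_cpus):
            self.physical_cpus = physical_cpus
            self.cpu_map = {}          # (vm name, virtual cpu) -> physical cpu
            self.next_cpu = 0

        def create_vm(self, name, vcpus):
            # Map each virtual CPU onto a physical CPU, round-robin.
            for v in range(vcpus):
                self.cpu_map[(name, v)] = self.next_cpu % self.physical_cpus
                self.next_cpu += 1

        def handle_trap(self, vm, operation):
            # Privileged guest operations trap into the VMM, which emulates them
            # against physical state rather than letting the guest touch hardware.
            if operation == "read_timer":
                return "virtualized timer value for " + vm
            raise NotImplementedError(operation)

    vmm = VMM(physical_cpus=8)
    vmm.create_vm("web-tier", vcpus=2)
    print(vmm.handle_trap("web-tier", "read_timer"))

Whether such a monitor is tightly coupled with a host operating system or runs directly on the hardware is precisely the design choice mentioned above.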
24.2.1 Virtualization Usage Models
Virtualization is not just about increasing load factors; it brings to hardware a level of operational flexibility and convenience previously associated only with software. Virtualization allows running