it simple to add servers (or remove servers) without affecting users. You simply add or
remove the server(s) and change the software configuration in the load balancing switch;
no one is aware of the change.
Server Virtualization Server virtualization is somewhat the opposite of server farms
and load balancing. Server virtualization is the process of creating several logically
separate servers (e.g., a Web server, an email server, a file server) on the same physical
computer. The virtual servers run on the same physical computer, but appear completely
separate to the network (and if one crashes it does not affect the others running on the
same computer).
Over time, many firms have installed new servers to support new projects, only to
find that the new server was not fully used; the server might only be running at 10 percent
of its capacity and sitting idle for the rest of the time. One underutilized server is not a
problem. But imagine if 20 to 30 percent of a company’s servers are underutilized. The
company has spent too much money to acquire the servers, and, more importantly, is
continuing to spend money to monitor, manage, and update the underused servers. Even
the space and power used by having many separate computers can noticeably increase
operating costs. Server virtualization enables firms to save money by reducing the number
of physical servers they buy and operate, while still providing all the benefits of having
logically separate devices and operating systems.
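As a rough back-of-the-envelope sketch in Python (the utilization and headroom figures are assumptions chosen to match the 10 percent example above, not measured data), consolidating many lightly loaded servers onto virtualized hosts can shrink the physical server count dramatically:

import math

# Illustrative consolidation estimate; the numbers below are assumptions.
legacy_servers = 30      # stand-alone servers, one per project
avg_utilization = 0.10   # each runs at about 10 percent of capacity
target_host_load = 0.70  # leave headroom on each virtualized host

total_load = legacy_servers * avg_utilization            # 3.0 machine-equivalents of work
hosts_needed = math.ceil(total_load / target_host_load)  # 5 physical hosts

print(f"Work to carry: {total_load:.1f} machine-equivalents")
print(f"Physical hosts after consolidation: {hosts_needed}")

Under these assumed figures, 30 lightly used machines collapse onto about 5 virtualized hosts, which is where the hardware, power, and management savings come from.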
Some operating systems support virtualization natively, making it easy to configure
and run separate virtual servers. In other cases, special-purpose virtualization
software (e.g., VMware) is installed on the server and sits between the hardware and the
operating systems; this layer enables several different operating systems (e.g.,
Windows, Mac, Linux) to run on the same physical computer.
Capacity Management Most network traffic today is hard to predict. Users may
download large software or audio files, or hold voice chats over instant messenger. In many
networks, there is greater capacity within a LAN than there is leading out of the LAN
into the backbone or to the Internet. In Figure 11.5, for example, the building backbone
has a capacity of 1 Gbps, which is also the capacity of just one LAN connected to it
(2 East). If one user in this LAN generates traffic at the full capacity of this LAN, then
the entire backbone will become congested, affecting users in all other LANs.
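To put a number on the risk, here is a quick oversubscription calculation in Python; the count of eight LANs is an assumption for illustration, since only the 1 Gbps capacities are given in the figure:

# Oversubscription sketch: several 1 Gbps LANs share one 1 Gbps building backbone.
backbone_gbps = 1
lan_gbps = 1
num_lans = 8   # assumed number of LANs; only the capacities come from the figure

worst_case_load = num_lans * lan_gbps               # 8 Gbps if every LAN peaks at once
oversubscription = worst_case_load / backbone_gbps
print(f"Oversubscription ratio: {oversubscription:.0f}:1")
# Even a single LAN running at its full 1 Gbps can saturate the backbone by itself.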
Capacity management devices, sometimes called bandwidth limiters or bandwidth
shapers, monitor traffic and can act to slow down traffic from users who consume
too much capacity. These devices are installed at key points in the network, such as
between a switch serving a LAN and the backbone it connects into, and are configured
to allocate capacity based on the IP address of the source (or its data link address) as
well as the application in use. The device could, for example, permit a given user to
generate a high amount of traffic for an approved use, but limit capacity for an unofficial
use such as MP3 files. Figure 11.12 shows the control panel for one device made by
NetEqualizer.
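The policy logic such a device applies can be sketched in Python as a token-bucket limiter keyed by source IP address and application; the rates and traffic classes below are illustrative assumptions, not NetEqualizer's actual configuration:

import time

class TokenBucket:
    """Classic token-bucket limiter: tokens refill at the configured rate,
    and each forwarded byte spends one token."""

    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8        # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True                 # forward the packet
        return False                    # over the limit: delay or drop it

# Policy table keyed by (source IP, application); rates are made-up examples.
limits = {
    ("10.1.2.50", "video-conference"): TokenBucket(rate_bps=10_000_000, burst_bytes=1_500_000),
    ("10.1.2.50", "file-sharing"):     TokenBucket(rate_bps=500_000, burst_bytes=100_000),
}

def shape(src_ip: str, app: str, packet_bytes: int) -> bool:
    bucket = limits.get((src_ip, app))
    return True if bucket is None else bucket.allow(packet_bytes)

print(shape("10.1.2.50", "file-sharing", 1500))   # limited traffic class

A real shaper would typically queue packets that exceed the limit rather than drop them, but the essential step is the same: classify traffic by source address and application, then apply the configured rate.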
11.5.4 Minimizing Network Traffic
Most approaches to improving network performance attempt to maximize the speed at
which the network can move the traffic it receives. The opposite—and equally effective