In Section 1.11, we describe pitfalls that have occurred in developing the SPEC benchmark suite, as well as the challenges in maintaining a useful and predictive benchmark suite. Although SPEC CPU2006 is aimed at processor performance, SPEC also has benchmarks for graphics and Java.
Server Benchmarks
Just as servers have multiple functions, so there are multiple types of benchmarks. The simplest benchmark is perhaps a processor throughput-oriented benchmark. SPEC uses the SPEC CPU benchmarks to construct a simple throughput benchmark in which the processing rate of a multiprocessor is measured by running multiple copies (usually as many as there are processors) of each SPEC CPU benchmark and converting the CPU time into a rate. The resulting measurement is called the SPECrate.
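To make the rate calculation concrete, here is a minimal sketch in Python of a SPECrate-style computation, assuming the usual SPEC conventions: each benchmark's rate is the number of copies times the ratio of a reference time to the measured elapsed time, and the overall score is the geometric mean of the per-benchmark rates. The timings below are invented for illustration and are not actual SPEC data.

    from math import prod

    def benchmark_rate(copies, reference_time, elapsed_time):
        # Rate for one benchmark: copies completed per unit time,
        # normalized against the reference machine's time.
        return copies * reference_time / elapsed_time

    def overall_rate(results):
        # Overall score: geometric mean of the per-benchmark rates.
        rates = [benchmark_rate(*r) for r in results]
        return prod(rates) ** (1.0 / len(rates))

    # Hypothetical runs: (copies, reference seconds, measured seconds)
    runs = [(4, 9120, 2290), (4, 10490, 2830), (4, 6250, 1580)]
    print(f"SPECrate-style score: {overall_rate(runs):.1f}")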
Other than SPECrate, most server applications and benchmarks have significant I/O activity arising from either disk or network traffic, including benchmarks for file server systems, for Web servers, and for database and transaction-processing systems. SPEC offers both a file server benchmark (SPECSFS) and a Web server benchmark (SPECWeb). SPECSFS is a benchmark for measuring NFS (Network File System) performance using a script of file server requests; it tests the performance of the I/O system (both disk and network I/O) as well as the processor. SPECSFS is a throughput-oriented benchmark but with important response time requirements. (Chapter 6 discusses some file and I/O system benchmarks in detail.) SPECWeb is a Web server benchmark that simulates multiple clients requesting both static and dynamic pages from a server, as well as clients posting data to the server.
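The structure of such a Web benchmark is easy to picture in code. The sketch below is not SPEC's actual harness; the server URL, paths, and request mix are hypothetical. It simply runs several concurrent simulated clients that fetch static and dynamic pages and post data, which is the behavior described above.

    import urllib.parse
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    BASE = "http://localhost:8080"  # hypothetical server under test

    def client(i):
        # Each simulated client mixes static GETs, a dynamic GET, and a POST.
        urllib.request.urlopen(BASE + "/static/index.html").read()
        urllib.request.urlopen(BASE + "/dynamic/page?user=%d" % i).read()
        data = urllib.parse.urlencode({"user": i, "item": 42}).encode()
        urllib.request.urlopen(BASE + "/post/order", data=data).read()

    # 256 client sessions, at most 16 in flight at once
    with ThreadPoolExecutor(max_workers=16) as pool:
        list(pool.map(client, range(256)))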
Transaction-processing (TP) benchmarks measure the ability of a system to handle transactions, which consist of database accesses and updates. Airline reservation systems and bank ATM systems are typical simple examples of TP; more sophisticated TP systems involve complex databases and decision-making. In the late 1980s, a group of concerned engineers formed the vendor-independent Transaction Processing Performance Council (TPC) to try to create realistic and fair benchmarks for TP. The TPC benchmarks are described at www.tpc.org.
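What a transaction means here can be shown with a toy debit-credit example, written below in Python with SQLite. The two-account schema is invented for illustration and is far simpler than any TPC schema; the point is that the read, the balance check, and both updates must commit together or not at all.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    db.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 500), (2, 300)])

    def transfer(src, dst, amount):
        # One TP-style transaction: a read, a balance check, and two updates.
        with db:  # BEGIN ... COMMIT, or ROLLBACK if an exception is raised
            (balance,) = db.execute(
                "SELECT balance FROM accounts WHERE id = ?", (src,)).fetchone()
            if balance < amount:
                raise ValueError("insufficient funds")
            db.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                       (amount, src))
            db.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                       (amount, dst))

    transfer(1, 2, 100)  # debit account 1, credit account 2, atomically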
The first TPC benchmark, TPC-A, was published in 1989 and has since been replaced and enhanced by several different benchmarks. TPC-C, initially created in 1992, simulates a complex online transaction processing environment built around order entry. TPC-H models ad hoc decision support: the queries are unrelated, and knowledge of past queries cannot be used to optimize future queries. TPC-W is a transactional Web benchmark. The workload is performed in a controlled Internet commerce environment that simulates the activities of a business-oriented transactional Web server. The most recent is TPC-App, an application server and Web services benchmark. The workload simulates the activities of a business-to-business transactional application server operating in a 24x7 environment.
All the TPC benchmarks measure performance in transactions per second. In addition, they include a response time requirement, so that throughput performance is measured only when the response time limit is met.