1288 Part G Infrastructure and Service Automation
Other systems, aimed at both public and academic
libraries, made their appearance during the 1970s and
1980s. Computer Library Services Inc. (CLSI), Data
Research Associates (DRA), Dynix, GEAC, Innovative
Interfaces Inc. (III), and Sirsi were some of the bet-
ter known. Today only Ex Libris, III, and SirsiDynix
(merged in 2005) survive as major players in the inte-
grated library system arena.
The Harvard University Library epitomizes the vari-
ous stages of library automation on a grand scale. Under
the leadership of the fabled Richard DeGennaro, then
Associate University Librarian for Systems Develop-
ment, Harvard University’s Widener Library keyed and
published its manual shelflist in 60 volumes between
1965 and 1979 [72.14]. At the same time the university
was experimenting with both circulation and acquisi-
tions applications, the latter with the amusing moniker,
computer-assisted ordering system (CAOS), later re-
named the computer-aided processing system (CAPS).
In 1975 Harvard also started to make use of the rela-
tively young OCLC system. As with other institutions,
Harvard initially viewed OCLC as a means to more
efficiently generate catalog cards [72.15].
In 1983 Harvard University decided to obtain the
NOTIS source code from Northwestern University to
unify and coordinate collection development across the
100 libraries that constituted the vast and decentral-
ized Harvard University Library system. The Harvard
system, HOLLIS, served originally as an acquisitions
subsystem. Meanwhile, the archive tapes of OCLC trans-
actions were being published in microfiche format as
the distributable union catalog (DUC), for the first time
providing distributed access to a portion of the Union
Catalog – a subset of the records created in OCLC.
It was not until 1987 that the catalog master file was
loaded into HOLLIS. In 1988 the HOLLIS OPAC (on-
line public access catalog) debuted, eliminating the
need for the DUC [72.15].
It was, in fact, precisely the combination of the
MARC format, bibliographic utilities, and the emer-
gence of local (integrated) library systems that together
formed the basis for the library information architecture
of the mid-to-late 1980s. Exploiting the advantages
these building blocks presented, libraries drove rapid
adoption and expansion of automation in the decade
from 1985 to 1995, a period characterized by maturing
systems and growing experience with networking. Most
academic and many public libraries had an integrated
library system (ILS) in place by 1990. While early
ILSs had developed module by module, systems by
this time were truly integrated. Data could finally
be repurposed and reused as it made its way through
the bibliographic lifecycle from the acquisitions mod-
ule to the cataloging module to the circulation module.
Some systems, it is true, required overnight batch jobs
to transfer data from acquisitions to cataloging, but true
integration was becoming more and more the norm.
By the early 1990s libraries were entering the age
of content. Telnet and Gopher clients made possible
the online presence of abstracting and indexing services
(A&I services). Preprint databases arose and libraries
began to mount these as adjuncts to the catalog proper.
DOS-based Telnet clients gave way to Windows-based
Telnet clients. Later in the 1990s, Windows-based
clients in turn gave way to web browsers.
All this time, the underlying bibliographic databases
were largely predicated on current acquisitions. The
key to an all-encompassing bibliographic experience
lay in retrospective conversion (Recon) as proven by
Harvard University Library’s fundamental commitment
to Recon. Between 1992 and 1996 Harvard University
added millions of bibliographic records in a concerted
effort to eliminate the need for its thousands of cata-
log card drawers and provide its users with complete
online access to its rich collections. Oxford University
and others soon followed. While it would be incorrect to
say that Recon is a product of a bygone era, most major
libraries do indeed manage the overwhelming propor-
tion of their collection metadata online. This has proven
to be the precursor to the massive digitization projects
underwritten by Google, Yahoo, and Microsoft, all of
which depend to a large degree on the underlying bib-
liographic metadata, much of which was consolidated
during the Recon era.
In the early years of library automation, library
systems and library system vendors moved from time-
sharing, as was evident in the case of BALLOTS, to
large mainframe systems. (Time-sharing involves many
users making use of time slots on a shared machine.) Li-
brary system vendors normally allied themselves with
a given hardware provider and specialized in specific
operating system environments. The advent of the Inter-
net, and more especially the World Wide Web, fueled by
the growth of systems based on Unix (and later Linux)
and relational database technology (most notably, Or-
acle) – all combined with the seemingly ubiquitous
personal computer – gave rise to the second genera-
tion in library automation, beginning around 1995. As
noted, the most visible manifestation of this shift to the
library public was the use of browsers to access the
catalogs. This revolution was followed within 5 years
by the rapid and inexorable rise of e-Content. The