Berkshire Publishing Group, 2004, 474 pp.
In a brief history of HCI technology published in 1996, the computer scientist Brad Myers noted that most computer interface technology began as government-supported research projects in universities and only years later was developed by corporations and transformed into commercial products. He then listed six up-and-coming research areas: natural language and speech, computer-supported cooperative work, virtual and augmented reality, three-dimensional graphics, multimedia, and computer recognition of pen or stylus movements on tablet or pocket computers.
All of these have been very active areas of research or development since he wrote, and several are fundamental to commercial products that have already appeared. For example, many companies now use speech recognition to automate their telephone information services, and hundreds of thousands of people use stylus-controlled pocket computers every day. Many articles in the encyclopedia describe new approaches that may be of tremendous importance in the future.
Our entire perspective on HCI has been evolving rapidly in recent years. In 1997, the National Research Council—a private, nonprofit institution that provides science, technology, and health policy advice under a congressional charter—issued a major report, More Than Screen Deep, to evaluate and suggest fruitful directions for progress in user interfaces to computing and communications systems. This high-level study, sponsored by the National Science Foundation (NSF), concluded with three recommendations for the federal government and university researchers:
Break away from 1960s technologies and paradigms. Major attempts should be made to find new paradigms for human-machine interaction that employ new modes and media for input and output and that involve new conceptualizations of application interfaces.
Invest in the research required to provide the component subsystems needed for every-citizen interfaces. Research is needed that is aimed at both making technological advances and gaining understanding of the human and organizational capabilities these advances would support. (195)
Encourage research on systems-level design and development of human-machine interfaces that support multiperson, multimachine groups as well as individuals. (196)
In 2002, John M. Carroll looked back on the history of HCI and noted how difficult it was at first to get computer science and engineering to pay attention to issues of hardware and software usability. He argued that HCI was born as the fusion of four fields (software engineering, software human factors, computer graphics, and cognitive science) and that it continues to be an emerging area in computer science.
The field is expanding in both scope and importance. For example, HCI incorporates more and more from the social sciences as computing becomes increasingly deeply rooted in cooperative work and human communication.
Many universities now have research groups and training programs in HCI. In addition to the designers and engineers who create computer interfaces and the researchers in industry and academia who are developing the fundamental principles for success in such work, a very large number of workers in many industries contribute indirectly to progress in HCI.

The nature of computing is constantly changing. The first digital electronic computers, such as ENIAC (completed in 1946), were built to solve military problems, such as calculating ballistic trajectories. The 1950s and 1960s saw a great expansion in military uses and extensive application of digital computers in commerce and industry. In the late 1970s, personal computers entered the home, and in the 1980s they developed more user-friendly interfaces. The 1990s saw the transformation of the Internet into a major medium of communication, culminating in the expansion of the World Wide Web to reach a billion people.
In the first decade of the twenty-first century, two trends are advancing rapidly. One is the extension of networking to mobile computers and embedded devices literally everywhere. The other is the convergence of all mass media with computing, such that people listen to music, watch movies, take pictures, make videos, carry on telephone conversations, and conduct many kinds of business on computers or on networks of which computers are central components. To people who are uncomfortable with these trends, it may seem that cyberspace is swallowing real life. To enthusiasts of the technology, it seems that human consciousness is expanding to encompass everything.
The computer revolution is almost certainly going to continue for decades, and specialists in human-computer interaction will face many new challenges in the years to come. At least one other technological revolution is likely to give computer technology an additional powerful boost: nanotechnology. The word comes from a unit for measuring tiny distances, the nanometer, which is one billionth of a meter (one millionth of a millimeter, or one millionth the thickness of a U.S. dime). The very largest single atoms are just under a nanometer in size, and much of the action in chemistry (including fundamental biological processes) occurs in the range between 1 nanometer and 100–200 nanometers. The smallest transistors in experimental computer chips are about 50 nanometers across.
Experts working at the interface between nanotechnology and computing believe that nanoelectronics can support continued rapid improvements in computer speed, memory, and cost for twenty to thirty years, with the possibility of further progress beyond that point through integrated design approaches and investment in information infrastructure. Two decades of improvement in computer chips would mean that a desktop personal computer bought in 2024 might have eight thousand times the power of one bought in 2004 for the same price—or could have the same power but cost only twenty cents and fit inside a shirt button. Already, nanotechnology is being used to create networks of sensors that can detect and identify chemical pollutants or biological agents almost instantly. While this technology will first be applied to military defense, it could be adapted to medical or personal uses within a few years.
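The "eight thousand times" figure is simply compound doubling. As a back-of-the-envelope sketch (the roughly eighteen-month doubling period for chip price-performance is an illustrative assumption in the spirit of Moore's law, not a figure from the text), the arithmetic works out as follows:

```python
# Back-of-the-envelope check of the "eight thousand times" estimate.
# Assumption (not from the text): chip price-performance doubles
# roughly every 18.5 months, Moore's-law style.

DOUBLING_PERIOD_MONTHS = 18.5   # assumed doubling interval
YEARS = 20                      # 2004 to 2024

doublings = YEARS * 12 / DOUBLING_PERIOD_MONTHS
growth = 2 ** doublings

print(f"{doublings:.1f} doublings -> about {growth:,.0f}x the power")
# prints: 13.0 doublings -> about 8,040x the power
```

Thirteen doublings over twenty years yields a factor of roughly 2^13 = 8,192, which is where an estimate of "eight thousand times" plausibly comes from.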
The average person’s wristwatch in 2024 could be their mobile computer, telling them everything they might want to know about their environment—where the nearest Thai restaurant can be found, when the next bus will arrive at the corner up the road, whether there is anything in the air the person happens to be allergic to—and, of course, providing any information from the world’s entire database that the person might want. If advances in natural-language processing continue at the rate they are progressing today, then the wristwatch could also be a universal translator that allows the person to speak with anyone in any language spoken on the face of the planet. Of course, predictions are always perilous, and it may be that progress will slow down. Progress does not simply happen of its own accord, and the field of human-computer interaction must continue to grow and flourish if computers are to bring the marvelous benefits to human life that they have the potential to bring.
Because the field of HCI is new, the Berkshire Encyclopedia of Human-Computer Interaction breaks new ground. It offers readers up-to-date information about several key aspects of the technology and its human dimensions, including
applications—major tools that serve human needs in particular ways, with distinctive usability issues.
approaches—techniques through which scientists and engineers design and evaluate HCI.