Discretion in Human-Computer Interaction




Jonathan Grudin

Microsoft Research

One Microsoft Way, Redmond, WA 98052 USA

jgrudin@microsoft.com





ABSTRACT


This paper introduces a perspective on the history of research and development in human-computer interaction. Close examination of the distinction between discretionary and non-discretionary use provides a rich view of HCI history that contrasts with and adds to previous chronological accounts. The shift from non-discretionary to discretionary use of technology has generally been noted in order to build the case for greater attention to usability, but there is more to it. Non-discretionary use was a substantial proportion of hands-on use in the early days, remains significant in the present, and may in a different guise become more significant in the future. Discretionary use became the dominant focus in the 1980s and 1990s, but our approach to addressing it is still evolving. It is useful to consider these forms of use as distinct, parallel historical threads of research and development.

Author Keywords


history, discretion, performance, preference, choice

ACM Classification Keywords


H.5.m. Information interfaces and presentation (e.g., HCI): Miscellaneous; K.2 History of computing: Systems.

INTRODUCTION


To use a technology or not to use it: Sometimes we have a choice, other times we don’t. When I need an answer by phone and no human operator is available, I must wrestle with speech recognition and routing systems. In contrast, my home computer use is entirely discretionary. My workplace lies in-between: Technologies are recommended or even prescribed, but I can ignore or obtain exceptions in some cases, use some features but not others, or join with colleagues to advocate changes in policy or availability.

The field of human-computer interaction has taken shape over half a century. For the first half of the computer era, almost all use was non-discretionary. Since then, breathtaking hardware innovation, more versatile software, and steady progress in understanding the psychology of users and contexts of use have led to greater choice. Rising expectations also play a role: people know that software is flexible and expect it to adjust to them rather than vice versa. And greater competition among vendors leads to alternatives. Today most use is relatively discretionary.

The rise in discretionary use was noted a quarter century ago by Bennett [3], cited in Shackel’s excellent review of the early years [33]. To early observers, the significance of discretionary use was that it necessitated a greater focus on usability. This perspective is now widely accepted.

However, this complex issue merits a closer look. A strong emphasis on non-discretionary use has continued in some quarters. Discretion may be curtailed as more work and interaction come to be conducted digitally. And as noted above, even in our age of specialization, customization, and competition, the exercise of choice varies from one moment to the next and one context to another.

To illustrate the role of discretion, consider two extensively researched interface technologies: speech recognition and natural language understanding. They are useful in many non-discretionary use situations: when a phone system provides no alternative, when a disability such as repetitive stress injury (RSI) limits keyboard use, when hands are otherwise occupied—an aircraft pilot, perhaps. Speech recognition and natural language understanding can also be useful for people whose central tasks they directly support, such as professional translators. But after a decade of availability on personal computers, people do not use speech recognition when they have a choice.

Discretion is only one variable of interest in understanding human-computer interaction, but it helped shape where we are and how we got here.

This look at the history of HCI begins with several puzzles. How should we contrast the contributions of the inspirational writers and prototype builders of the early years of human-computer interaction with those of the human factors community that preceded them? What explains the staggering expenditures on recognition and language technologies that have little acceptance in the marketplace and little presence at CHI? Why were GOMS and its progeny a crest of achievement in the mid-80s that then largely receded from view? Why haven’t more NSF and DARPA HCI Program Directors come from the CHI community? Does the growing emphasis on design in HCI simply reflect greater availability of cycles and storage? Was it overlooked earlier, or is it a mistake to focus on it now?

This paper outlines answers to these questions. CHI readers may find the brief review of history familiar, but it is freshly organized: from the perspective of the degree of choice exercised by system users. This framework illuminates our field’s ongoing evolution and may suggest areas on which to focus more attention.


Two Foundations of HCI


The history of human-computer interaction includes the evolution of widespread practices. It also includes people, concepts, and advances in understanding that inspired new developments. Often decades elapse between visions or working demonstrations of concepts and their widespread realization. We briefly review work on improving work practice and technology use that preceded the computer, then credit the visionaries who inspired many in our field. The field can be understood in terms of existing practices, new visions, and hardware that became substantially more powerful year after year. Those familiar with this history may wish to skip this section; for more detail see [1].

Taylorism and human factors before computers


Over the centuries, extraordinarily specialized tools were developed to support carpenters, blacksmiths, and other artisans. But efforts to apply science and engineering to improving work practice arose only a century ago. These exploited recent inventions including motion pictures and statistical analysis. Taylor’s ‘principles of scientific management’ [39], published in 1911, proved to have limitations, but such techniques were applied to assembly line manufacturing and other work practices in subsequent decades. World War I brought a related focus to Europe.

World War II accelerated ‘behavioral engineering’ as complex new weaponry tested human capabilities. One basic design flaw could result in thousands of casualties [8]. As a result, the war effort produced, in addition to the first digital computers, a lasting interest in human factors or ergonomics in design and training.

These approaches to improving work and the ‘man-machine interface’ focused on the non-discretionary use of technology. The assembly line worker was hired to use the system. The soldier was given the equipment. They had no choice in the matter. If training was necessary, they were trained. The goals of workplace study and technology improvement included reducing errors in operation, increasing the speed of operation, reducing training time, and so forth. And when use is mandatory, even small improvements help.

Early visions and demonstrations


In 1958, transistor-based computers appeared. Until then, visions existed mainly in the realm of science fiction, because vacuum tube computers had severe practical limitations. The most influential early vision was Vannevar Bush’s 1945 essay ‘As We May Think’ [4]. Bush sketched an unrealistic mechanical device that anticipated many capabilities of computers. Bush also exerted an important influence in the practical sphere through his role in promoting government funding of scientific research.

The 1960s saw the key writings and prototypes of the influential HCI pioneers. Licklider’s prototypes identified requirements for interactive systems. He accurately predicted which would prove easier—such as visual displays—and which more difficult—such as natural language understanding [18, 19]. McCarthy and Strachey proposed time-sharing systems, crucial to the spread of interactive computing [11]. Sutherland’s Sketchpad demonstrated copying, moving, and deleting of hierarchically organized objects, constraints, iconic representations, and some concepts of object-oriented programming [38]. Engelbart formulated a broad vision, created the foundations of word processing, invented the mouse and other input devices, and conducted astonishing demonstrations of distributed computing that integrated text, graphics, and video [9, 10]. Nelson’s vision of a highly interconnected network of digital objects foreshadowed aspects of web, blog, and wiki technologies [22, 23]. Rounding out this period were Kay’s ‘reactive engine’ and ‘dynabook’ visions of personal computing based on a versatile digital notebook [16, 17].

These visions focused on the discretionary use of technology. Writings were titled ‘man-computer symbiosis,’ ‘augmenting human intellect,’ and ‘a conceptual framework for man-machine everything.’ Technology would empower individuals to work and interact more effectively and flexibly. These liberating images of computer use inspired researchers and programmers to work for the decades needed to realize and refine them. Some of the capabilities that they anticipated are now taken for granted, others remain elusive.

Mandatory or discretionary use? Real life lies somewhere between the assembly line nightmare satirized in Charlie Chaplin’s Modern Times and the utopian visions of completely empowered individuals. Supporting technology use in situations of more or less choice raises some similar issues, along with differences. It is surprising to discover that for twenty years, two or three research efforts have proceeded almost in parallel, with very little communication between them.


The first forty years


For three decades, most hands-on computer users were computer operators. Human-computer interaction in that period could be more broadly construed to comprise writing and running programs and reading printed output.

The first computer builders did everything themselves, but a division of labor soon emerged, separating computer use into three categories:



Operators, who interacted directly with a computer: maintaining it, loading and running programs, filing printouts, and so on.

Programmers, who were a step removed from the physical device. They might leave a ‘job’ in the form of punched cards to be run at a computer center, picking up the cards and a printout the next day.

Users, who specified and used a program’s output, a printout or report. They, too, did not interact directly with the computer.

Supporting non-discretionary use by computer operators—the extension of ergonomics


“In the beginning, the computer was so costly that it had to be kept gainfully occupied for every second; people were almost slaves to feed it.” — Brian Shackel [33]

For half of the computer era, improving the experience of hands-on users meant supporting low-paid computer operators. An operator handled a computer as it was, setting switches, pushing buttons, reading lights, feeding and bursting printer paper, loading and unloading cards, magnetic tapes, and paper tapes, and so on.

Teletypes were the first versatile mode of direct interaction. Operators typed commands; the computer printed responses or spontaneous status messages. The paper printout scrolled up, one line at a time.

Displays (‘VDUs’ or ‘VDTs’ for visual display units or terminals, ‘CRTs’ for cathode ray tubes) were at first nicknamed ‘glass ttys’, glass teletypes, because they functioned much the same as teletypes, displaying and scrolling up typed operator commands, computer-generated responses, and status messages. Most were monochrome and restricted to alphanumeric characters. The first primitive displays marketed commercially cost around $50,000 in today’s dollars. Expensive, but a small fraction of the cost of a business computer. Typically one console accompanied a computer for use by an operator.

Improving display design was a natural extension of traditional human factors or ergonomics. Brian Shackel published such work starting in 1959, with ‘Ergonomics for a computer’ [31] and ‘Ergonomics in the design of a large digital computer console’ [32]. Little followed for a decade. Shackel’s HUSAT research center, formed in 1970, contributed to HCI although focused broadly on general ergonomics. The first influential book was Martin’s 1973 Design of man-computer dialogues [21]. In 1980, when five major HCI books were published, two focused on VDT design and one on general ergonomic guidelines [33]. At that time Germany adopted VDT standards, disqualifying some U.S. models. With that, designing for human capabilities became a visible economic issue.

1980 was also the year Card, Moran and Newell’s ‘Keystroke-level model for user performance time with interactive systems’ was published [5]. “The central idea behind the model is that the time for an expert to do a task on an interactive system is determined by the time it takes to do the keystrokes.” This model and successors such as GOMS were used to help quintessentially non-discretionary users such as telephone operators, people engaged in repetitive tasks involving little reflection.
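The arithmetic behind such predictions is simple to illustrate. The sketch below is not from the paper under discussion; it merely sums approximate per-operator times of the kind reported in the keystroke-level model literature for a hypothetical data-entry step. The operator set, the specific values, and the task are illustrative assumptions.

```python
# A minimal, illustrative keystroke-level estimate.
# Operator times are rough published averages; the task is hypothetical,
# not an example taken from Card, Moran and Newell's paper.
OPERATOR_SECONDS = {
    "K": 0.28,  # press a key or button (average skilled typist)
    "P": 1.10,  # point at a target on the display with a mouse
    "H": 0.40,  # home hands on the keyboard or mouse
    "M": 1.35,  # mentally prepare for the next action
}

def predict_seconds(operators):
    """Predict expert task time as the sum of its operator times."""
    return sum(OPERATOR_SECONDS[op] for op in operators)

# Hypothetical data-entry step: reach for the mouse, think, point at a
# field, then type a five-character code.
task = ["H", "M", "P"] + ["K"] * 5
print(f"Predicted time: {predict_seconds(task):.2f} s")  # 4.25 s
```

When use is mandatory and a task is repeated thousands of times a day, even a few tenths of a second saved per task add up, which is why such models appealed to those designing for telephone operators and similar users.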

A series of ergonomically-justified interface guidelines culminated in 1986 with the publication of Smith and Mosier’s 944 guidelines [37]. Sections were titled ‘Data Entry,’ ‘Data Display,’ ‘Data Transmission,’ ‘Data Protection,’ ‘Sequence Control,’ and ‘User Guidance.’ The emphasis was on supporting operators. Graphical interfaces, then a new arrival, were mentioned, but the major shift and expansion of the design space ushered in by GUIs may have been a factor in discontinuing this guideline effort.

By then, change was rippling through the industry. Mainframes and batch processing still dominated, but timesharing allowed new uses, minicomputers were spreading, and microcomputers were starting to appear. Hands-on computing was becoming available to people who were not computer professionals and who would use technology only if it helped them do their jobs better.

Improving the life of discretionary users had a prior history: in the visions of Bush and the others, of course, but also, through the 1960s and especially the 1970s, in support of the other two categories of computer user, programmers and managerial users of the output.

Supporting discretionary use by computer programmers—the origin of CHI


Early programmers used a computer directly when they could, because it was fun and faster. But the cost of computers largely dictated the division of labor noted above. Working as a programmer in the mid-70s, even for a computer company, typically meant writing programs on paper that were then punched onto cards by keypunch operators. The jobs were run by computer operators and the programmer received printed output.

Programmers worked on scientific systems, business systems, and on advancing computer technology, many of the latter working with or inspired by the visionary writers and prototype builders of the 1960s. In 1970 Xerox PARC was founded. It contrasted in an interesting way with HUSAT, founded the same year. HUSAT was focused on ergonomics or human factors, one component of which became the ergonomics of computing. PARC was focused on computing, one component of which became the human factors of computing. In 1971 Allen Newell proposed a PARC project, launched in 1974: “Central to the activities of computing—programming, debugging, etc.—are tasks that appear to be within the scope of this emerging theory (psychology of cognitive behavior).” [6]

Thousands of papers were published in the 1970s on the psychology and performance of programmers. Weinberg published The psychology of computer programming in 1971 [41]. In 1980, Shneiderman summarized this research in Software psychology [35]. (This was the same year three books on VDT design and ergonomics were published.) In 1981, Sheil covered studies of programming notation (e.g., conditionals, control flow, data types), programming practices (flowcharting, indenting, variable naming, commenting), and programming tasks (learning, coding, debugging), and included a section on experimental design and statistical analysis [34].

With the spread of timesharing and minicomputers in the late 1970s and early 1980s, many programmers became enthusiastic hands-on users. As programmers became hands-on users of tools, ongoing studies of programmers became studies of hands-on users. When the personal computing era of the early 80s arrived, the same methods shifted to the study of other discretionary users.

A fifth book published in 1980 foreshadowed the shift: Smith and Green [36] devoted one-third to research on programming and two-thirds to designing for “non-specialist people,” by which they meant not computer specialists. The preface to this straightforward treatise echoes the decades-old visions of discretionary use: “It’s not enough just to establish what people can and cannot do; we need to spend just as much effort establishing what people can and want to do.” (emphasis in the original)

Another effort to bridge from programmers to other professionals appeared in John Gould’s group at IBM Watson Labs. Like the PARC applied psychology group, it evolved through the 70s and 80s from a focus that included perceptual-motor studies and operator support to a more cognitive one. “One of the main themes of the early work was basically that we in IBM were afraid that the market for computing would be limited by the number of people who could program complex systems so we basically wanted to find ways for ‘non-programmers’ to be able, essentially, to program.” [40]

Many key participants in early CHI and INTERACT conferences had studied the psychology of programming, including Ruven Brooks, Bill Curtis, Thomas Green, and Ben Shneiderman. Papers on programmers as users were initially a substantial focus, then gradually disappeared.

Other factors contributed to a sense that this was a new undertaking, not tied to human factors work on operators. Addressable and graphic displays dropped in price and became widely used in the late 1970s, opening a large, challenging design space. In the U.S., academic hiring of cognitive psychology PhDs fell sharply in the late 1970s; computer and telecommunication companies hired many of them to tackle perceptual and cognitive design issues.

Thus, the CHI focus on discretionary use drew heavily on software psychology, cognitive psychologists, and sympathetic computer programmers and computer scientists, largely independent of the human factors engineers studying operators. There was some cross-publication between human factors and human-computer interaction, but the endeavors remained distinct.

In the UK and Europe, computer companies exerted less influence and research boundaries were less distinct. HUSAT research took on a broad range of issues. The Medical Research Council Applied Psychology Unit (MRC APU), renowned for theoretically driven human factors research, was funded by IBM from 1977 to work on HCI issues with a discretionary-use focus.

There was some tension between the camps. The human factors/ergonomics community felt their past work was not fully appreciated. And although methods and goals overlapped, agendas differed. A 1984 study contrasting performance and preference found evidence that users might prefer an interaction technique that is congenial but does not maximize performance. Of clear interest to the study of discretionary use, the result was seen in the other camp as threatening the mission of maximizing performance [15].

Supporting discretionary managerial use—the contribution of MIS


Expensive business computers were bought to address major organizational concerns. Sometimes the prestige of having an air-conditioned, glass-walled computer room justified the expenditure [14], but most computers were put to work. Most output was routed to managers and executives. In the field variously called data processing (DP), management information systems (MIS), information systems (IS), and information technology (IT), ‘users’ meant these managers, who, like early programmers, were well paid, discretionary, and not hands-on.

Supporting managerial use meant improving the visual display of information, primarily on paper but eventually on displays as well. Because much of the output was quantitative, this included the design of tables, charts, and graphs – ‘business graphics’ was an application area and the focus of much usability work in the MIS field.

Work on the visual display of information was not restricted to this community. Operators used manuals and programmers relied on flowcharts and printouts. Research on the design of information was a focus of human factors and the psychology of programming. Substantial research on this topic was conducted at the MRC APU and IBM Watson in the 1970s and 1980s.

Until the late 1990s, most managers exercised discretion by avoiding hands-on computer use, whereas by 1985 almost all programmers were hands-on users. Managers still delegate much of their interaction with technology, but with most of them now hands-on users of some software, it is not surprising that interest in HCI is growing in the management discipline.


Government role in system use and research funding


Government, the U.S. government in particular, was the major purchaser of computers in the decades in which computer ‘operation’ was the norm. In addition to computer operators, governments employed vast numbers of data entry personnel and other non-discretionary users. This meshed naturally with the focus on designing to fit human capabilities that arose in the World Wars.

Acquisition through competitively bid contracts (especially in the U.S.) presented challenges for government procurement of novel interactive systems. The customer had to remain at arm’s length from the developer, and so needed ways to specify the product in advance. This led to government participation in establishing ergonomic standards in the late 1970s and 1980s; compliance could be specified in a contract. Governments also promoted interface design guidelines: the six-year guideline development mentioned above [37] was funded by the U.S. Air Force.

Computer use has spread through society, but societal change is slower. Government agencies were early adopters of computers, but the work conducted in such agencies changes only gradually. Computer operators are no longer employed in large numbers, but huge numbers of data entry and handling personnel remain at agencies such as census, tax, health and welfare. Power plant operators and air traffic controllers are glued to systems that evolve very gradually. Ground control operations for space launches require highly trained users. Soldiers and military logistics require training on equipment, and weaponry grows ever more complex. The quantity of text and voice intercepts to be processed by intelligence agencies is immense. Government remains a huge employer of non-discretionary users of computer systems. Improving these systems is a central concern. Small efficiency gains can provide large benefits.

Government is not only a customer; it is also a major funding source for information technology research. In Europe, national and EU initiatives have been the principal source of funds. The Japanese government has funded major initiatives with HCI components, such as the Fifth Generation project. Since World War II, the U.S. National Science Foundation, armed services (led by proponents of basic research in the Office of Naval Research), and intelligence agencies have been major sources of research funding, although corporate research laboratories established by telecommunication, hardware, and software companies have been equally prominent since the 1970s.

Given that non-discretionary direct use prevailed when government research funding began and remains high in government agencies today, it is not surprising that U.S. government funding tends to support non-discretionary use, despite the broader shift toward discretionary use. Few of the several past Program Directors for the Human-Computer Interaction Program attended CHI conferences, and only one ever presented there. A recent public NSF-sponsored review of the HCI Program noted:

“In reviewing HCI Program coverage we consulted the on-line HCI Bibliography (www.hcibib.org). This heavily-used (over one million searches) public index of over 24,000 records covers the full contents of 14 journals, 20 conferences, books and other materials. It lists 506 authors with ten or more publications. No PI for the 10 randomly selected FY1999-FY2002 HCI Program awards is on this list… HCI program grants are not fully reflective of the HCI literature…” [29]

What is the focus of NSF funding in human-computer interaction? This is the description of the program, then called Interactive Systems, in 1993:

“The Interactive Systems Program considers scientific and engineering research oriented toward the enhancement of human-computer communications and interactions in all modalities. These modalities include speech/language, sound, images and, in general, any single or multiple, sequential or concurrent, human-computer input, output, or action.” [28]

By far the greatest emphasis over the years has been on speech recognition and natural language understanding. As noted in the introduction, these have value primarily in non-discretionary use situations. Little of this work appears at CHI, although some continue to believe that NLU/SR will someday be more useful in discretionary situations. NSF has also funded substantial work on using brainwaves to guide computer displays, another technology that may have its uses, but probably not in most of our homes and offices. (Specialized conferences and the HCI International journal and conference series are outlets for this research.)

In conclusion, we find that the two threads of HCI have been strongly represented in the way research is funded. The distinction between the efforts may be more pronounced in the U.S. than elsewhere, and even in the U.S. it is not total: government funding of the ARPANET (later the Internet), which appeared in 1969, is one example.


Corporate/academic role in system use and research


We have examined government as a user of systems and as a guide of research. Similarly, other organizations collectively fill these two roles. Organizations employ the information workers whose growing discretion has been discussed, and a subset of these organizations—technology companies and a variety of academic disciplines—research, develop, and market interactive systems.

Few computer and software vendors that focused on non-discretionary use in the mainframe and minicomputer eras still do today. Most major vendors that thrived then are gone, and the few that remain (IBM comes to mind) reformed themselves during the transition to discretionary use of computers in the 1980s. Most companies now active in human-computer interaction research and innovation came into prominence in the 1980s and 1990s, by which time it was a good commercial strategy to appeal to users who exercised discretion, whether as individual purchasers or as influences on organizational acquisition.

HCI practitioners in this environment began with methods and perspectives developed earlier, much as software engineers of the era inherited waterfall models designed to solve different problems. The traditional ergonomic goals of fewer errors, faster performance, quicker learning, greater memorability, and aesthetic appeal were still important, but the relative priorities differed.

For a time, ‘friendly’ interfaces were considered frills; the Macintosh made inroads into business mainly where its handling of multimedia was critical. But eventually people asked, “why shouldn’t my expensive office system be as congenial as less expensive equipment used at home?”

Also, as software grew more complex and more of it appeared, training on each application became less feasible, and ease of learning more significant.

The greatest difference, though, was the need to seduce discretionary users. Looking at how the HCI field has evolved, we find that it has increasingly taken into consideration issues that matter to people who have a choice. Design aesthetics, not a major issue for the data entry operator, have played an increasingly large role in CHI. Marketing has come to be recognized as an important element of human-computer interaction rather than a distraction.

CHI has over the years created new conference series, Designing Interactive Systems (DIS) and Designing for User Experiences (DUX), reflecting this shift. The evolution of focus in HCI is symbolized by the work of Don Norman. In the first paper presented at the first CHI conference, “Design principles for human-computer interfaces” [26], he focused on tradeoffs among speed of use, prior knowledge required, ease of learning, and errors. Twenty years later, in 2003, he published a book titled Emotional design [27].

Summary observations


As noted earlier, choice is not all or none, it varies with the context. More at home than at work, perhaps. A lot when selecting online purchases, none when confronted with a telephone answering system. Considerable when young and healthy, little when afflicted by aging processes or RSI. The dichotomies outlined above are not razor sharp, yet they describe real differences with significant consequences.

Air traffic controllers, pilots, astronauts, soldiers, data entry personnel, telephone receptionists, and others often must use the technology at hand or change their occupation. Design priorities vary. For nuclear power plant controls, reducing errors is critical, increasing efficiency is good, but aesthetic appeal or reduced training cost may not be concerns. Language translators or intelligence analysts may find speech recognition and language understanding technologies difficult and error-prone, but if the tools increase productivity they are used. On the other hand, consumers may respond to visceral design appeal or effective marketing at the expense of usability and utility.


Looking forward


This examination of human-computer interaction history brings into focus the origins of distinctions found today. Trajectories of past change can illuminate future evolution. However, change may not be linear. There are indications that discretion will rise in environments where it has not been critical, while decreasing in other aspects of digitally supported knowledge work.

Dropping prices and proliferating alternatives promote greater choice. Jobs focused strongly on production efficiency may not justify flashy hardware or stylish software while they are expensive, but as costs come down, resistance lessens. Even when managers are uncertain about the productivity gains of employees accessing the Web, a free browser is difficult to oppose. Similarly, ease of learning and use promotes greater choice—there is less cost involved in adding or changing applications.

Another force acting to promote discretion is the spread of hands-on technology use into management. Managers are less likely to mandate that people use technologies that they themselves find undesirable. Forbus describes a seismic shift in attitude in the military, which had funded his past work on speech recognition: Recognition systems were attractive when officers acquired them for others to use, but when senior officers became hands-on users…

“Our military users… generally flatly refuse to use any system that requires speech recognition,” he wrote [13], and elaborated in a talk: “Over and over and over again, we were told ‘If we have to use speech, we will not take it. I don’t even want to waste my time talking to you if it requires speech...’ I have seen generals come out of using, trying to use one of the speech-enabled systems looking really whipped. One really sad puppy, he said ‘OK, what’s your system like, do I have to use speech?’ He looked at me plaintively. And when I said ‘No,’ his face lit up, and he got so happy.” [12] Forbus’s systems did incorporate speech recognition, but he had learned to deactivate it and conceal it from the very customers who once funded him to build it.

As systems are integrated across organizations, they must appeal to individuals who have a choice. In something of a paradox, however, in environments where everyone uses a system, forces build to reduce discretion. Efficiency is usually enhanced if everyone uses the same software and adopts the same conventions. This process began to be evident a decade or more ago. Before PCs were networked, people used word processors for text entry and editing and printed documents for distribution. In that world, they could use any word processor they liked, and any convention: I could emphasize by underlining, you by italicizing, someone else by bolding. With networking, we can pass documents around digitally and co-author more easily, but we need to agree on a word processor, and there is more pressure to adopt the same conventions. Similarly, if most of the people in a group adopt an online calendar to help in scheduling meetings, holdouts come under peer pressure, perhaps gentle and perhaps not, to do the same [30].

Brian Shackel [33] noted this reversal in titling a section of his history of HCI ‘From systems design to interface usability and back again.’ Initially, designers considered an entire system and spent little time on the ‘knobs and dials’ that affected operators. With the microcomputer, operator and output user were the same person (who also provided the programs, albeit by selecting rather than writing them), and the focus shifted to the interface. Now, with intranets providing a much higher level of digital interaction and interdependency, it is possible and useful to think again at the system level, but in far greater detail. In optimizing for systemic efficiency, individual discretion is (ideally willingly) curtailed. When there were few cars on the road, there were no driving tests or traffic regulations; drivers had total discretion. As traffic picked up, discretion was curtailed for the common good.

Although digitally mediated interaction promotes wider acceptance of tools and conventions, it also facilitates expression of preference. Workers who use technology at home will press for equally capable systems at work. Discretion is still exercised, but groups and organizations make the choices collectively, through processes that vary in inclusiveness and effectiveness.

Another force bringing these threads together is that some situations by their nature curtail the range of choice. Some people with physical disadvantages are happy to take the time to train a recognition system when others would not. Product developers intent on expanding a market or providing services to the disadvantaged address the challenges of building systems for these users.


Conclusion


This look at HCI history began with several puzzles. How do we reconcile the century-long history of human factors and ergonomics with the burst of visionary ideas and prototypes unleashed by the arrival of transistor-based computers? How should we think about the staggering expenditures on recognition and language technologies that have little marketplace acceptance? Why did work on GOMS and its progeny crest in the mid-80s? How do we explain the separate worlds of human-computer interaction reflected in the CHI and INTERACT conferences on the one hand and NSF and DARPA on the other? What is the context of the growing emphasis on design in HCI? This paper sets out a framework for considering these issues.

It is noteworthy that the most prominent method of HCI and the behavioral sciences, the laboratory study, is not conducive to studying choice. In almost all cases, the choice of using a technology was made for participants in advance. HCI terminology focused on ‘novice users,’ ‘occasional users,’ and ‘expert users’; in all cases, use was presupposed. Some exceptions are found [e.g., 15, 24], but in general researchers have resorted to other approaches, such as market research and ethnography, which have their own problems and challenges. Perhaps we can do better here.

Other important issues remain. What is or should be the scientific foundation of human-computer interaction? Perception, cognition, and computer science have played roles. Social psychology and organizational science, certainly. Anthropology has contributed methodologically. Developmental psychology? Emotion? Economics? Ten to twenty years ago the roles of science and engineering in HCI, and the merits of hard vs. soft science contributions, were actively debated [2, 7, 20, 25]. Revisiting such discussions would merit another historical survey.

A useful step would be to look more closely at the nuts and bolts of design when various levels of discretion in use are the target. Which methods and goals remain the same, which change, what tradeoffs become more and less critical? In the meantime, I hope this leaves practitioners with another way to think about the people they design for and leaves researchers with a richer understanding of the field to which they contribute.


ACKNOWLEDGMENTS


I would like to thank Brian Shackel, Phil Barnard, Andrew Dillon, John Thomas, and Steve Poltrock for assistance, although they are not responsible for omissions or errors.

REFERENCES


  1. Baecker, R., Grudin, J., Buxton, W., & Greenberg, S. (1995). Readings in human-computer interaction: Toward the year 2000. Morgan Kaufmann.

  2. Barnard, P. (1991). Bridging between basic theories and the artifacts of human-computer interaction. In J.M. Carroll (Ed.), Designing interaction: Psychology at the human-computer interface, 103-127. Cambridge University Press.

  3. Bennett, J.L. (1979). The Commercial impact of usability in interactive systems. In B. Shackel (Ed.), Man-computer communication, Infotech State-of-the-Art, Vol. 2. Pergamon-Infotech.

  4. Bush, V. (1945). As we may think. The Atlantic Monthly, Vol. 176, 101-108.

  5. Card, S.K., Moran, T.P., & Newell, A. (1980). Keystroke-level model for user performance time with interactive systems. Communications of the ACM, 23, 7, 396-410.

  6. Card, S.K. & Moran, T.P. (1986). User technology: From pointing to pondering. Proc. conf. on history of personal workstations, 183-198. ACM.

  7. Carroll, J.M. & Campbell, R.L. (1986). Softening up hard science: Response to Newell and Card. Human-computer interaction, 2, 3, 227-249.

  8. Dyson, F. (1979). Disturbing the universe. Harper & Row.

  9. Engelbart, D. (1963). A conceptual framework for the augmentation of man’s intellect. In P. Howerton & D. Weeks (Eds.), Vistas in information handling, Vol. 1, 1-29. Spartan Books.

  10. Engelbart, D. & English, W. (1968). A research center for augmenting human intellect. AFIPS Conference Proceedings 33, 395-410.

  11. Fano, R. & Corbato, F. (1966). Time-sharing on computers. Scientific American 214(9), 129-140.

  12. Forbus, K. (2003). Sketching for knowledge capture. Lecture, May 2.

  13. Forbus, K. D., Usher, J., & Chapman, V. (2003). Qualitative spatial reasoning about sketch maps. Proc. Innovative Applications of AI.

  14. Greenbaum, J. (1979). In the name of efficiency. Temple University.

  15. Grudin, J. & MacLean, A. (1984). Adapting a psychophysical method to measure performance and preference tradeoffs in human-computer interaction. Proc. INTERACT’84, 338-342. North Holland.

  16. Kay, A. (1969). The reactive engine. Ph.D. Thesis, University of Utah.

  17. Kay, A. & Goldberg, A. (1977). Personal dynamic media. IEEE Computer 10(3), 31-42.

  18. Licklider, J. (1960). Man-computer symbiosis. IRE Transactions on Human Factors in Electronics, HFE-1, 1, 4-11.

  19. Licklider, J. & Clark, W. (1962). On-line man-computer communication. AFIPS Conference Proceedings 21, 113-128.

  20. Long, J. (1989). Cognitive ergonomics and human-computer interaction. In J. Long & A. Whitefield (Eds.), Cognitive ergonomics and human-computer interaction, 4-34. Cambridge University Press.

  21. Martin, J. (1973). Design of man-computer dialogues. Prentice-Hall.

  22. Nelson, T. (1965). A file structure for the complex, the changing, and the indeterminate. Proc. ACM National Conference, 84-100.

  23. Nelson, T. (1973). A conceptual framework for man-machine everything. Proc. National Computer Conference, M21-M26.

  24. Newell, A., Arnott, J., Dye, R. & Cairns, A. (1991). A full-speed listening typewriter simulation. Int. J. Man-Machine Studies, 35, 2, 119-131.

  25. Newell, A. & Card, S.K. (1985). The prospects for psychological science in human-computer interaction. Human-computer interaction, 1, 3, 209-242.

  26. Norman, D.A. (1983). Design principles for human-computer interfaces. Proc. CHI’83, 1-10. ACM.

  27. Norman, D.A. (2003). Emotional design: Why we love (or hate) everyday things. Basic.

  28. NSF 93-2: Interactive Systems Program Description. January 13, 1993.

  29. NSF IIS COV Report, July 2003.

  30. Palen, L. & Grudin, J. (2002). Discretionary adoption of group support software: Lessons from calendar applications. In B.E. Munkvold, Implementing Collaboration Technologies in Industry, 159-180. Springer Verlag.

  31. Shackel, B. (1959). Ergonomics for a computer. Design, 120, 36-39.

  32. Shackel, B. (1962). Ergonomics in the design of a large digital computer console. Ergonomics, 5, 229-241.

  33. Shackel, B. (1997). Human-computer interaction: Whence and whither? JASIS, 48, 11, 970-986.

  34. Sheil, B.A. (1981). The psychological study of programming. ACM Computing Surveys, 13, 1, 101-120.

  35. Shneiderman, B. (1980). Software psychology: Human factors in computer and information systems. Winthrop.

  36. Smith, H.T. & Green, T. R. (1980). Human interaction with computers. Academic.

  37. Smith, S.L. & Mosier, J.N. (1986). Guidelines for designing user interface software. ESD-TR-86-278. Bedford, MA: MITRE.

  38. Sutherland, I. (1963). Sketchpad: A man-machine graphical communication system. AFIPS 23, 329-346.

  39. Taylor, F.W. (1911). The principles of scientific management. Harper.

  40. Thomas, J. (2003). Email sent October 3.

  41. Weinberg, G. (1971). The psychology of computer programming. Van Nostrand Reinhold.





