Hci Definition of: hci



Download 97.61 Kb.
Date17.05.2017
Size97.61 Kb.
#18485
HCI

Definition of HCI


Human-computer interaction (HCI) is the study, planning and design of the interaction between people (users) and computers. It is neither just the study of humans nor just the study of technology: it is the bridge between the two.
Explanation:
Human-computer interaction refers to the design and implementation of computer systems that people interact with. It includes desktop systems as well as embedded systems in all kinds of devices. HCI is the study of how people interact with computers and of the extent to which computers are or are not developed for successful interaction with human beings.

An important facet of HCI is securing user satisfaction; human factors such as computer user satisfaction are therefore central to the field. HCI is also sometimes referred to as man-machine interaction (MMI) or computer-human interaction (CHI). Attention to human-machine interaction is important because poorly designed human-machine interfaces can lead to many unexpected problems.

Goals:

A basic goal of HCI is to improve the interaction between users and computers by making computers more usable and better matched to users' needs.


Interface Problems
Since the human and the computer do not recognize the same concepts (do not speak the same language), interfaces cause problems.


Human Factors:
Physical environment

Health issues

Use of colour
Physical environment & health issues:
Unsatisfactory working conditions can at best lead to stress and dissatisfaction and at worst harm workers' health. Some factors to consider:
Physical position should be comfortable

Temperature should not be extreme

Lighting should be low-glare & sufficient

Noise should not be excessive; high levels hamper perception

Time at the computer should be broken up by regular rest breaks
Colour Vision
The eye contains millions of photoreceptors sensitive to light

Two types of photoreceptors:


Rods
Not sensitive to colour

High density at periphery

Highly sensitive

Low resolution


Cones
Sensitive to colour; different cones respond to red, green and blue light

High density in the centre

Less sensitive; can tolerate bright light
How are colours generated?
CRT screens generate colour by the addition of red, green and blue light.
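A minimal sketch of this additive mixing, using 8-bit channel values as on a typical display (the mix() helper and the channel values are invented for illustration):

```python
# A minimal sketch of additive RGB colour mixing with 8-bit channels
# (0-255), as used by CRTs and most other displays. The mix() helper
# and the channel values are invented for illustration.

def mix(*colours):
    """Additively mix RGB colours, clamping each channel at 255."""
    return tuple(min(255, sum(c[i] for c in colours)) for i in range(3))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

print(mix(RED, GREEN))        # (255, 255, 0)   -> yellow
print(mix(RED, GREEN, BLUE))  # (255, 255, 255) -> white
```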
Use of Colour
Color and accessibility are indelibly linked to one another; bad color combinations create bad user environments. Bright colors can signal to users that they are doing the right thing or the wrong thing. Color can be used as a grouping method or to draw attention to certain aspects of the system.




Some common issues
Some common issues with color use are too many colors, complementary colors placed too close together, excessive saturation, inadequate contrast and inadequate attention to color impairment. Color can also be used to create images that appear 3-D.
Colour is a powerful cue, but it is easy to misuse.

It should not be applied just because it is available.


Colour Principles & Guidelines

Always have some other redundant cue (do not rely on colour alone)

Use known optimal colour combinations

Include a bright colour in the foreground

Best background: black

Worst backgrounds: brown or green

Use colour to group/highlight information

Use colour to support search tasks

Allow customization

Ensure colours differ in lightness (aids colour-blind users); a simple contrast check is sketched after this list

Limit colour to eight distinct colours; four preferred

Avoid saturated blues for text

Choose foreground and background colour with care

Remember that colours are hard to distinguish when objects are small or far away
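As a rough illustration of the lightness and contrast guidelines above, the sketch below computes the contrast ratio between foreground and background colours using the WCAG 2 relative-luminance formula (the colour pairs tested are invented examples; WCAG recommends a ratio of at least 4.5:1 for normal text):

```python
# A minimal sketch of a foreground/background contrast check using the
# WCAG 2 relative-luminance formula. The colour pairs are invented
# examples; WCAG recommends a ratio of at least 4.5:1 for normal text.

def linearise(c):
    """Convert an 8-bit sRGB channel to its linear value."""
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (linearise(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(contrast_ratio((255, 255, 0), (0, 0, 0)))  # yellow on black: ~19.6
print(contrast_ratio((0, 0, 255), (0, 0, 139)))  # blue on dark blue: ~1.8
```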

COLOUR:

Materials should be designed in shades of gray, black and white first, with color added later in a fashion which adds to instructional effectiveness. Here are the reasons why this is good advice:



  • Many people suffer from some type of color deficiency, ranging from weakness in certain colors, mainly red and green, to full loss of color (it is estimated that about 8% of men and 0.5% of women experience some type of color deficiency).

  • Aging also affects the perception of colors.

  • Users may be accessing your design via monochrome monitors - if important distinctions are shown by varying colors, this information will not be available to these users.

Guidelines Based on the Physiological Properties of Color:


Murch discussed screen color use based on the physiological properties of the eye, discussing how the number and distribution of rods and cones in the eye affected the perception of line and color.

Uses of affective and structural colors:


Schaeffer & Bateman discuss color in terms of its affective role and its structural uses:

The affective role of color concerns how color can be used to motivate or generate an emotional response.

Structural uses for color involve assigning functional meanings to various colors - menu items in one color, instructions in another color and error messages in a third color so that color can help the user to differentiate between the functions of various text messages. Color can also be used to attract the user's attention to convey messages which must be addressed quickly.

Pett and Wilson list the following suggested uses of color:



  • Use color to add reality.

  • Use color to discriminate between elements of a visual.

  • Use colors to code and link logically related elements.

  • Use highly saturated colors for materials intended for young children.

  • Consider commonly accepted color meanings, such as the traffic-light convention of red, yellow and green, where red means stop and green means go.

A classic example of what not to do when picking background and text colors is blue text on a blue background: with so little lightness contrast between foreground and background, the text is barely readable.

The bottom line on color:


  • Design conservatively, possibly starting with black and white.

  • Do not make color the only way to discriminate between choices.

  • Use it appropriately to serve the purposes of clarity and functionality.

The GUI's Impact on Society:

A graphical user interface (GUI) is a human-computer interface (i.e., a way for humans to interact with computers) that uses windows, icons and menus and that can be manipulated by a mouse. The GUI as we know it today became a standard in human-computer interaction and has influenced the work and communication of a generation of computer-literate users. It has become a transparent layer that any PC user relies on.

The GUI also helped to develop a whole new industry for publishers and designers in desktop publishing, while partly wiping out the traditional print and typesetting industry.
In contrast to the fast-developing hardware market, the GUI has until today not evolved or changed very much, considering what would be possible. Its paradigms, such as the desktop metaphor, drop-down menus and overlapping windows, are still the same.

Common Features:


Users get used to an operating system: commands are similar, and the same keys and clicks perform the same tasks, so learning is achieved faster across different applications. Screen layout is consistent, as are menus, dialogue boxes and error messages; customisation works similarly across applications, and on-line help is offered in a similar way. In business, users are much more efficient in their work when using such common facilities.

Control:

The GUI controls the hardware, i.e. the use of memory, storage and printers. It can influence how the user interacts with the program they are working on. Being able to Open, Save, Print and use Help in a word processor means the user can do the same in a spreadsheet, a database, etc., without any further training. This makes it much easier to transfer skills from one application to another.


Designing for HCI

Explaining the impact of computers on design as an activity, Hoffman, Roesler and Moon (2004) offer the following:

The older design credo “form follows function” has become obsolete. Artifacts now might not look like what they do, in part because their inner makings have shifted from a mechanical base to an electronic one. Much of the semantic coding in artifacts gets lost to the human who looks at the artifact, and designing meaningful artifacts for human–machine interaction becomes necessary to channel the vast growth in the belief that intelligent systems would provide means for collaborative technology. At the same time, the computer has entered the design office as a tool that has challenged traditional design expertise and extended the quest in defining what the activity of design entails. (Lore is that most design work is now done using computers.) Designers face challenges in designing new technologies, and they have to design with these new technologies. And they have to do so at an accelerated pace, using designer-centered technologies that require kludges, work-around, and make-work.

Typical HCI-related design tasks/elements



  1. User needs analysis

    • Define the analysis framework and methodology

    • Define contents and concepts

    • Acquire and categorize mental representations

  2. Define the interface "language"

  3. Prototype creation

  4. Usability and cognitive ergonomics testing

Interaction design

In a computer-based environment an interaction can be defined as “the representations and operations of an application system by considering what representations the user needs to interact with, through what operations.” (Yamamoto and Nakakoji, 2005)

Interaction design principles

Vita Hinze-Hoare formulated four fundamental design principles. Her theory is based on an analysis of Dix (2003), Shneiderman (1998) and Preece (1994), discussion with peers and a user survey:



  • Learnability/Familiarity: for example, reduce short term memory load, ensure ease of understanding and guessability, make operations visible, use appropriate metaphors.

  • Ergonomics/Human Factors: for example, allow for flexible input (such as menus, shortcuts, panels), support multiple modes of communication, design for user growth

  • Consistency/Standards: for example, likeness in behavior, consistent and clear user interface elements

  • Feedback/Robustness: give an appropriate quantity of response, offer informative feedback, let the user recover from errors or dead-ends, ensure stability, task completeness and adequacy, respond in time.

Ladly outlines some guidelines in designing interactions:

  • Visibility - knowing the state of an object and the choices available

  • Feedback - timely, in an appropriate mode (aural, visual, etc.), yet not distracting from task

  • Affordance - use objects whose actual properties are in accordance with their perceived properties (e.g. an icon depicting a switch should turn something on or off)

  • Mapping - make use of the relationship between objects and their environment (e.g. placing a menu bar at the top of an application window)

  • Constraints - limit the possible interactions physically, semantically (context-related meaning), logically, or culturally (learned conventions)

  • Habituation - the use of the system should become internalized to the point that the user only thinks of the task, not the system

A cognitive interactive interface should invoke and respond to only one action from the user at a time. (Ladly, 2004)

HCI design approaches

One view of design is that it is an activity that aims to solve contextual problems systematically (Hoffman, Roesler & Moon, 2004). Approaches include:


  • Top-down or hierarchical problem solving - working from the functional level down to the specifics, working out issues and problems as they arise

  • Design by reuse - use of previous designs that are based on similar situations

  • Design problem evolution - recognition and relaxation of assumptions thus engaging in a redefinition of the problem in cycles that involve planning, translating and revising in order to optimize a system so that it can satisfy diverging and contradictory requirements

  • Design by deliberative recognition-priming - use of previous conceptual knowledge and experience to recognize useful patterns to by-pass hierarchical processes

  • Design by serendipitous recognition-priming - ideas that arise from opportunistic comparisons and analogies not necessarily directly related to the design problem.

  • Design by collaboration and confrontation - team-based design based on collaboration and confrontation activities.

Story-based design

Tom Erickson (1995) outlines some ways in which storytelling can be used as a tool for designing human-computer interactions. Stories reveal users' experiences, desires, fears and practices that can in turn drive effective user-centered design. He points out that stories, in contrast to scenarios, involve real people in particular situations and consequently involve unique histories, motivations and personalities.



  • Story gathering - gathering users' stories on the users' domain (a culturally, socially and physically situated environment) thereby collecting and building a shared language, referents and questions and issues to be addressed.

  • Story making - building 'scenario-like' stories that capture emerging common concepts and details from users' stories

  • Involving users - using stories with users to elicit dialog and discussions that bring essential ideas and problems to light that should be considered in the design.

  • Transferring design knowledge - being highly memorable, yet still subject to the uncertainty entailed in the particular being applied to the whole, “stories become important as mechanisms for communicating with the organization to support design transfer”, by “capturing both action and motivation, both the what and the why of the design” (Erickson, 1995)

Personas in interaction design

Design of an interaction sets the conditions in which a conversation between a user and a system will take place. The system needs to speak and respond to the user. To envision more effectively how such a conversation may proceed, interaction designers determine user personas. Personas are defined models of intended and potential user types. These models can be defined through ethnographic research practices such as observation, interviews or direct user-testing with sample target users. Personas are widely used in user-centered design approaches.


HCI Design Approaches


Eberts (1994) describes four Human-Computer Interaction (HCI) design approaches that may be applied to user interface designs to develop user-friendly, efficient, and intuitive user experiences for humans. These four approaches include the Anthropomorphic Approach, the Cognitive Approach, the Predictive Modeling Approach, and the Empirical Approach. One or more of these approaches may be used in a single user interface design.

Anthropomorphic Approach


The anthropomorphic approach to human-computer interaction involves designing a user interface to possess human-like qualities. For instance, an interface may be designed to communicate with users in a human-to-human manner, as if the computer empathizes with the user. Interface error messaging is often written this way, such as, “We’re sorry, but that page cannot be found.” Another example is the use of avatars in computer-based automation, as can be found in automated telephony systems. For example, when a voice-response system cannot understand what the user has spoken, after several attempts it may reply in an apologetic tone, “I’m sorry, I can’t understand you.”

Affordances


Human affordances are perceivable potential actions that a person can do with an object. In terms of HCI, icons, folders, and buttons afford mouse-clicking, scrollbars afford sliding a button to view information off-screen, and drop-down menus show the user a list of options from which to choose.  Similarly, pleasant sounds are used to indicate when a task has completed, signaling that the user may continue with the next step in a process. Examples of this are notifications of calendar events, new emails, and the completion of a file transfer.

Constraints


Constraints complement affordances by indicating the limitations of user actions. A grayed-out menu option and an unpleasant sound (sometimes followed by an error message) indicate that the user cannot carry out a particular action. Affordances and constraints can be designed to non-verbally guide user behaviors through an interface and prevent user errors in a complex interface.

Cognitive Approach


The cognitive approach to human-computer interaction considers the abilities of the human brain and sensory-perception in order to develop a user interface that will support the end user.

Metaphoric Design


Using metaphors can be an effective way to communicate an abstract concept or procedure to users, as long as the metaphor is used accurately. Computers use a “desktop” metaphor to represent data as document files, folders, and applications. Metaphors rely on a user’s familiarity with another concept, as well as human affordances, to help users understand the actions they can perform with their data based on the form it takes. For instance, a user can move a file or folder into the “trashcan” to delete it.

A benefit of using metaphors in design is that users who can relate to the metaphor are able to learn to use a new system very quickly. A potential problem can ensue, however, when users expect a metaphor to be fully represented in a design, and in reality, only part of the metaphor has been implemented. For example, Macintosh computers use the icon of a trashcan on the desktop, while PCs have a recycle bin. The recycle bin does not actually “recycle” the data; instead it behaves like the Macintosh trash can and is used to permanently delete files. On the other hand, in order to eject a mounted disc on a Macintosh, the user must drag the icon of a CD-ROM to the trashcan. When this was first introduced, it was confusing to users because they feared losing all the data on their CD-ROM. In more recent versions of the Mac OS, the trashcan icon turns into an eject symbol when the user drags a mounted disc to the trashcan. This does not make the metaphor flawless, but it does prevent some user confusion when they are ejecting the mounted disc.


Attention and Workload Models


When designing an interface to provide good usability, it is important to consider the user’s attention span, which may depend on the environment of use, and the perceived mental workload involved in completing a task. Typically, users can focus well on one task at a time. For example, when designing a web-based form to collect information from a user, it is best to collect contextually related information separately from other information. The form may be divided into “Contact Information” and “Billing Information”, rather than mixing the two and confusing users.

By “chunking” this data into individual sections or even separate pages when there is a lot of information being collected, the perceived workload is also reduced. If all the data were collected on a single form that makes the user scroll the page to complete, the user may become overwhelmed by the amount of work that needs to be done to complete the form, and he may abandon the website. Workload can be measured by the amount of information being communicated to each sensory system (visual, auditory, etc.) at a given moment. Some websites incorporate Adobe Flash in an attempt to impress the user. If a Flash presentation does not directly support a user’s task, the user’s attention may become distracted by too much auditory and visual information.
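As a rough sketch of this chunking idea (the section titles follow the example above; the field names and the presentation loop are invented for illustration):

```python
# A minimal sketch of "chunking" a long form into contextual sections,
# which could then be rendered as separate steps or pages. The section
# titles follow the example in the text; the field names are invented.

FORM_SECTIONS = [
    {"title": "Contact Information",
     "fields": ["name", "email", "phone"]},
    {"title": "Billing Information",
     "fields": ["card_number", "expiry_date", "billing_address"]},
]

# Present one section at a time to keep the perceived workload low.
for step, section in enumerate(FORM_SECTIONS, start=1):
    print(f"Step {step} of {len(FORM_SECTIONS)}: {section['title']} "
          f"({len(section['fields'])} fields)")
```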



Overloading the user’s memory is another common problem on websites.  For example, when there are too many options to choose from, a user may feel overwhelmed by the decision they have to make, become frustrated, and leave the website without completing their goal.

Human Information Processing Model


Human Information Processing (HIP) Theory describes the flow of information from the world, into the human mind, and back into the world. When a human pays attention to something, the information first gets encoded based on the sensory system that channeled the information (visual, auditory, haptic, etc.). Next, the information moves into Working Memory, formerly known as Short-Term memory. Working Memory can hold a limited amount of information for up to approximately 30 seconds. Repeating or rehearsing information may increase this duration. After Working Memory, the information may go into Long-Term Memory or simply be forgotten. Long-Term Memory is believed to be unlimited, relatively permanent memory storage. After information has been stored in long-term memory, humans can retrieve that information via recall or recognition. The accuracy of information recall is based on the environmental conditions and the way that information was initially encoded by the senses. If a human is in a similar sensory experience at the time of memory recall as he was during the encoding of a prior experience, his recall of that experience will be more accurate and complete.

Empirical Approach


The empirical approach to HCI is useful for examining and comparing the usability of multiple conceptual designs. This testing may be done during pre-production by counterbalancing design concepts and conducting usability testing on each design concept. Often, users will appreciate specific elements of each design concept, which may lead to the development of a composite conceptual design to test.

Human Task Performance Measures


In addition to a qualitative assessment of user preferences for a conceptual design, measuring users’ task performance is important for determining how intuitive and user-friendly a web page is. A researcher who is familiar with the tasks the web page has been designed to support will develop a set of test tasks that relate to the task goals associated with the page. Users may be given one or more conceptual designs to test in a lab setting to determine which is more user-friendly and intuitive. User performance can be assessed absolutely, i.e., the user accomplishes or fails to complete a task, as well as relatively, based on pre-established criteria. For instance, it may have been determined that users should be able to register for an account within five minutes, and with no more than two errors. If the researcher observes otherwise, and even if the user finally completes the task (perhaps after fifteen minutes and five errors), the time and number of errors may be compared to the desired standard as well as to the alternate conceptual design for the web page.
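A minimal sketch of such a criteria-based assessment, reusing the illustrative thresholds above (register within five minutes, with at most two errors); the data structure and function names are invented:

```python
# A minimal sketch of comparing observed task performance against
# pre-established criteria. The thresholds reuse the illustrative
# figures in the text (5 minutes, 2 errors); the names are invented.

from dataclasses import dataclass

@dataclass
class TaskResult:
    completed: bool
    time_seconds: float
    errors: int

MAX_TIME_SECONDS = 5 * 60
MAX_ERRORS = 2

def assess(result: TaskResult) -> str:
    if not result.completed:
        return "fail: task not completed"
    if result.time_seconds > MAX_TIME_SECONDS or result.errors > MAX_ERRORS:
        return "completed, but below the pre-established criteria"
    return "pass"

# A user who finally registers after 15 minutes and 5 errors:
print(assess(TaskResult(completed=True, time_seconds=15 * 60, errors=5)))
```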

A/B Testing


If two of three design concepts were rated highly during user testing, it may be advantageous to conduct an A/B Test during post-production. One way to do this is to set up a Google Analytics account, which allows a researcher to set up multiple variations of a web page to test. When a user visits the website, Google will display one variation of the web page according to the end user’s IP address. As the user navigates the website, Google tracks the user’s clicks to see if one version of the web page produces more sales than another version. Other “conversion” goals may be tracked as well, such as registering a user account or signing up for a newsletter.
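Whatever tool collects the data, the underlying comparison is statistical: does one variation convert reliably more often than the other? A minimal sketch of one common check, a two-proportion z-test, with visit and conversion counts invented for illustration:

```python
# A minimal sketch of evaluating an A/B test with a two-proportion
# z-test: did variation B convert reliably more often than A? The
# visit and conversion counts are invented for illustration.

from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    return z, p_value

z, p = two_proportion_z(conv_a=120, n_a=2400, conv_b=156, n_b=2350)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 suggests a real difference
```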

Predictive Modeling Approach


GOMS is a method for examining the individual components of a user experience in terms of the time it takes a user to complete a goal most efficiently. GOMS is an acronym that stands for Goals, Operators, Methods, and Selection Rules (Card, Moran, & Newell, 1983). Goals are defined as what the user desires to accomplish on the website. Operators are the atomic-level actions that the user performs to reach a goal, such as motor actions, perceptions, and cognitive processes. Methods are procedures that include a series of operators and sub-goals that the user employs to accomplish a goal. Selection Rules refer to a user’s personal decision about which method will work best in a particular situation in order to reach a goal.

The GOMS model is based on human information processing theory, and certain measurements of human performance are used to calculate the time it takes to complete a goal. For example, the average time it takes a human to visually fixate on a web page, move eye fixation to another part of the web page, cognitively process information, and make a decision about what to do next can be measured in milliseconds. The times for each of these operators can be added up to produce the total time for a particular method. Multiple methods can then be compared on total time to complete a task, in order to determine which method is the most efficient.
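A minimal sketch of such a comparison, using the commonly cited keystroke-level model (KLM) operator-time estimates from Card, Moran and Newell; the two "save the document" methods compared are invented examples:

```python
# A minimal sketch of a keystroke-level GOMS comparison. The operator
# times are the commonly cited KLM estimates (in seconds) from Card,
# Moran & Newell; the two "save the document" methods are invented.

OPERATOR_TIME = {
    "K": 0.20,  # keystroke or button press
    "P": 1.10,  # point with the mouse to a target
    "H": 0.40,  # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def method_time(operators):
    return sum(OPERATOR_TIME[op] for op in operators)

# Method 1: keyboard shortcut (prepare mentally, press Ctrl+S).
shortcut = ["M", "K", "K"]
# Method 2: menu (prepare, home to mouse, point at File, click,
# prepare, point at Save, click).
menu = ["M", "H", "P", "K", "M", "P", "K"]

print(f"shortcut: {method_time(shortcut):.2f} s")  # 1.75 s
print(f"menu:     {method_time(menu):.2f} s")      # 5.70 s
```

On these estimates the keyboard shortcut is roughly three times faster, which is the kind of result a GOMS analysis is designed to surface before any user testing takes place.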



HCI in children according to research:
“Children are increasingly using computer technologies as reflected in reports of computer use in schools in the United States. Given the greater exposure of children to these technologies, it is imperative that they be designed taking into account children's abilities, interests, and developmental needs. This survey provides an overview of current research trends in the field of interaction design and children and identifies challenges for future research.

To understand children's developmental needs it is important to be aware of the factors that affect children's intellectual development. The survey analyzes the relevance of constructivist, socio-cultural, and other modern theories with respect to the design of technologies for children. It also examines the significance of research on children's development in terms of perception, memory, symbolic representation, problem solving, and language. Since interacting with technologies most often involves children's hands, the survey also reviews literature on children's fine motor development, including manipulation and reaching movements. Just as it is important to know how to aid children's development, it is also crucial to avoid harming development: the survey summarizes research on how technologies can negatively affect children's physical, intellectual, social, emotional, and moral development.

Following is a review of design methodologies for children's technologies organized based on the roles children may play during the design process, including a description of cooperative inquiry and informant design methods. This is followed by a review of design principles obtained through experiences in developing technologies for children as well as research studies. It includes design principles related to visual design (e.g., icons, visual complexity), interaction styles (e.g., direct manipulation, menus), and the use of input devices (e.g., pointing, dragging, using mouse buttons). The latter half of the survey summarizes research trends in the field of interaction design and children, grouping research efforts in the following areas: supporting creativity and problem solving; supporting collaboration and communication; accessing, gathering and exploring content; learning from simulations; supporting children with special needs; interacting with intelligent characters; supporting healthy lifestyles; learning skills; mobile, tangible, and ubiquitous computing; and designing and evaluating technologies.”

THE PAST of hci:


In 1970, Alan Kay arrived at the just-formed Xerox PARC inspired by his vision of a laptop computer for ordinary users. Back then, the personal computer was a dream shared by a few wild souls. There were a handful of minicomputers, but those machines were for engineers and scientists, of course. Kay and other PARC engineers started developing computers with the extraordinary idea of giving them to ordinary people. Kay was also working on Smalltalk, leading to Smalltalk-72 soon after. His laptop-style Dynabook was infeasible in the 1970s, but the group did produce the Xerox Alto desktop computer in 1973. The Alto had a mouse, Ethernet, and an overlapping window display. It was a technical marvel, but not necessarily easy to use. There was mouse functionality, but it was mostly a “text-oriented” machine. While the Alto was developed for ordinary users, it was not clear at the time what that market really looked like. Most Altos appear to have been sold or given away to engineering labs.

In 1976 Don Massaro from Xerox’s office products division pushed ahead a personal computer concept for office environments called the Star. A separate development division was created for the Star, headed by David Liddle; it worked closely with PARC but was not part of PARC. The Star is rightfully cited as the first “modern” WIMP computer. It’s impossible to look at screenshots of the Star, or to actually use one, without being struck by how good it is compared with what came after. Liddle quipped that Star was “a huge improvement over its successors.” It’s not just its execution of the WIMP interface and desktop metaphor, but its remarkably clean and consistent “object-orientedness”: right-button menus, controls, and embeddable objects today are a rather clumsy echo of Star’s design.

The most remarkable aspect of Star, however, is the process its designers used to develop it, which has been widely imitated and which made good interface design a reproducible process. Liddle’s first step was to review existing development processes with the help of PARC researchers and produce a best-practices document that Star would follow. It included task analysis, scenario development, rapid prototyping, and users’ conceptual models. Much of the design evolution happened before any code was written. Code development itself consisted of many small steps with frequent user testing. It was a textbook example (and it’s in Terry Winograd’s 1996 landmark textbook, Bringing Design to Software) of user-centered design.

Even the Alto had followed a much more classical design process. It was enough to put the Alto in the right ballpark, but that machine feels like it’s from a completely different era. The Star knew what it was trying to be, and included a good suite of office software. For reasons that almost surely had nothing to do with its interface or application design, it failed in the marketplace, while its close reincarnation in the Macintosh was a huge success. The lesson is that good mass-market design requires a user-centered design process, and it often involves real social scientists or usability experts, as well as engineers.

The Star design was so good that HCI researchers are regularly the brunt of “Star backlash.” It goes something like this: “HCI hasn’t produced major innovations in the last 20 years; the WIMP interface today is almost identical to what it was in the 1980s.” In many of the “technical arts,” that would be a compliment. In computing, we have 20-year-old artifacts in museums and call them “dinosaurs.” But it’s wrong to apply that thinking to HCI. Humans are the key element in human-computer interaction. As a species, people don’t evolve that fast, and we often take years to learn things well. We have interface conventions in automobiles as well; it’s just not good to “innovate” with those. For the time being, we can’t “reflash” people with an upgrade, so let’s not go there. The amazing thing is that when you execute the human-centered design process well, you get a design that endures for decades. Multiple generations can learn it and become computer-empowered without worrying about losing that skill later.

For the same reason, when you design something new, it’s much better to copy every well-known convention you can find than to make up a new one. As Picasso said, “Good artists borrow from the work of others, great artists steal.” So good HCI design is evolutionary rather than revolutionary.

Finally, there is an overall lesson to take away from these two systems. The modern popular computer required two kinds of innovation: free-wheeling, vision-driven engineering, often technology-centered but ideally informed by high-level principles of human behavior (Alto); and careful, context-driven, human-centered design evolution (Star). That’s a critical point. You need truly creative design and engineering to conceive and execute a radically new idea, but innovation also requires validation. In HCI, validation means that it works well with real users, and for that to happen, human-centered design evolution must take place. Innovation in the product is a nice virtue, but it’s optional in terms of marketability. Usability is not.

THE PRESENT of hci:


It sounds like everything is apples so far. User-centered design works well, we have good office information systems, HCI is a solid discipline (if unexciting because we still like those breakthroughs every few years). So why write an article on the future of HCI, and more to the point, why should you read it? The beef is that IT is not just about office work any more. It’s going everywhere. Because of that, we’re due for another revolution (in fact, probably several) in HCI over the next few years.

Intel recently reorganized itself to align with the major market sectors for Intel PCs today. Those sectors are office, home, medical, and mobile. That’s a lot of PCs in new places, and they’re almost all running a Star-style WIMP interface.

Global cell phone sales are now running at 800 million units per year, about four times the annual sales of PCs (or television sets). Recent years have seen 100 percent annual growth in overall phone sales, and close to 200 percent for smart phones. Sales are nearing saturation in developed countries, but are still accelerating in the developing world, which now dominates the market. Smart-phone sales are about 15 percent of the market now (around 100 million units), but with their faster growth should outnumber PCs by 2008. Smart phones today are about as powerful as a midrange PC from eight years ago, but they far outstrip such a PC in media performance. Although only a tiny amount of smart-phone software is around now, it is one of the fastest-growing sectors of the industry.

A small army of gadgets is fighting for dominance in your living room. If you have a state-of-the-art cable box (which will also record 40 hours of hi-def TV), you know it has all the hardware needed to connect to any conceivable media device. It has an always-on Internet connection and automatic software upgrades that give it a powerful marketing edge: you’ll always get cool new services whether you ask for them or not. Microsoft and Apple have PC-like entries for this market, some high-end TVs include all this in the box, and then of course there are game boxes that pack most of those functions along with super-high-end graphics. I’ve made myself a guinea pig for this stuff, but it’s really a pain to use. The wireless keyboards, the cornucopia of remote controls, the on-screen letter-of-the-alphabet menus: it’s like those early “horseless carriage” steam automobiles that had reins.

THE FUTURE of hci:

The cell phone has a tiny screen with tiny awkward buttons and no mouse. From start to finish, it was designed for speech. The microphone and speaker are small but highly evolved, and the mic placement in its normal position is optimal for speech recognition. We’ll get to speech interfaces shortly. If it’s a smart phone, it probably also has a camera and a Bluetooth radio. It has some kind of position information, ranging from coarse cell tower to highly accurate assisted satellite GPS.

The other important piece of future interfaces should be “perception.” The simplest example is speech recognition, or more accurately, speech-based interfaces. Another example is computer vision. Smart phones are excellent speech platforms, as already noted, but most also have cameras and a respectable amount of CPU power, especially in their digital signal processors. They are more than capable of computer vision using either still images or video from their cameras. A simple example is barcode recognition, which is already available on some camera phones (both 2D and 1D barcode readers have appeared on commercial phones). OCR (optical character recognition) for business-card recognition is also available commercially. Many cell phones now support speech input for speed dialing or selecting a name from the phone book. Large-vocabulary interfaces for dictation appeared last year, and full continuous large-vocabulary recognition is on the way. The latter especially opens up whole new application possibilities for smart phones and may do much to break the usability barrier for these devices. Most of this technology was developed by Voice Signal.

James Crowley, who directs the GRAVIR (Graphics, Vision, and Robotics) laboratory at INRIA (French National Institute for Research in Computer Science and Control) Rhône-Alpes in France, is a leader in this area. A major challenge in high-level interpretation of human actions is context. Crowley and his colleagues have tackled this problem head-on by developing a rich model of context considering “situations” and “scenarios.” Gaetano Borriello, computer science professor at the University of Washington, leads us through some field tests of the Labscape system, which is intended as an efficient but unobtrusive assistant for cell biologists. In this setting, the users’ high-level activity is well defined, but the system has to use available clues from sensing to figure out where the user is and what resources are needed. In our final article, Jim Christensen and colleagues from IBM’s Thomas J. Watson Research Lab take a different approach to using context information. Whereas successful automatic context-aware systems are rare at this time, Christensen et al. argue for human interpretation of context information. They describe two systems that exemplify this approach: Grapevine, a system that mediates human-to-human communication to minimize inappropriate interruptions; and Rendezvous, a VoIP conference-calling solution that uses contextual information from corporate resources to enhance the user experience of audio conferencing. They also discuss some cogent issues related to user privacy in context-aware systems.

One report predicts that by 2020 the terms “interface” and “user” will be obsolete as computers merge ever closer with humans; it foresees fundamental changes in the field of so-called Human-Computer Interaction (HCI).

By 2020 humans will increasingly interrogate machines. In turn computers will be able to anticipate what we want from them, which will require new rules about our relationship with machines.


The report, Human-Computer Interaction in the Year 2020, looks at how the development of technologies over the next decade can better reflect human values.



"It is about how we anticipate the uses of technology rather than being reactive. Currently the human is not considered part of the process," said Bill Buxton, from Microsoft Research.

At Goldsmiths College, Professor Bill Gaver and his team have developed the Drift Table, a piece of furniture that allows people to view aerial photography of their local neighbourhood and beyond.

Other prototype technologies aimed at putting human needs at the centre of the equation include the Whereabouts Clock.

Smart devices

The keyboard, mouse and monitor will increasingly be replaced by more intuitive forms of interaction and display, including tablet computers, speech recognition systems and fingertip-operated surfaces. Boundaries between humans and computers will become blurred over the next decade as devices are embedded in objects, our clothing or, in the case of medical monitoring, in our bodies.

Although paper will still be a reality in 2020, digital paper will also flourish allowing us to create, for example, social network magazines that update in real time.

Digital storage of even more aspects of our lives, from mobile phone calls to CCTV footage, could be a reality by 2020 and, in combination with an omnipresent network, will mean that privacy becomes a key focus of the HCI community.


Conclusion:

In conclusion, in the process of describing the current state of HCI and things to come, this overview has shown that HCI has come a long way and promises to allow users to interact with computers in much the same way that they interact with the physical world. HCI design principles are formulated in order to help programmers in the design of user-friendly programs. Most of these principles are developed with regard to the complexity of a computer system and the limitations of humans.


HCI is still a subjective and changing field. The degree to which a goal or criterion is applicable changes from person to person and program to program. However, the goals presented here are meant to be broadly applicable, so that they encompass most interfaces in existence today. HCI will continue to change as computers and their uses change; it is an ever-evolving field, but it is always true to its one underlying aim: to make the computer more friendly and easier to use.




