Ubiquitous computing is considered to be the next major computing paradigm. Historically, there have been two other major models for computing: mainframe computing and today's current model, personal computing. Although the PC was a large improvement over the mainframe systems of the 70's and 80's, it is not an ideal system. This paper will explain why ubiquitous computing is beneficial, and possibly even necessary, for the computer industry to move toward. It will then give a brief history of ubiquitous computing and try to pin down where the field currently stands. A shift from personal computing to ubiquitous computing will create new problems in industry, and thus new job markets and research opportunities. Ubiquitous computing is still in its infancy; however, more and more of its requirements are coming to fruition. Since ubiquitous computing is user centric, people knowledgeable in Human Computer Interaction will be particularly valued in this area.
Ubiquitous Computing, Calm Computing, Human Computer Interaction, Moore's Law
By definition, ubiquitous means being or seeming to be everywhere at the same time. When applied to computing, this definition remains valid; however, its meaning is slightly altered. The key concept behind ubiquitous computing is that the computer becomes an omnipresent element of our physical environment without seeming intrusive. Ubiquitous computers should play an invisible role in people's daily lives and allow people to concentrate solely on the task at hand rather than on how to use a computer to do the task at hand.
Ubiquitous computing is considered the complete opposite of virtual reality. VR attempts to create a completely computer-generated world for people to act in; ubiquitous computing creates a world augmented by devices that people act upon. Both technologies have similar goals in that both attempt to create a scenario where the user interacts naturally with the computing environment; however, ubiquitous computing is a more natural and more practical way to accomplish this goal. The cost to implement ubiquitous computing is not great and is feasible with current technology, whereas implementing VR to a level where a user's presence in the virtual environment seems realistic is extremely costly, if not unattainable. Thus, ubiquitous computing will likely be cost effective and much more common.
Ubiquitous computing has also been termed calm computing and is considered to be the third wave of computing. The paradigm is fairly radical; however, it is not as radical as VR and has more potential to be realistically implemented on a large scale in the near future. Elements of mobile, pervasive, and distributed computing are currently encroaching upon some principles of ubiquitous computing. These are hotbeds of research, and in reality the lines between these disciplines have started to blur.
If this 'third wave' of computing is so different, potentially better, and more feasible, why has it not been implemented earlier? Why was it not the designated model for computing in the first place? The answer to these questions can be pinned directly upon the evolution of the computer itself, which has been reliant on research and innovation but also on economics and existing markets.
An Economic Justification for Ubiquitous Computing
The development of the personal computer can be linked to Moore's law, which has proved correct and given stability and profitability to the processor market since 1965. It can be seen as the driving principle behind why computer companies, particularly processor companies, can continue to manufacture and market new chips, which directly correspond to new computer systems. As long as a new line of faster, more powerful computers is released every 18 months, the market will continue to expand. However, if the increase in computing power stops, either because the need no longer exists or because of physical manufacturing limitations, one must question what becomes of the industry.
Moore's law has received a lot of attention in the press over time; however, a closer look at Moore's law and Moore's original paper shows that his statements have often been generalized. Moore never actually stated, "Computing power will double every year." Rather, he stated:
"The number of transistors per chip that yields the minimum cost per transistor has increased at a rate of roughly a factor of two per year."
Although Moore's real statement is arguably analogous to the layman's version, it is not the same, and this is an important distinction to make, because Moore's law can also be applied in support of ubiquitous computing. Moore's basic premise that transistor density goes up while cost goes down is also beneficial to ubiquitous environments, because it translates into cost-efficient chip production and actually supports lowering supply voltage as well:
“Recognizing the new requirements of ubiquitous computing, a number of people have begun work in using additional chip area to reduce power rather than to increase performance [Lyon 93]. One key approach is to reduce the clocking frequency of their chips by increasing pipelining or parallelism. Then, by running the chips at reduced voltage, the effect is a net reduction in power, because power falls off as the square of the voltage while only about twice the area is needed to run at half the clock speed.”
as Weiser, the founder of ubiquitous computing, eloquently points out.
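The arithmetic behind the quoted claim can be sketched with the standard dynamic-power relation for CMOS, P ∝ C·V²·f. The constants and unit values below are illustrative, not taken from Weiser's paper:

```python
# Dynamic CMOS power scales as P = k * C * V^2 * f (k is a constant,
# C the switched capacitance, V the supply voltage, f the clock).
def power(c, v, f, k=1.0):
    return k * c * v**2 * f

baseline = power(c=1.0, v=1.0, f=1.0)

# Weiser's low-power alternative: use roughly twice the area (so about
# 2x the capacitance) to keep throughput while halving the clock, then
# run the chip at half the supply voltage.
low_power = power(c=2.0, v=0.5, f=0.5)

print(low_power / baseline)  # 2 * 0.25 * 0.5 = 0.25 -> a 4x power saving
```

Because power falls with the square of the voltage, the quadratic saving dominates the doubled area, giving the net reduction the quotation describes.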
Moore's original conjecture is almost ironic, because he may have been thinking about its implications for ubiquitous or pervasive computing scenarios rather than for personal computing before either concept really existed. However, Moore went on to become the CEO of Intel, which obviously had some of the greatest stake in the personal computer market.
The following cartoon was included in Moore's original paper. It portrays a salesman selling 'handy home computers' as if they were cosmetics. The cartoon seems somewhat similar to the current notion of ubiquitous computing, at least in its omnipresent, commonplace aspect. It is quite futuristic considering it was published in 1965; then again, the personal computer did not exist in 1965 either. It specifically suggests that chip production would become a massive and cheap venture affecting all people.
Cartoon from Moore’s Original Paper suggesting the future of Computing
Although computing speed has consistently doubled every 18 months for the past 40 years, it has been predicted that within the next two decades the trend will not continue. This means that Moore's law as applicable to personal computing may start to lose its validity. However, Moore's law as applicable to ubiquitous computing may not. Eventually processor development will hit a wall where it will be very difficult to increase the number of transistors per chip and increase clock speeds without running into power density issues. However, utilizing the extra space to do what Weiser proposed in the previous quotation may still be possible.
In addition to this possible physical limitation on computing speed, there are also other concepts and ideas that oppose Moore's law.
The construction of monolithic software systems is not currently going well. There is "no silver bullet" that will provide a leap in the efficiency with which we produce software. Wirth's law also points this out. As a result, there is not much that drives the need for faster desktops or personal computer systems. It seems, to a certain extent, that the size and complexity of programs have become somewhat capped. So once again, where can there be expansion? A new computing paradigm would be ideal.
Other computer components have not evolved as quickly as processors either; hard drives and RAM are good examples. This brings into question whether faster computing speeds are really as necessary today as they once were. Possibly the computers of the next wave will prioritize power consumption over processing speed. If this is the case, computers could become much more mobile and omnipresent than before.
A small example of this trend can already be seen in the increase of flat panel monitors in business environments. Despite their higher cost, flat panels consume much less energy than CRTs: an LCD uses an average of 30 watts compared to 120 watts for a CRT. Is this an indication of a more energy-conscious and conservative era?
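The wattage figures above translate into a substantial difference over a year. A quick back-of-the-envelope calculation, assuming (hypothetically) 8 hours of use per day over 250 working days:

```python
# Rough annual energy comparison for the monitor figures quoted above.
# The usage pattern (8 h/day, 250 days/year) is an assumption.
LCD_WATTS, CRT_WATTS = 30, 120
hours_per_year = 8 * 250  # 2000 hours

lcd_kwh = LCD_WATTS * hours_per_year / 1000  # 60 kWh per year
crt_kwh = CRT_WATTS * hours_per_year / 1000  # 240 kWh per year

print(crt_kwh - lcd_kwh)  # 180 kWh saved per monitor per year
```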
A Humanistic Justification for Ubiquitous Computing
Looking at the current scenario from a human standpoint, possibly an increase in speed is no longer what is important. Is it more likely that people want constant connectivity and superior mobility than a faster computer? Or maybe people desire something more abstract, something to do with ease of use and transparency. The success of mobile computing might be a piece of the puzzle, one which could act as a stepping stone to a truly ubiquitous computing paradigm.
People who drive the computer market have shown the importance of user-centric design and indicated the desire for a more natural and flexible computing paradigm. This was the main goal at PARC when the ubiquitous computing program was first started there. It was actually a combination of anthropological studies and computer research:
“The program was at first envisioned only as a radical answer to what was wrong with the personal computer: too complex and hard to use; too demanding of attention; too isolating from other people and activities; and too dominating as it colonized our desktops and our lives. We wanted to put computing back in its place, to reposition it into the environmental background, to concentrate on human-to-human interfaces and less on human-to-computer ones.”
Thus, things seem to be falling into place for the next wave of computing. From an economic perspective to a social perspective, a new paradigm is ready to come forth and replace the old. The early stages of ubiquitous computing are already developing, and as it gains ground, it could drastically alter the way people view computing.
This is not a prediction of the extinction of the personal computer; obviously the personal computer will always be useful for certain things, just as mainframe computers are still useful for certain tasks. Ubiquitous computing is an added layer on top of personal computing, similar to the way the personal computer was an added layer on top of the mainframe terminal concept. Therefore, it is logical that ubiquitous computing may become a new area of rapid growth and expansion. The graph below is an indicator of this possible trend.
A History of Ubiquitous Computing
The origins of ubiquitous computing are officially set to 1988 at Xerox PARC. The person given credit for coining the term and solidifying its presence is Mark Weiser. The program combined Weiser's research in computing with Lucy Suchman's anthropological research, which observed the way people really used technology. These observations led to research that was less focused on improving the computer itself and more focused on improving how the computer functioned within the framework of people's daily lives. The concept of the project was summarized with the catch phrase, "from atoms to culture."
Mark Weiser worked on various projects at PARC between 1988 and 1994, with most of his papers concerning ubiquitous computing published around the end of his stay there. He also acted as a lecturer promoting his concept throughout academia, speaking or even arguing about the future of computing at places like MIT's Media Lab.
The ubiquitous devices Weiser developed while at PARC were named tabs, pads, and boards. These items were basically nomadic devices designed to act very much like sticky notes or paper. This research is considered some of the first attempts at creating ubiquitous computing environments where the computer was "invisible." According to Weiser, the first real ubiquitous device was the Liveboard, first unveiled in 1992.
Although these devices were intended to be ubiquitous, they still did not fully realize the potential of the ubiquitous paradigm. Weiser stated that he saw these devices merely as "a start down a radical direction".
MIT's Media Lab must also be given due credit in the research and development of ubiquitous computing. Although their website no longer makes any direct claims to ubiquitous computing, it is apparent from their mission statement and their research that they have a strong interest in merging computing media into a seamless, almost invisible integration with everyday life. The "Things That Think" program at the Media Lab is the best example.
From this body of seminal research, other loosely based endeavors into ubiquitous computing stemmed. In 1997 the Personal and Ubiquitous Computing journal was started, although it seems to have a mobile device slant to it. In 1999 the first conference for ubiquitous computing was held in Karlsruhe, Germany, and it has continued to be held annually ever since. The conference functions as a place for the somewhat scattered ubiquitous computing community to meet and present research.
Several companies have even been created as a result of ubiquitous computing, and almost all large computing companies have some research and development dealing with it. Intel has its own research lab devoted to ubiquitous computing, located in Seattle, Washington. Another company, known as Maya, is currently active as a design consulting and technology laboratory. Almost everyone on their staff has a background in Human Computer Interaction and envisions ubiquitous computing scenarios; the company is defined by the idea of ubiquitous computing.
Current Ideas Similar to Ubiquitous Computing
Since the first manifestations of ubiquitous computing in the early 90's, other branches of computing have emerged that people should be aware of; these are related to ubiquitous computing but are not truly ubiquitous.
Mobile devices such as cell phones or in-automobile interfaces are often associated with ubiquitous computing, but these devices are not purely ubiquitous because they are not 'invisible' to the user. They do, however, share the omnipresent and interconnected aspects prevalent in ubiquitous computing, and it is evident that these areas of research could be seen as overlapping fields.
Another similar concept is pervasive computing, which connects to mobile and ubiquitous computing through its notion that computers should be omnipresent; however, it lacks the original concept that computers should be unnoticed. It is interesting to observe what industry and research have done to ubiquitous computing: the paradigm has definitely been adopted, but it has been mixed with these other areas.
Weiser originally envisioned a world where the computer could disappear and people would no longer need to view the computer as an autonomous entity; rather, computers would simply act as unobtrusive tools to accomplish a set of tasks. One issue with Weiser's vision, however, is that he never completely defines how to do this in his earlier papers. His tabs and pads are examples of computers that function in a nontraditional way, but they are not truly invisible, as he points out in his research. Maybe with time and persistent emphasis on invisible, seamless interaction between human and machine, a more pure strain of ubiquitous computing will develop. It remains to be seen; however, similar areas such as mobile computing and pervasive computing rely on the ubiquitous model as an ideal to strive for.
But Who Defines Ubiquitous?
Once again, Weiser's vision is ambiguous when his explanations are taken further into question. If one closes one's eyes and tries to imagine the fuzzy concept of interacting with an environment augmented by devices which are invisible yet constantly enhancing our everyday experience, it seems possible; yet there is something quite odd about it. There seems to be a small paradox in place. How is it possible that devices, especially computing devices, remain invisible to the user yet still provide the user with an improvement over not using them? If these invisible devices were suddenly gone, would their removal lead to their visibility? It seems almost impossible to have something that is invisible yet simultaneously produces a visible benefit.
Weiser argued in his early papers that eyeglasses are a good example of a ubiquitous tool. He thought they fit the paradigm because a person simply has to look through them and they work. Glasses are such a seamless extension of the human that soon the person forgets eyeglasses are even worn. Weiser also brings up the examples of speaking and typing: eventually we no longer think about typing or speaking when expressing our ideas. Or rather, we do think about it, just not nearly as much as we think about the thoughts being conveyed. In fact, when a person reads this paper, they are not thinking about the system of paper and words and how these things come together to construct meaning; rather, they are simply receiving a stream of information through an understood medium.
It is obvious there is a learning curve to reading, just as there is to using a computer. At some point, people must analyze the tools they are provided with and learn to use them, regardless of whether it is words, mathematics, bifocals, or even a computer. Weiser points all of this out in his paper and then proceeds to condemn personal computing because he believes it does not follow this pattern. He finds the computer system to be too distracting:
“Rather than being a tool through which we work, and so which disappears from our awareness, the computer too often remains the focus of attention.”
His perspective is interesting because it seems to shift based on the person and the time period. To a child growing up today, a computer is not all that much different than television or books were to a child who grew up 30 years ago. Inevitably, man is the measure of all things: things which man cannot construct cannot be constructed, and as for things which man cannot perceive at the moment, it is arguable whether they even exist. As Weiser states in a more recent paper:
“Writing and electricity become so commonplace, so unremarkable, that we forget their huge impact on everyday life. So it will be with UC.”
This is the problem with narrowing the ubiquitous paradigm, and why the ubiquitous computing paradigm should not be too rigidly constrained, although it is questionable whether Weiser would agree. In his more recent papers and presentations, it appears he softened somewhat in his view of what ubiquitous computing is. He appears to accept mobile devices, like cell phones, as ubiquitous, although originally they may not have seemed transparent enough because of their fairly complex user interfaces.
It could also be argued that the user sees the cell phone merely as a telephone and nothing more, though it is drastically different from the type of phone used 40 years ago. If time travel were possible and the same cell phone were placed in the hands of someone in the 1960's, they would have had no clue what to do with it; therefore, the phone would not be considered ubiquitous. Thus, as cultures shift, the notions of what is and is not invisible technology shift as well.
The Current State of Ubiquitous Computing
The current state of ubiquitous computing lies within some hybrid form of mobile and pervasive computing, which seems like a very logical place for it to grow and develop. As stated earlier in the paper, for ubiquitous computing to become a reality, several things need to be accomplished. In his earlier papers, Weiser would consistently lay out several key requirements of ubiquitous computing: power consumption, user interfaces, wireless technology, and obviously cost were all in need of development.
Power consumption is important because ubiquitous devices tend to be mobile or constantly on. If they are mobile, they have to operate on low-power battery supplies. If they are not mobile, energy efficiency is still an issue, since ubiquitous devices are always on.
Wireless technology is also important because ubiquitous devices must be able to talk and communicate with other devices. Many ubiquitous computers are embedded in other objects, and their functionality depends on other devices in the vicinity. In an ideal ubiquitous scenario, all the devices surrounding a user are connected with one another, and information and data flow freely.
User interfaces vary within ubiquitous computing, but regardless, the user interface may be the most important aspect of it. Without intuitive interfaces, ubiquitous computing will never succeed, because it is not sensible for people to constantly be learning to carry out simple operations with complex or confusing interfaces on a multitude of devices. The interface should lend itself to the task and should not drastically alter how the task was performed before the computer assisted it.
A greater emphasis is currently placed on chips designed specifically for mobile computing; Intel's Pentium M chips and Centrino technology are a good example. Rather than researching how to make devices run faster, Intel has taken considerable efforts to make its chips run more efficiently and become more mobile. Motorola, which at one time was heavily entrenched in chip manufacturing for personal computers, recently separated the semiconductor production section of its company into an independent corporation called Freescale Semiconductor. Motorola has since become more focused on mobile computing.
Wireless technology was always a concern of ubiquitous computing. Weiser wanted to see several types of wireless connections implemented on ubiquitous computers:
“Present technologies would require a mobile device to have three different network connections: tiny range wireless, long range wireless and very high speed wired. A single kind of network connection that can somehow serve all three functions has yet to be invented.”
There is still no do-it-all connection; however, wireless connectivity is commonplace through standards like 802.11 and short-range standards like IrDA and Bluetooth. In 2004 the IEEE formed a task group for 802.11n, which would supply speeds of up to 504 Mb/s. These sorts of connectivity technologies are currently being implemented in a wide range of devices, from laptops to PDAs to cellular phones.
Another problem Weiser addressed, relating to networking, was mobile IP addressing and assignment. It is not completely logical to assume that computers can have a static IP that defines them upon a given network, since computers will not always remain on the same network. A businessman may travel from New York to Hong Kong in a day, and every ubiquitous device he carries would travel as well. The problem is how to change the routing information so that his devices still work properly. In addition, if ubiquitous computing were ever implemented on a larger scale, more IP addresses would be needed: if every home appliance in every house had an IP address, there would be an issue of running out of addresses. Since Weiser wrote his paper in '94, IPv6 has been developed and addresses or solves many of these issues.
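The scale of the address-space problem IPv6 solves is easy to quantify: IPv4 addresses are 32 bits while IPv6 addresses are 128 bits.

```python
# IPv4 addresses are 32 bits wide; IPv6 addresses are 128 bits wide.
ipv4_addresses = 2 ** 32
ipv6_addresses = 2 ** 128

print(ipv4_addresses)  # 4294967296 -- only about 4.3 billion, fewer
                       # than one per person on Earth

# IPv6 multiplies the space by 2**96, easily enough for every
# appliance in every home to hold its own address.
print(ipv6_addresses // ipv4_addresses)  # 79228162514264337593543950336
```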
If ubiquitous computing is going to be achieved, cost becomes an important factor because these machines will need to be pervasive; therefore, it is not practical to assume people will be willing to pay high prices for the multitude of ubiquitous computers which are in place in everyday life. Thus, ubiquitous computing has to take cost into consideration and be innovative in producing cost effective solutions.
Current Examples of Ubiquitous Computing
There are numerous examples of ubiquitous computing which could be discussed; to be fair to the full range of its applications and scope, examples from different categories of ubiquitous computing will be shown and then discussed.
Calm Computing, the Ubiquitous Ideal
The best example of calm computing, and also one of the first, is the "Dangling String," constructed by artist Natalie Jeremijenko. The string is a long, thin, round piece of plastic that hangs from the ceiling and is attached to a small electric motor. The motor is electronically connected to the Ethernet cable of an office or home network. As packets travel the network, the motor twitches. A very busy network causes constant motor rotation, while a quiet network causes the motor to rotate very little. The motor's rotation translates directly into the movement of the string. Weiser specifically liked this example because of its peripheral nature: it is very unobtrusive yet still conveys a lot of information about the network, something usually unknown to most users.
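The mapping at the heart of the Dangling String can be sketched as a simple function from packet rate to motor speed. The scaling constants below are invented for illustration; the original installation's values are not documented here:

```python
# Hypothetical sketch of the Dangling String's traffic-to-motion
# mapping. MAX_RPM and PACKETS_AT_MAX are assumed values, not taken
# from the actual installation.
MAX_RPM = 60            # assumed cap on motor speed
PACKETS_AT_MAX = 1000   # assumed packets/sec that saturates the motor

def motor_rpm(packets_per_second):
    """Map an observed packet rate to a motor speed, clamped at MAX_RPM."""
    return min(MAX_RPM, MAX_RPM * packets_per_second / PACKETS_AT_MAX)

print(motor_rpm(100))   # quiet network -> occasional twitch (6.0 rpm)
print(motor_rpm(5000))  # busy network -> constant rotation (60 rpm)
```

The point of the design is that the information channel stays in the periphery: the string's motion is legible at a glance but never demands attention.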
The dangling string in action
Biometrics and Ubiquitous Computing
Biometrics is a way of identifying people based on physical or behavioral traits. It is a central issue in ubiquitous computing because it deals with the privacy issues that accompany ubiquitous devices. How do ubiquitous devices know who their owners are and stop others from accessing sensitive information? Biometrics is one way to address these privacy issues. In addition, biometrics often lends itself to seamless verification. Fingerprint scanners, for example, can be implemented without a user even really realizing it; simply touching a ubiquitous device may be enough to identify the user.
The smart floor, developed by Robert J. Orr and Gregory D. Abowd at Georgia Tech, is a piece of flooring which tracks and uses information about the force of a person's footsteps. By comparing against previous footstep samples, a computer program running in the background could identify the person walking on the smart floor with 93% accuracy. Because a user never has to think about directly interacting with any interface, the smart floor is an unobtrusive method for user verification. Thus, it fits well into the ubiquitous idea.
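The idea of matching a new footstep against stored samples can be sketched as a nearest-neighbor comparison on force profiles. The profiles and the choice of classifier below are illustrative; the real smart floor's feature set may differ:

```python
# Hedged sketch of footstep-based identification: compare a new
# footstep force profile against stored samples and pick the closest
# known user. The stored profiles are invented for illustration.
def euclidean(a, b):
    """Euclidean distance between two equal-length force profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

profiles = {  # hypothetical stored force samples, one per user
    "alice": [0.1, 0.8, 1.2, 0.9, 0.2],
    "bob":   [0.3, 1.5, 2.0, 1.4, 0.5],
}

def identify(step):
    """Return the known user whose stored profile is closest."""
    return min(profiles, key=lambda user: euclidean(step, profiles[user]))

print(identify([0.2, 0.9, 1.1, 0.8, 0.3]))  # -> 'alice'
```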
Mobile Devices and Ubiquitous Computing
Mobile devices and the general area of mobile computing form one of the most common settings for ubiquitous computing. Cell phones are already seen as somewhat ubiquitous devices, and they are starting to gain more complex capabilities allowing them to interact with their environments.
Place-Its, developed at the University of California and Intel Research labs, are a good example of the power of mobile devices to enhance a user's environment. The premise behind Place-Its is that a cell phone can be used effectively as a device to handle reminders. Since many cell phones are equipped with positioning systems, reminders can be administered based on a cell phone's location. As pointed out in their paper, nuances of location are naturally used by people as reminders, so Place-Its simply augments this fact.
A user implements the system by setting three options: trigger, text, and place. The trigger describes whether to issue the reminder upon arrival or departure, the text is the reminder itself, and the place is the specified location at which the reminder is issued. A place is defined when a person is physically located there; however, once the person has defined a 'place' on their phone, it is saved indefinitely and can be used to schedule reminders from that point forward.
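The trigger/text/place model described above can be sketched as a small data structure. The names and sample reminders are illustrative, not taken from the Place-Its implementation:

```python
# Sketch of the Place-Its reminder model: each reminder pairs a
# trigger ("arrival" or "departure") and text with a previously
# saved place. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class Reminder:
    trigger: str  # "arrival" or "departure"
    text: str
    place: str    # a place the user previously defined on the phone

def due_reminders(reminders, event, place):
    """Return the texts of reminders that fire for an event at a place."""
    return [r.text for r in reminders
            if r.trigger == event and r.place == place]

reminders = [
    Reminder("arrival", "Pick up the lab keys", "office"),
    Reminder("departure", "Buy groceries", "office"),
]
print(due_reminders(reminders, "arrival", "office"))
# -> ['Pick up the lab keys']
```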
Overall, the study suggested the system was effective, and reminders based on location seemed very useful. Although cell phones were not 100% accurate at determining their positions, the prevalence and already accepted nature of cell phones seemed to make up for this fact.
As stated earlier, networking is a crucial part of ubiquitous computing; thus, the construction of invisible, flexible, noninvasive networks is extremely important. The papers on this issue range from low-power, short-range ad hoc networks to long-range wireless systems.
One of the more interesting networking ideas is CarpetLan, which utilizes the person as a connection between the object and the network. The interface to the network is located within the carpet, and the human body acts like a cable connecting a touching device to the network. This is still very experimental and was expensive to implement, costing approximately $5,000 for one installation; however, there are some very notable benefits. CarpetLan acts as a medium for networking and simultaneously generates user position information, which is beneficial because it removes the need for a separate positioning device.
Position is also integral to ubiquitous computing because a person's or device's position needs to be known to decide whether they are in range to communicate with one another. Devices or humans that are not in proximity to one another do not need to waste resources trying to stay networked. Likewise, devices need to be able to recognize the presence of users or other devices entering or leaving their area.
At the 2005 UbiComp in Tokyo, Japan, a demo was given for a device called the GETA Sandals. These sandals are used as a means to obtain positioning information without much infrastructure. Unlike other positioning systems, they do not require Wi-Fi access points or ultrasound to aid in determining their location. They keep track of their own position by calculating the displacement vector of each step through sensors placed inside each sandal. These vectors are added up, and from their summation, distance and position can be determined.
GETA Sandals in action
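The dead-reckoning idea behind the sandals, summing per-step displacement vectors to recover position, can be sketched in a few lines. The step data below is invented for illustration:

```python
# Dead-reckoning sketch of the GETA Sandals idea: sum per-step
# displacement vectors to track position without any external
# infrastructure. Step values here are hypothetical.
def track_position(displacements, start=(0.0, 0.0)):
    """Accumulate (dx, dy) step vectors into a walked path."""
    x, y = start
    path = [start]
    for dx, dy in displacements:
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

steps = [(0.5, 0.0), (0.5, 0.0), (0.0, 0.7)]  # metres per step
print(track_position(steps)[-1])  # final position: (1.0, 0.7)
```

Note that, as with any dead-reckoning scheme, small per-step sensor errors accumulate over a walk, which is the main practical limitation of the approach.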
Other forms of position tracking are also common but not as novel. Many mobile devices already have GPS or something equivalent. However, these are not as ubiquitous as the GETA Sandals, because they are an extra device requiring extra attention. The GETA Sandals are a seamless integration of a computing environment into the natural human environment, and thus, they are truly ubiquitous.
Wearable Ubiquitous Computing
Wearable computing represents the ever-present aspect of ubiquitous computing and is also an attempt to allow computing to fit seamlessly into the environment. The GETA Sandals are an excellent example of wearable computing. There have also been other implementations of wearable computing outside the realm of positioning.
The ViewPointer is an example of a wearable ubiquitous device. Developed at Queen's University by John D. Smith, Roel Vertegaal, and Changuk Sohn, the ViewPointer allows for seamless interaction with objects. By placing IR tags in the environment and adding a tiny eye-tracking camera to a hands-free cell phone headset, the system can determine when a user makes eye contact with a specific object. Each object's IR tag flashes at a different rate, which uniquely identifies the object. The flashing can also be seen as a way to transmit binary information from object to user. Thus a device can seamlessly provide the user with information about itself, for example a web page URL.
The IR tags are about the size of a dime and can operate for extended periods of time on a very limited power source; thus, they are not at all obtrusive. However, because the headset must be worn, the system is not completely ubiquitous. Unlike sandals, most people do not currently wear headsets; however, this may change in the future and can simply be seen as an example of ubiquitous scenarios redefining themselves with time. In addition, eye tracking is an excellent way to achieve a ubiquitous interface, since eye movement is such a natural part of human communication and interaction. Another benefit of this system is its low cost: it was built with off-the-shelf parts from Radio Shack for a very reasonable price.
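The notion that a tag's blinking can carry binary data can be sketched as sampling the IR channel at fixed intervals and reading on/off states as bits. The framing, sample values, and threshold below are invented for illustration and do not reflect the actual ViewPointer encoding:

```python
# Hedged sketch of blink-encoded data: sample IR intensity once per
# bit slot and threshold each sample into a bit. All values here are
# hypothetical.
def decode_blinks(samples, threshold=0.5):
    """Turn IR intensity samples (one per bit slot) into a bit list."""
    return [1 if s > threshold else 0 for s in samples]

def bits_to_byte(bits):
    """Pack a list of bits (most significant first) into an integer."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    return value

samples = [0.9, 0.1, 0.8, 0.9, 0.0, 0.1, 0.9, 0.1]
print(bits_to_byte(decode_blinks(samples)))  # 0b10110010 = 178
```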
An extension of the wearable device concept is transhumanism, a futuristic concept in which computers are implanted directly into the user's body. This goes beyond the realm of ubiquitous computing because, although seamless and ever present, it raises many ethical issues. The alteration of one's body is an invasive procedure, which is not something originally intended by Weiser. His main goal for ubiquitous computing was for the computer to fall invisibly into the background environment, not for the computer to become integrated directly into the human.
Visual interfaces, although not strictly required for ubiquitous computing, are useful if implemented properly. Certain tasks rely heavily upon visual interaction, for example reading maps.
At the 2005 UbiComp Conference in Tokyo, Japan, a demo of an interactive visual map was carried out. Deemed a "computationally augmented table top", the system consisted of a tabletop display running a touch-sensitive map application. Multiple users were able to interact with the image, and each user could be uniquely distinguished. This was made possible by DiamondTouch, a system that couples a touch-sensitive, user-identifying display with a software SDK for application development.
This application has a very natural interface: people interact simply by touching the display. Touch is a natural human gesture, and by taking advantage of it, a very seamless visual interface for multi-user interaction is created.
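From an application programmer's perspective, what makes this multi-user interaction tractable is that each touch event arrives already tagged with the identity of the user who produced it (DiamondTouch distinguishes users by capacitive coupling through their seats). The toy sketch below illustrates that idea with per-user state; the event fields and class names are assumptions for illustration, not the DiamondTouch SDK's actual API.

```python
from collections import defaultdict

class TabletopMap:
    """Toy tabletop application that records each user's touch trail."""

    def __init__(self):
        # user_id -> list of (x, y) touch points for that user
        self.trails = defaultdict(list)

    def on_touch(self, user_id, x, y):
        # Because every event carries the toucher's identity, two people
        # touching the table at once never get their input confused.
        self.trails[user_id].append((x, y))

app = TabletopMap()
app.on_touch("alice", 10, 20)
app.on_touch("bob", 30, 40)
app.on_touch("alice", 15, 25)
print(app.trails["alice"])  # -> [(10, 20), (15, 25)]
```

Without per-event user identity, the application would have to guess which finger belongs to whom, which is exactly the ambiguity DiamondTouch removes.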
Ubiquitous computing is a new paradigm in computing that has begun to come to fruition in recent years. For economic, social, and anthropological reasons, the possibility of ubiquitous computing rising as a dominant and common area of computing is becoming more realistic. Most major computing companies have started research and development in areas of ubiquitous computing, and it is also on the rise in academic departments. At the heart of ubiquitous computing is an interest in user-centric design that facilitates our everyday lives and augments our environments. Anyone interested in HCI will hopefully see the range of possibilities created through the development of ubiquitous scenarios. This paper has attempted to explain why ubiquitous computing is on the rise, summarize what it is, describe its current state, and finally give some examples that reflect ubiquitous computing's range and scope.
Mark Weiser and John Seely Brown, "The Coming Age of Calm Technology," Xerox PARC, October 5, 1996.
Mark Weiser, "The World is not a Desktop," perspective article for ACM Interactions.
Mark Weiser, "Ubiquitous Computing," IEEE Computer, October 1993.
Mark Weiser, "Some Computer Science Issues in Ubiquitous Computing," CACM, July 1993.
Mark Weiser, "The Computer for the 21st Century," Scientific American, September 1991.
Weiser, Gold, and Brown, "The Origins of Ubiquitous Computing Research at PARC in the Late 1980s." [Online] Available at http://www.research.ibm.com/journal/sj/384/weiser.html
Masakazu Furuichi, Yutaka Mihori, Fumiko Muraoka, Alan Esenther, and Kathy Ryall, "DTMap Demo: Interactive Tabletop Maps for Ubiquitous Computing," UbiComp 2005.
Timothy Sohn, Kevin A. Li, Gunny Lee, Ian Smith, James Scott, and William G. Griswold, "Place-Its: A Study of Location-Based Reminders on Mobile Phones," UbiComp 2005.
Shun-yuan Yeh, Keng-hao Chang, Chon-in Wu, Okuda Kenji, and Hao-hua Chu, "GETA Sandals: Knowing Where You Walk To," UbiComp 2005.
John D. Smith, Roel Vertegaal, and Changuk Sohn, "ViewPointer: Lightweight Calibration-Free Eye Tracking for Ubiquitous Handsfree Deixis," ACM Press, 2005, pp. 53-61.
Jon Stokes, "Understanding Moore's Law," 2003. [Online] Ars Technica; available from
Masaaki Fukumoto and Mitsuru Shinagawa, "CarpetLAN: A Novel Indoor Wireless(-like) Networking and Positioning System," UbiComp 2005.
Frederick P. Brooks, "No Silver Bullet: Essence and Accidents of Software Engineering." [Online] Available at http://www.lips.utexas.edu/ee382c-15005/Readings/Readings1/05-Broo87.pdf