Department of Computer Science & Software Engineering
University of Wisconsin - Platteville
Platteville, WI 53818
Abstract Synchronous and asynchronous models have existed since the first computers; they describe how computers fundamentally operate. Although computers today are mostly synchronous, asynchronous logic is still prominent within most systems, and there is often a fine line between the two when deciding which is better suited to a task. Many different areas of computer science involve synchronous and asynchronous models, both physically (hardware) and logically (software). Some of those areas include transmission, I/O, programming languages, circuits and data.
Introduction Synchronous and asynchronous models affect computers in many different areas. They are more a way of doing something than a standalone aspect of computer science. Because of that, we can see how synchronous and asynchronous models affect transmission, I/O, programming languages, circuits and data.
First, let’s define synchronous and asynchronous. Synchronous can be defined as two or more events happening at the same time. Asynchronous, then, means the opposite. However, one should not think of asynchronous as two or more events alternating back and forth at opposite times, but rather as events separated by an arbitrary amount of time. For example, when two clocks tick to the same second at the same time, they are synchronized; at any other time (where the time difference is arbitrary) the two clocks are not synchronized and would be classified as asynchronous.
Another way to look at synchronous and asynchronous models is through communication. Figure 1 on the next page shows synchronous and asynchronous communication between two people. Imagine both parties in each depiction are trying to communicate with each other. In the synchronous model the two people are communicating verbally, so the communication is synchronous because it is happening at the same time. The asynchronous model shows two people conversing through a different medium than speech: they are communicating through a notepad. This is asynchronous because the communication is not happening at the same time. Between each message, or each passing of the notepad, an arbitrary amount of time elapses. Within that time several different things take place: the writing of the message, the passing of the notepad, and the reading of the message. Each of those can take a different amount of time, which creates the arbitrary delay between communications. So even though the two people in the asynchronous model are conversing together, the communication is not happening at the same time.
Figure 1: Synchronous and Asynchronous Communication
Basics of Synchronous and Asynchronous
Most actions and operations that take place in computers are carefully controlled and occur at specific times and intervals. These operations are called synchronous actions, and they are measured against a time reference, or clock signal. This means that communication in a computer, whether between computer components or computer operations, is judged “good” or “bad” depending on the clock. For example, if you have a sender and a receiver, the two will communicate and agree upon a certain “time frame” within which the receiver will expect to get something from the sender. If the receiver does not get anything within that time limit, a time-out error occurs and countermeasures begin to take place. That would be an example of “bad” communication.
A basic synchronous system will communicate before any action or operation takes place. Within that communication, a synchronous system will synchronize the signal clocks on both sides before transmission begins, reset the numeric counters, and may negotiate things like error handling (such as time-outs) and compression. Since the receiver clock synchronizes to the sender clock in a synchronous model, this allows for higher data transfer rates. In most computers, operations are done synchronously because of the higher data transfer rate.
An asynchronous action is not measured against a time reference or a clock signal. Asynchronous systems do not do the things synchronous systems do, such as synchronizing the signal clocks and resetting numeric counters. Instead, asynchronous systems handle those issues through each byte of data sent.
In an asynchronous system each byte carries its own signaling bits. In data transfers, each byte of data is enclosed between start and stop bits. This allows the receiver to understand what is being sent to it and also to coordinate its clock for receiving the data. Below in Figure 2 is an example of an asynchronous transmission byte.
Figure 2: Asynchronous Transmission Byte
As you can see in Figure 2, in the center, colored pink, is the byte of data to be transferred. In front of the byte, to the left, is the start bit. The start bit notifies the receiver that it is about to start receiving data; this also tells the receiver to set up its clock and reset its numeric counters accordingly to process and store the “body” of the byte being sent. The stop bit or bits are used in a similar fashion: they notify the receiver that the data is done being sent and that no more data associated with that byte is coming. Depending on the protocol, the byte may have one or more stop bits.
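The framing described above can be sketched in a few lines of Python (a simplified illustration only; real serial hardware works at the electrical level, where the start bit is a 0 and the stop bits are 1s, and the function names here are invented for the example):

```python
def frame_byte(data_bits, stop_bits=1):
    """Wrap 8 data bits for asynchronous transmission.

    The start bit (a 0) tells the receiver a byte is coming so it can
    align its clock; the stop bit(s) (1s) mark the end of the byte.
    """
    if len(data_bits) != 8 or set(data_bits) - {"0", "1"}:
        raise ValueError("expected 8 bits as a string of 0s and 1s")
    return "0" + data_bits + "1" * stop_bits

def unframe_byte(frame, stop_bits=1):
    """Strip the start and stop bits, returning the data byte."""
    if frame[0] != "0" or frame[-stop_bits:] != "1" * stop_bits:
        raise ValueError("bad framing")
    return frame[1:1 + 8]
```

For example, `frame_byte("01000001")` produces `"0010000011"`: a start bit, the eight data bits, and one stop bit.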
Asynchronous systems respond to signals. A collection of control signals is used to signal intent in an information exchange, and each party has to wait for a signal to change state before responding. Asynchronous transmission is sometimes called "best effort" transmission because one side simply transmits and the other does its best to receive; any lost data is recovered by a higher-level protocol.
In the synchronous model for transmission, the clock signals of the receiving and transmitting sides are synchronized and the data flow is continuous. This allows for a quicker transfer rate but can also cause more errors. Some of these errors include clocks drifting out of sync, dropped packets, lost bits, and flipped or otherwise corrupted bits. Two ways to avoid or fix those errors are resynchronizing the clocks and using check digits.
Resynchronizing the clocks just means going through the initial process where the receiver and the sender communicate before sending the data. In this initial communication, the sender and receiver synchronize the signal clocks on both sides before transmission begins, reset their numeric counters, and may negotiate things like error handling (such as time-outs) and compression.
Check digits are a form of error checking. There are several different check digit methods; just a few are ISBN-10, ISBN-13, EAN and UPC. ISBN stands for International Standard Book Number; the number following it, 10 or 13, is how many digits it uses. EAN-13 stands for International Article Number (formerly European Article Number), and the number again indicates how many digits it has. UPC stands for Universal Product Code. Of course, there are other types of check digits too.
ISBN-10 uses a nine-digit data number and calculates a check digit to be the tenth digit of the number. It uses a mathematical approach over the nine digits. The first step of the calculation is to multiply each digit by an integer weight. These weights run from ten down to two, going from left to right: the first digit on the left is multiplied by ten, the second digit by nine, the third by eight, and so on. After all the multiplication is done, the next step is to add all of those products to get one number. The check digit is then the value that, when added to this sum, makes the total evenly divisible by 11; in other words, it is 11 minus the remainder of the sum with respect to 11 (a result of 10 is written as the letter X). If we take 0-306-40615-x, where x will be the check digit, multiplying each digit from left to right by the correct weight and adding them all gives us 130. The remainder of 130 with respect to 11 is 9, because 11 times 11 is 121 and 130 minus 121 is 9. Then 11 minus 9 gives 2, so 2 is our check digit, completing the ISBN-10 number: 0-306-40615-2.
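The ISBN-10 procedure above can be written as a short Python function (a sketch; the function name is invented for the example):

```python
def isbn10_check_digit(first_nine):
    """Compute the ISBN-10 check digit from the first nine digits.

    Each digit is weighted 10 down to 2 (left to right); the check
    digit is whatever makes the grand total evenly divisible by 11.
    A value of 10 is written as the letter 'X'.
    """
    total = sum(int(d) * w for d, w in zip(first_nine, range(10, 1, -1)))
    check = (11 - total % 11) % 11
    return "X" if check == 10 else str(check)

# Worked example from the text: 0-306-40615-? has weighted sum 130,
# 130 mod 11 = 9, and 11 - 9 = 2, so the check digit is "2".
```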
ISBN-13 is different from ISBN-10. For starters, the number is thirteen digits long and the steps of the calculation are not the same. However, similar to ISBN-10, ISBN-13 does use integer weights in the calculation. The first step is to multiply all the digits in odd positions by one and all the digits in even positions by three: starting on the left, the first digit is multiplied by one, the second by three, the third by one, and so on. Then all the products are added together and the remainder of the sum divided by ten is taken. Subtracting that remainder from ten gives the check digit (a result of ten becomes zero). Using 978-0-306-40615-x as an example, where x is the place for the check digit, we get the value 93 after all of the multiplying and adding. Taking 93 modulo 10 gives 3, and 10 minus 3 gives 7. So 7 is the check digit, which makes our example 978-0-306-40615-7.
EAN-13 is similar to ISBN-13. It uses a thirteen-digit number where the last digit is the check digit, and it also uses integer weights of 1 and 3 that alternate between digits. The difference is in the last couple of steps: instead of finding the remainder after dividing by ten, find the nearest multiple of 10 that is greater than or equal to the sum, then subtract the sum from that multiple. For example, using 400638133393x, where x is the position of the check digit, the sum of the weighted digits is 89. The closest multiple of 10 greater than or equal to 89 is 90, and 90 minus 89 gives 1 as the check digit. So the final EAN-13 number is 4006381333931.
UPC is the most different of the four check digit methods discussed above. This method can use any length of number and does not assign a separate weight to every digit. The first step is to add all of the odd-positioned digits, counting from the left. Then take that sum and multiply it by three. Next, add all the digits in the even positions to that product. Finally, find the remainder of that sum divided by ten (the sum modulo ten); if the result is not zero, subtract it from ten to get the check digit. For example, take 03600029145x, where x is the position for the check digit. Adding all of the odd-positioned digits gives 14, and multiplying that sum by 3 gives us 42. Adding all of the even-positioned digits to that product gives us 58. Taking 58 modulo 10 gives 8, and subtracting that from 10 gives a check digit of 2. Thus the number with the check digit is 036000291452.
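ISBN-13, EAN-13 and UPC all reduce to the same pattern: a weighted sum followed by the distance to the next multiple of ten. A sketch covering all three (the function name is invented; the weights differ only in where the 3 falls):

```python
from itertools import cycle

def weighted_check_digit(digits, weights):
    """Weighted mod-10 check digit.

    ISBN-13 and EAN-13 use weights (1, 3) repeating from the left;
    UPC multiplies odd positions by 3, i.e. weights (3, 1).
    Returns the digit that brings the total up to the next
    multiple of ten.
    """
    total = sum(int(d) * w for d, w in zip(digits, cycle(weights)))
    return (10 - total % 10) % 10

# Worked examples from the text:
#   ISBN-13: 978-0-306-40615-?  -> 7
#   EAN-13:  400638133393?      -> 1
#   UPC:     03600029145?       -> 2
```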
Asynchronous systems use what are called parity bits. Parity bits are similar to check digits in the sense that they are used for error checking; however, unlike check digits, which determine whether numbers are correct, parity bits check whether bits were flipped or lost. Referring back to Figure 2, the parity bit would be located between the data byte and the stop bit or bits, so each segment or byte of data that is sent has its own parity bit. Parity comes in two flavors: even and odd. To determine the value of a parity bit, count the number of 1s in the byte. With even parity, the parity bit is 0 when the byte contains an even number of 1s and 1 when it contains an odd number, so the total number of 1s, including the parity bit, is always even. With odd parity the rule is reversed: the parity bit is 0 when the byte contains an odd number of 1s and 1 when it contains an even number. The table below shows an example using 7-bit data.
Table 1: Parity Bit Reference Table
7 bits of data (number of 1s)   8 bits including even parity   8 bits including odd parity
0000000 (zero 1s)               00000000                       00000001
1010001 (three 1s)              10100011                       10100010
Table 1 shows the 7-bit example on the left and two columns giving the value of the parity bit depending on whether even or odd parity is used. In the first example, all zeros, there are zero 1s in the 7 bits; therefore, with even parity the parity bit is 0, and with odd parity it is 1. Likewise, in the second example, 1010001, there are three 1s, so the even parity bit is 1 and the odd parity bit is 0.
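The parity rule can be captured in a couple of lines of Python (a sketch; the function name is invented for the example):

```python
def parity_bit(data_bits, even=True):
    """Return the parity bit for a string of 0s and 1s.

    Even parity: the bit makes the total count of 1s even.
    Odd parity:  the bit makes the total count of 1s odd.
    """
    ones = data_bits.count("1")
    if even:
        return "0" if ones % 2 == 0 else "1"
    return "1" if ones % 2 == 0 else "0"

# The rows of Table 1:
#   parity_bit("0000000")             -> "0"  (even parity, zero 1s)
#   parity_bit("0000000", even=False) -> "1"
#   parity_bit("1010001")             -> "1"  (even parity, three 1s)
#   parity_bit("1010001", even=False) -> "0"
```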
I/O Even in I/O there can be a synchronous method and an asynchronous method. Figure 3 below shows the process of an I/O operation using the synchronous method.
Figure 3: Synchronous I/O
As shown in Figure 3, the center line labeled T0, T1, T2 and T3 represents time passing. The lines above it are processes of the thread, and the line below is a process of the kernel. To start, the thread begins an I/O operation and then immediately goes into a waiting state. Next, the I/O starts, processes the request, and then signals back up to the thread. The thread then leaves its waiting state and continues. The key point to note is that while the thread is in its waiting state, it is using up computer resources to do nothing but wait. This is why synchronous I/O is also called blocking I/O: it blocks others from using the resources it holds. Below is a similar figure, but of asynchronous I/O.
Figure 4: Asynchronous I/O
As you can see in Figure 4, between points T1 and T2 of the thread there is a line, unlike the synchronous model, which shows no line there. The line represents the thread doing work. So in asynchronous I/O, instead of going into a waiting state when it starts an I/O operation, the thread starts to process another job. Of course, while this happens the thread cannot access the I/O, but it is doing something rather than waiting. This is why asynchronous I/O is sometimes called non-blocking I/O or overlapped I/O.
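The difference can be sketched in Python, simulating the I/O with a short sleep (an illustration only; real asynchronous I/O goes through operating-system facilities such as overlapped I/O on Windows):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io():
    """Stand-in for a slow I/O request."""
    time.sleep(0.05)
    return "payload"

# Synchronous (blocking): the caller waits, doing nothing useful.
sync_result = fake_io()

# Asynchronous (non-blocking): start the I/O, do other work in the
# meantime, then collect the result when it is actually needed.
with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(fake_io)      # I/O runs in the background
    busy_work = sum(range(10_000))     # useful work while I/O is pending
    async_result = future.result()     # rendezvous with the finished I/O
```

In the synchronous branch the thread spends the whole sleep blocked; in the asynchronous branch the `busy_work` computation overlaps with the pending I/O, mirroring the line between T1 and T2 in Figure 4.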
Programming Languages Synchronous programming languages are used for the development of reactive systems. Reactive systems interact with their environment at a speed set by the environment, which is also why they are sometimes called real-time systems. An example of a real-time system is a flight simulator. A synchronous programming language makes it easier to program timed responses to what happens in the environment. A synchronous program responds to its environment through logical ticks: the program executes as a sequence of ticks, and within each tick all computations are assumed to be instantaneous, which allows the program to keep pace with real time. A few of the first synchronous programming languages were Esterel, Lustre and Signal.
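The idea of logical ticks can be illustrated with a toy reactive loop in Python (a sketch of the concept only, not how Esterel or Lustre are implemented; all names are invented): at each tick the program reads the inputs present at that tick, and its output is assumed to appear "instantaneously", before the next tick is processed.

```python
def run(program, input_stream):
    """Drive a reactive program one logical tick per input sample."""
    return [program(x) for x in input_stream]

def make_edge_detector():
    """Toy reactive program: emit True when the input rises 0 -> 1."""
    prev = {"value": 0}
    def step(x):
        rose = prev["value"] == 0 and x == 1
        prev["value"] = x
        return rose
    return step
```

For instance, `run(make_edge_detector(), [0, 1, 1, 0, 1])` yields `[False, True, False, False, True]`: the detector reacts within the same tick in which the input changes.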
Circuits A disadvantage of a synchronized circuit is that it can only be synchronized when all the parts can “see” the clock at the same time. As circuits get larger, the components become further spread out, and the propagation time of the clock signal to each of the components could pose an issue in the future. This potentially puts a cap on how big circuits can really be. Heat is another issue for synchronous circuits: in particular, the clock driver constantly switches, which dissipates a lot of heat.
Some benefits of an asynchronous circuit are that different components can run at different speeds, whereas in a synchronous circuit all components must be synchronized to a central clock. Another benefit is that an asynchronous circuit can theoretically be faster. In a synchronous circuit, the clock period must be long enough for the worst-case delay of the slowest component, so the estimated time is always the worst case. In asynchronous circuits, each process starts as soon as the previous one finishes and does not have to wait for the clock.
References  Bartha, Mikl´os and Cirovic, Branislav (May, 2005) On some equivalence notions of synchronous systems. Retrieved from http://www.cs.mun.ca/~bartha/linked/afl05.pdf
 Butterfield, Andrew (May 15, 2007) Unifying Synchronous Systems. Retrieved from https://www.scss.tcd.ie/Andrew.Butterfield/USS/Research-Programme-Summary.pdf
 Davis, Al and Nowic, Steven (1997) An Introduction to Asynchronous Circuit Design. Retrieved from http://www1.cs.columbia.edu/async/publications/davis-nowick-intro-tr.pdf
 DeMone, Paul (n.d.) The Myths and Realities of Overclocking. Retrieved from http://www.realworldtech.com/page.cfm?ArticleID=RWT122199000000
 Edwards, D. A., and Toms, W. B. (n.d.) The Status of Asynchronous Design in Industry. Retrieved from http://www.scism.lsbu.ac.uk/ccsv/ACiD-WG/AsyncIndustryStatus.pdf
 Fairhurst, Gorry (n.d.) Asynchronous Communication. Retrieved from http://www.erg.abdn.ac.uk/~gorry/eg2069/async.html
 Meseguer, Jose and Sha, Lui (Nov 29 2011) Physically Asynchronous Logically Synchronous Architecture. Retrieved from http://cs.illinois.edu/news/2011/Nov29-01.
 Microsoft Windows Dev Center (n.d.) Synchronous and Asynchronous I/O. Retrieved from http://msdn.microsoft.com/en-us/library/windows/apps/hh464924.aspx
 Tanenbaum, Andrew S. (1989). Computer Networks. 2nd ed. Englewood Cliffs, NJ.
 Wikipedia (2012) International Article Number (EAN). Retrieved from http://en.wikipedia.org/wiki/International_Article_Number_(EAN)
 Wikipedia (2012) International Standard Book Number. Retrieved from http://en.wikipedia.org/wiki/International_Standard_Book_Number
 Wikipedia (2012) Universal Product Code. Retrieved from http://en.wikipedia.org/wiki/Universal_Product_Code