The data from multiple application layer protocols cannot just be passed on to the lower layers in a single block, as this would lead to severe delays in sending data to the NIC.
To visualise this problem, imagine you are heading to the till in a supermarket. Some shoppers have heavily loaded trolleys, and it takes several minutes to scan, bag and pay for all their items. Customers with fewer items use the ‘ten items or less’ queue, and although there may be more people in line, they are each served much more quickly than those in the queue for trolleys. Now imagine there is only one queue, and the customers with a few items are forced to queue alongside those with a trolley. The customers with only a few items have to wait longer to be served.
This is exactly the same problem faced by the application layer protocols, as they all send different-sized blocks of data to the NIC. FTP may try to send a file measured in megabytes, whereas SMTP may send an email of only a few kilobytes. If FTP gets its data to the NIC first, then transmission of the email is substantially delayed.
One of the primary jobs of the transport layer is to divide all the data received from the application layer protocols into equal segments, which can then be mixed together (multiplexed) and passed to the next layer for processing. This process ensures that all protocols receive an equal share of the capacity of the device’s NIC.
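This segmenting and interleaving can be sketched in a few lines of Python. The payloads, segment size and round-robin scheme below are invented purely for illustration; real transport protocols use much larger segments and more sophisticated scheduling.

```python
# Sketch of transport-layer segmentation and multiplexing: payloads
# from two hypothetical applications are cut into equal-sized segments
# and interleaved so that neither monopolises the NIC.
SEGMENT_SIZE = 4  # bytes per segment (tiny, for illustration only)

def segment(data: bytes) -> list[bytes]:
    """Split a payload into fixed-size segments."""
    return [data[i:i + SEGMENT_SIZE] for i in range(0, len(data), SEGMENT_SIZE)]

ftp_data = b"A large file transfer..."   # a 'big' FTP payload
smtp_data = b"Short email"               # a small SMTP payload

# Round-robin multiplexing: take one segment from each source in turn,
# so the short email is not stuck behind the whole file.
queues = [segment(ftp_data), segment(smtp_data)]
multiplexed = []
while any(queues):
    for q in queues:
        if q:
            multiplexed.append(q.pop(0))
```

Notice that the first SMTP segment is sent immediately after the first FTP segment, rather than waiting for the entire file, which is exactly the supermarket-queue problem being solved.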
Once the data is divided into segments it needs to be tracked so that if they are delivered out of sequence, or some get lost, then steps can be taken to re-order or recover them. The transport layer thus encapsulates the segments it creates with a header, which contains sequence numbering to allow for segment tracking.
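The effect of a sequence-numbered header can be sketched as follows. The 2-byte header format here is invented for illustration; it is not the real TCP header layout.

```python
# Sketch: each segment is encapsulated with a minimal header carrying a
# sequence number, so the receiver can re-order segments that arrive
# out of sequence. The header format is illustrative, not a real one.
import struct

def encapsulate(segments):
    """Prefix each segment with a 2-byte big-endian sequence number."""
    return [struct.pack("!H", seq) + seg for seq, seg in enumerate(segments)]

def reassemble(received):
    """Sort received segments by sequence number, then strip the headers."""
    ordered = sorted(received, key=lambda pkt: struct.unpack("!H", pkt[:2])[0])
    return b"".join(pkt[2:] for pkt in ordered)

packets = encapsulate([b"seg0", b"seg1", b"seg2"])
shuffled = [packets[2], packets[0], packets[1]]  # delivered out of order
recovered = reassemble(shuffled)
```

Because every segment carries its own sequence number, the receiver recovers the original byte stream regardless of arrival order, and a gap in the numbering reveals a lost segment.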
When segments are received, they need to be placed in the correct order to recover the original data that was sent, but this takes time, and if your device is receiving segments from multiple applications it can get extremely busy and may not be able to cope, leading to data loss. To prevent this, the transport layer can implement flow control, which allows a device receiving segments to limit the number of segments that are sent to it from a transmitting device.
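A simple way to picture flow control is a sliding window: the receiver will accept only a fixed number of unacknowledged segments, and the sender must pause until acknowledgements arrive. The window size and trace format below are invented for illustration.

```python
# Sketch of flow control with a sliding window: the sender may have at
# most WINDOW unacknowledged segments outstanding at any moment.
from collections import deque

WINDOW = 3  # receiver can buffer at most 3 unacknowledged segments

def send_all(segments, window=WINDOW):
    """Simulate a windowed sender; returns the send/ack event trace."""
    trace, in_flight = [], deque()
    for seg in segments:
        if len(in_flight) == window:           # window full: wait for an ack
            trace.append(("ack", in_flight.popleft()))
        in_flight.append(seg)
        trace.append(("send", seg))
    while in_flight:                           # drain the remaining acks
        trace.append(("ack", in_flight.popleft()))
    return trace

trace = send_all([b"s0", b"s1", b"s2", b"s3", b"s4"])
```

In the trace, the fourth segment cannot be sent until the first has been acknowledged, which is how the receiver throttles a sender that would otherwise overwhelm it.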
The two most common transport layer protocols of TCP/IP are Transmission Control Protocol (TCP) and User Datagram Protocol (UDP).
Both protocols manage the communication of multiple applications.
The differences between the two centre on the specific functions each protocol implements.
TCP provides reliable delivery of data and therefore supports all the functions described above – segmentation, multiplexing, sequencing and flow control. The disadvantage of using TCP is that, due to its complexity, it can introduce unwanted delays between communicating devices.
UDP provides rapid delivery of data, but without reliability. UDP only provides segmenting and multiplexing of data received from the application layer. Communication programs carrying voice and video are typically intolerant of delay and therefore use UDP.
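The choice between the two protocols is visible directly in the sockets API. In Python, for example, `SOCK_STREAM` selects TCP and `SOCK_DGRAM` selects UDP; the short local exchange below (addresses and message are illustrative) shows a UDP datagram being sent without any connection setup.

```python
# TCP vs UDP in Python's socket API: SOCK_STREAM gives a TCP socket
# (reliable, ordered delivery); SOCK_DGRAM gives a UDP socket
# (connectionless, no delivery guarantees).
import socket

tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # TCP
udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP

# A UDP exchange on the loopback interface: no handshake is needed,
# the datagram is simply sent to the receiver's address and port.
udp_recv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp_recv.bind(("127.0.0.1", 0))            # port 0: let the OS choose
data_sent = b"hello"
udp_sock.sendto(data_sent, udp_recv.getsockname())
data, sender = udp_recv.recvfrom(1024)

tcp_sock.close()
udp_sock.close()
udp_recv.close()
```

A TCP exchange would require `connect()` and `accept()` first, which is part of the extra work that makes TCP reliable but slower to get started.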
Figure 4
Both TCP and UDP keep track of the application layer protocols they handle by using port numbers, which act like doorways between the transport and application layers. These range from 1 to 65535, and protocols are associated with individual port numbers:
SMTP: port 25
POP3: port 110
HTTP: port 80
FTP: ports 20 and 21
How ports operate is slightly more complex than indicated above, as only server processes use fixed, or well-known, ports. Client processes (e.g. a web browser) using HTTP will select a random, unused port, known as an ephemeral port. This process will be examined in more detail in a later module.
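The fixed-versus-random split can be demonstrated with two local sockets. A real HTTP server would bind the well-known port 80 (which requires privileges), so this sketch asks the operating system for an arbitrary free port instead; the client's port is chosen by the OS at connect time.

```python
# Sketch: a server binds a fixed port, while a client is given a random
# ephemeral port by the OS. Port 0 means "pick any free port"; a real
# HTTP server would bind port 80 instead.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))                # stand-in for a well-known port
server.listen(1)
server_port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", server_port))
client_port = client.getsockname()[1]        # ephemeral port chosen by the OS

conn, peer = server.accept()                 # peer[1] is the client's port
conn.close()
client.close()
server.close()
```

Run it twice and the client port will almost certainly differ each time, while a real server's well-known port (25, 80, 110 and so on) stays the same so that clients always know where to find it.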