Data traffic networks


In the days before mass physical inter-connectivity of computers began, data traffic networks consisted of people physically carrying storage media between devices; the 'sneakernet' predated the 'internet'. From its humble beginnings, the internet has grown ferociously, both in the number of inter-connected devices and in the amount of data traffic conveyed through its countless wires, switches, routers, bridges and hubs.

For all its growth over the last twenty years, the basic means of transporting information through the internet has remained the same: sending 'packets' of data through the connection media. These media consist of copper and fibre-optic cables, with hubs, switches and routers providing the 'glue' between all the origins and destinations within the internet. The internet operates on what is known as a 'best guess path determination' or 'best effort delivery' system. The majority of the internet routes packets on a 'by hop' basis; that is to say, no node knows how to reach every other node in the network. How then does a packet move through the network? When a packet arrives at a router, route determination algorithms decide which adjacent node the packet should be forwarded to. This on-the-fly decision-making process is the crux of the internet and is brought about by a raft of cooperating software algorithms called protocols. Each class of protocol has a specific, defined task and each algorithm within that class will accomplish that task, but no two implementations (e.g. two versions of the same protocol built by different companies) will necessarily accomplish the task in exactly the same manner. Hence the selection of a set of protocols can be critical to the operation of a localised network, e.g. within an organisation or university.
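
A minimal sketch of this per-hop decision making is given below, assuming a hypothetical static forwarding table at each router (real routers populate such tables dynamically via routing protocols; the node names are invented):

```python
# Hypothetical per-hop forwarding: each router knows only the next hop
# towards a destination, never the full path.
forwarding_tables = {
    "R1": {"host_b": "R2", "host_c": "R3"},
    "R2": {"host_b": "host_b", "host_c": "R3"},
    "R3": {"host_b": "R2", "host_c": "host_c"},
}

def route_packet(source_router, destination):
    """Follow next-hop decisions until the packet reaches its destination."""
    node, path = source_router, [source_router]
    while node != destination:
        node = forwarding_tables[node][destination]  # decision made afresh at each hop
        path.append(node)
    return path

print(route_packet("R1", "host_c"))  # ['R1', 'R3', 'host_c']
```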

The mechanisms underpinning the internet can be grouped into four separate layers (a short sketch of how the layers nest follows this list):


  1. Application layer: this is the software running on a computer that wishes to communicate with some other computer. For example, a web browser or an email program both operate at the application layer.

  2. Transport layer: this layer ensures the end-to-end connectivity of data. If necessary it detects when data has been lost and retransmits it.

  3. Internet layer: this layer ensures that a packet can get between a source and a destination. It is in charge of working out which direction packets need to go at each node in order to get to their destination.

  4. Host-to-network layer: this layer is the physical hardware which connects two adjacent machines, for example, a satellite link, an optical cable or simply some wires. It provides a basic level of connection between two machines which are logically adjacent to each other.
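
As an illustration of how these layers nest, the following sketch wraps a piece of application data in transport, internet and link-level headers before it is placed on the wire. The header fields and addresses are invented for clarity and are not real protocol formats:

```python
# Illustrative encapsulation only: each layer adds its own header and treats
# everything handed down from above as an opaque payload.
application_data = b"GET /index.html"                      # application layer

transport_segment = {"src_port": 49152, "dst_port": 80,    # transport layer
                     "payload": application_data}

internet_packet = {"src_ip": "192.0.2.10",                 # internet layer
                   "dst_ip": "198.51.100.7",
                   "payload": transport_segment}

link_frame = {"src_mac": "aa:bb:cc:00:00:01",              # host-to-network layer
              "dst_mac": "aa:bb:cc:00:00:02",
              "payload": internet_packet}

# On receipt, each layer inspects only its own header and passes the payload upwards.
```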

The route decision-making protocols running in the routers use the network topology information gathered by the topology information-gathering protocols to make appropriate decisions about which next hop to send a packet to. The algorithms in these first two protocol groups are becoming more and more sophisticated as the internet grows, since one of the main requirements for network installation is to provide the greatest bandwidth (and thus the highest packet throughput) at the least cost. The carrier protocols set up the different means of information transfer; protocols in this group differ according to the physical media used or the reliability requirements of the data transfer. One protocol may operate by setting up a complete, known path from origin to destination before the first data packet in the transmission starts its journey. An alternative to the 'by hop' routing used in the majority of the internet is to set up a 'virtual' path that is created temporarily and whose constituent links may be removed as soon as the last of the set of data packets has completed its journey across each link.
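
As a sketch of how a route decision-making protocol might turn gathered topology information into path choices, the following computes cheapest paths over an assumed set of link costs using Dijkstra's algorithm. The topology and costs are made up; real link-state protocols such as OSPF apply far more elaborate versions of this idea:

```python
import heapq

# Assumed topology: link costs as gathered by topology information-gathering protocols.
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}

def cheapest_paths(source):
    """Dijkstra's algorithm: cost of the cheapest path from source to every node."""
    costs = {source: 0}
    queue = [(0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > costs.get(node, float("inf")):
            continue  # stale queue entry, a better path was already found
        for neighbour, link_cost in topology[node].items():
            new_cost = cost + link_cost
            if new_cost < costs.get(neighbour, float("inf")):
                costs[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour))
    return costs

print(cheapest_paths("A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```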

One level of congestion control on the internet is provided by the Transmission Control Protocol (TCP). The mechanism is as follows. When the network is congested, packets are lost or 'dropped'. Packet loss is part of normal internet operation and should not be thought of as a failure of the network. A receipt mechanism (known as acknowledgements or ACKs) ensures that lost packets are retransmitted, and this mechanism also allows a crude level of congestion control. If an ACK is not received for a packet, the packet is assumed lost and must be retransmitted. In addition, the sender reduces the rate at which data is sent, on the assumption that the packet loss was caused by network congestion. In this way the network is, to a certain extent, resilient against overloading.
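
A much-simplified sketch of this sender-side behaviour follows: the congestion window grows while acknowledgements arrive and is halved when a loss is assumed. Real TCP implementations also include slow start, timeouts and fast retransmit, none of which is modelled here:

```python
# Simplified additive-increase / multiplicative-decrease congestion control.
def update_congestion_window(cwnd, ack_received):
    if ack_received:
        return cwnd + 1       # additive increase: probe for spare bandwidth
    return max(1, cwnd // 2)  # multiplicative decrease: assume loss means congestion

cwnd = 10
for ack in [True, True, True, False, True]:
    cwnd = update_congestion_window(cwnd, ack)
    print(cwnd)  # 11, 12, 13, 6, 7
```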

To monitor and manage such behaviour, network designers use systems such as Data Stream Management Systems (DSMS) and the Simple Network Management Protocol (SNMP) to observe the movement of data packets within the network (Vogiatzis, Ikeda, Woolley and He, 2003; Babcock, Babu, Datar, Motwani and Widom, 2002; Arasu, Babcock, Babu, McAlister and Widom, 2002; Babu, Subramanian and Widom, 2001; Ikeda and Vogiatzis, 2003). Such systems give network designers and network administrators the ability to manage the data traffic within the network and to formulate new techniques to improve the movement of data within those networks.
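
A sketch of the kind of calculation such monitoring performs is shown below, assuming two successive polls of a hypothetical interface byte counter (the sample values and link capacity are illustrative; SNMP exposes comparable counters such as ifInOctets):

```python
# Hypothetical polled samples of an interface's received-bytes counter.
sample_1 = {"timestamp": 0.0,  "in_octets": 1_250_000}
sample_2 = {"timestamp": 60.0, "in_octets": 46_250_000}

link_capacity_bps = 100_000_000  # assumed 100 Mbit/s link

octets = sample_2["in_octets"] - sample_1["in_octets"]
seconds = sample_2["timestamp"] - sample_1["timestamp"]
throughput_bps = octets * 8 / seconds
utilisation = throughput_bps / link_capacity_bps

print(f"{throughput_bps / 1e6:.1f} Mbit/s, {utilisation:.0%} utilised")  # 6.0 Mbit/s, 6% utilised
```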

The network management protocols do not in themselves assist in the transport of packets over the network (except for those protocols that detect and correct collisions); they exist to assist the people administering the network. A significant amount of traffic on any network is required simply to keep the network running well, and there is a trade-off: if too much of this 'support' data is exchanged, the bandwidth available for the data to be transferred is reduced.

Network controllers (routers and switches), not the data packets themselves, determine the movement of data through the network, although the data packets do contain their origin and destination network addresses. A router knows the destination of each packet it receives but may be unaware of exactly where that destination is. The cooperating set of protocols enables the router to locate the next adjacent machine that is closer to the destination; in a properly configured network, the packet should eventually reach its destination.
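
The sketch below illustrates how a router can forward towards a destination it has never seen explicitly, by matching the destination address against aggregated prefixes in its table and choosing the most specific match. The prefixes and next-hop names are made up for illustration:

```python
import ipaddress

# Hypothetical forwarding table: address prefixes mapped to next hops.
forwarding_table = {
    ipaddress.ip_network("10.0.0.0/8"):  "router_a",
    ipaddress.ip_network("10.1.0.0/16"): "router_b",
    ipaddress.ip_network("0.0.0.0/0"):   "default_gateway",
}

def next_hop(destination):
    """Longest-prefix match: the most specific matching prefix wins."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in forwarding_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return forwarding_table[best]

print(next_hop("10.1.2.3"))    # router_b (the /16 is more specific than the /8)
print(next_hop("192.0.2.55"))  # default_gateway
```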

Another 'simplifying' factor is that data packets do not choose their 'mode' of travel; there is no volition to prefer 'public' fibre backbones over 'private' ones or to use a particular carrier protocol. The various network topology information-gathering protocols can dynamically assign costs to routes between nodes of a network. A cost can be a composite of different metrics such as bandwidth, reliability, speed, number of routers to the destination, load and so on. As would be expected, these costs are used by routers, or more specifically by the route decision-making protocols operating within the routers, to choose a path. Further, network administrators can adjust the weighting given to each metric component of the cost. Hence the dynamic routing protocols operating within routers use this cost information to make path-determination decisions for each data packet.
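
A sketch of how a composite link cost might be formed from several metrics with administrator-chosen weightings is given below. The metric names, weights and scaling are assumptions for illustration and do not correspond to any particular routing protocol:

```python
# Illustrative composite cost: lower is better. Weights are set by the administrator.
weights = {"bandwidth": 0.5, "reliability": 0.2, "hop_count": 0.2, "load": 0.1}

def link_cost(bandwidth_mbps, reliability, hop_count, load):
    return (weights["bandwidth"]   * (1000 / bandwidth_mbps)   # higher bandwidth -> lower cost
          + weights["reliability"] * (1 - reliability) * 100   # less reliable -> higher cost
          + weights["hop_count"]   * hop_count
          + weights["load"]        * load * 100)               # heavily loaded -> higher cost

# A fast but heavily loaded link versus a slower, quieter one.
print(link_cost(bandwidth_mbps=1000, reliability=0.999, hop_count=3, load=0.9))
print(link_cost(bandwidth_mbps=100,  reliability=0.999, hop_count=2, load=0.1))
```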

Continued improvement in the optimising algorithms on which the network depends, and improved performance of routers in particular (i.e. faster decision-making and reduced propagation delay), have resulted in reduced data transmission times. Coupled with a greatly reduced total cost of ownership, these shorter transmission times have led more people to use the internet and data networks. Despite the increase in users, the performance of the network has not generally degraded (except on the odd occasion when the system is overloaded).
