
Exam 2 Review
Chapter 5. Network and Transport Layer
Review focuses:

  1. Three functions of TCP/IP:

  • Packetizing: breaking data into packets, numbering, error control, reassembling - TCP

  • Addressing: determines the correct network layer and data link layer addresses - IP

  • Routing: determines where the message should be sent next on its way to its final destination. – IP




  2. Addressing

  1. Internet IP addresses:

  • address assigning

  • dynamic addressing: Bootstrap Protocol (BOOTP), Dynamic Host Configuration Protocol (DHCP)

  2. Address classes: A, B, C, D, E

  3. Subnet and subnet mask

- Why need subnet mask and how to design a subnet mask

  4. Three levels of addresses:

application layer address (domain name), network layer address, data link layer address

  5. Address resolution: server name resolution, data link layer address resolution

  6. Four pieces of information for a client: its IP address, subnet mask, DNS server IP address, gateway IP address

  7. Multicasting




  3. Routing

  1. Dynamic routing: routing table update, RIP, ICMP, and OSPF

  2. Connectionless vs. connection-oriented (virtual circuit)

  3. Traffic types: real-time, elastic

  4. QoS: Quality of Service (QoS) is the idea that transmission rates, error rates, and other characteristics can be measured, improved, and, to some extent, guaranteed in advance.

  5. QoS routing: a special connection-oriented dynamic routing in which different data flows are assigned different priorities and classes of service.

  6. Network congestion: what is it?

  7. Network service types: best effort, IntServ, and DiffServ


Key terms:

Address resolution, routing, application layer address, network layer address, data link layer address, multicasting, subnet mask, virtual circuit, domain name, name server, QoS routing, routing table, IntServ, DiffServ, flow control, traffic congestion.



Chapter 6. LANs





  1. Network components

    1. Computers (server, client)

    2. NIC

    3. Cable

    4. NOS

    5. Hub

  2. Network topology

    1. Physical vs. logical

  3. Ethernet

    1. Standard: IEEE 802.3

    2. CSMA/CD

    3. 10Base-T vs. 100Base-T Ethernet (3 versions)

    4. New types of Ethernet: 1000Base-T (1GbE), 10GbE, 40GbE

  4. Wireless LAN

    1. WLAN and LAW

    2. Standard: IEEE 802.11b, IEEE 802.11a and IEEE 802.11g

    3. MAC protocol: CSMA/CA, (contrast with CSMA/CD)

    4. Hidden node problem and how it is solved

  5. How to improve the performance of a LAN


Key terms:

NOS, CSMA/CD, CSMA/CA, 10Base-T, 100Base-T, 1000Base-T, hidden node problem.




Chapter 7. Backbone Networks





  1. Backbone devices

    1. Hub

    2. Bridge

    3. Switch (Layer 3 and layer 2 switches)

    4. Router

    5. Gateway

  2. Contrast between different kinds of networking devices

    1. Switch vs. hub

    2. Switch vs. bridge

    3. Bridge vs. router

    4. Router vs. gateway

    5. Router vs. switch

  3. Backbone design

    1. Three layers of architecture: access layer, distribution layer, and core layer

    2. Four architectures:

- Routed backbone – using routers

Advantage – clearly segment each part of the network

Disadvantage – Delay, and more management

- Bridged backbone – using bridges, not popular any more

Advantages – cheaper, simpler

Disadvantages – difficulties in management

- Collapsed backbone – using switches; the most commonly used.

Advantages - Better performance, Fewer network devices are used

Disadvantages – a switch failure may bring down the whole network; more cabling work

Two types

Rack-based collapsed backbone

Chassis-based collapsed backbone

Virtual LAN (VLAN)


  4. Backbone network technologies

    1. FDDI

      1. 100Mbps

      2. Dual-ring structure

      3. Token passing MAC protocol

    2. ATM

      1. 155/622 Mbps

      2. Fixed size cell/packet (53 bytes)

      3. Connection-oriented service

      4. Two types of virtual circuits: permanent virtual circuit (PVC), switched virtual circuit (SVC)

      5. Packet conversion between Ethernet and ATM – LAN Emulation (LANE)

      6. Four service classes: CBR, VBR, ABR, UBR

    3. Fibre Channel


Questions/Answers (not necessarily covering all topics):


  1. How is TCP different from UDP?

TCP is a connection-oriented protocol; UDP is a connectionless protocol.


  2. What are the differences between connectionless and connection-oriented routing?


Connection-oriented routing sets up a temporary virtual circuit between the sender and receiver. The network layer makes one routing decision when the connection is established, and all packets follow the same route, so all packets in a message arrive at the destination in the order in which they were sent. Packets only need to contain information about the stream to which they belong; sequence numbers are not strictly needed, although many connection-oriented protocols include one to ensure that all packets are actually received.
Connection-oriented routing has greater overhead than connectionless routing, because the sender must first “open” the circuit by sending a control packet that instructs all the intervening devices to establish the circuit routing. Likewise, when the transmission is complete, the sender must “close” the circuit. Connection-oriented protocols also tend to have more overhead bits in each packet.
Connectionless routing means each packet is treated separately and makes its own way through the network. It is possible that different packets will take different routes through the network depending upon the type of routing used and the amount of traffic. Because packets following different routes may travel at different speeds, they may arrive out of sequence at their destination. The sender’s network layer therefore puts a sequence number on each packet, in addition to information about the message stream to which the packet belongs. The network layer must reassemble them in the correct order before passing the message to the application layer.
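The sequence-number mechanism described above can be sketched in a few lines of Python (the packet format below is hypothetical, for illustration only), showing how a receiver's network layer could reassemble packets that arrived out of order:

```python
# Hypothetical sketch of receiver-side reassembly in connectionless
# routing: packets of one message carry sequence numbers, and the
# receiver sorts on them before delivering the message upward.

def reassemble(packets):
    """Order packets by sequence number and join their payloads."""
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

# Packets arrive out of order because they took different routes.
arrived = [
    {"seq": 2, "data": "WOR"},
    {"seq": 0, "data": "HEL"},
    {"seq": 3, "data": "LD"},
    {"seq": 1, "data": "LO "},
]
print(reassemble(arrived))  # HELLO WORLD
```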


  3. How does TCP establish a connection?

TCP sets up a virtual circuit between the sender and the receiver. The transport layer software sends a special packet (called a SYN, for synchronization) to the receiver requesting that a connection be established. The receiver either accepts or rejects the connection, and together they settle on the packet sizes the connection will use. Once the connection is established, the packets flow between the sender and the receiver, following the same route through the network.
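In practice the handshake is performed by the operating system's TCP implementation. A minimal local sketch using Python's standard socket API (loopback address and an OS-chosen port are assumptions for illustration; the SYN exchange happens inside connect()):

```python
import socket

# A listener stands in for the receiver; port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # SYN / SYN-ACK / ACK happen here
conn, _addr = server.accept()        # the receiver accepts the connection

client.sendall(b"hello")             # packets now flow over the circuit
received = conn.recv(1024)
print(received)                      # b'hello'

client.close(); conn.close(); server.close()
```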




  4. What is a subnet and why do networks need them?


Each organization must assign the IP addresses it has received to specific computers on its networks. In general, IP addresses are assigned so that all computers on the same local area network have similar addresses. For example, suppose a university has just received a set of Class B addresses starting with 128.184.x.x. It is customary to assign all the computers in the same LAN numbers that start with the same first three numbers, so the Business School LAN might be assigned 128.184.56.x while the Computer Science LAN might be assigned 128.184.55.x (see Figure 6-8). Likewise, all the other LANs at the university, and the backbone network that connects them, would have different sets of numbers. Each of these LANs is called a TCP/IP subnet because its computers are logically grouped together by IP number. Knowing whether a computer is on your subnet or not is very important for message routing.
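The subnet-membership check this implies can be sketched with Python's standard ipaddress module (the /24 mask below is an assumption matching the "same first three numbers" convention in the example):

```python
import ipaddress

# The Business School subnet from the example above, with a 255.255.255.0
# (/24) mask so that the first three numbers identify the subnet.
business = ipaddress.ip_network("128.184.56.0/24")

host_a = ipaddress.ip_address("128.184.56.20")  # a Business School machine
host_b = ipaddress.ip_address("128.184.55.7")   # a Computer Science machine

print(host_a in business)  # True  -> same subnet, deliver directly
print(host_b in business)  # False -> different subnet, send via a gateway
```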


  5. How does TCP/IP perform address resolution for network layer addresses?


Server name resolution is the translation of application layer addresses into network layer addresses (e.g., translating an Internet address such as www.cba.uga.edu into an IP address such as 128.192.98.3). This is done using the Domain Name System (DNS). Throughout the Internet there is a series of computers called name servers that provide DNS service. These name servers run special address databases that store thousands of Internet addresses and their corresponding IP addresses; they are in effect the "directory assistance" computers for the Internet. Any time a computer does not know the IP number for another computer, it sends a message to a name server requesting the IP number.
When TCP/IP needs to translate an application layer address into an IP address, it sends a request packet to the nearest DNS server (DNS requests are normally carried over UDP). This packet asks the DNS server to send the requesting computer the IP address that matches the Internet address provided. If the DNS server has a matching name in its database, it sends back a response with the correct IP address. If that DNS server does not have that Internet address in its database, it will issue the same request to another DNS server elsewhere on the Internet.
Once your computer receives an IP address, it is stored in a server address table. This way, if you ever need to access the same computer again, your computer does not need to contact a DNS server. Most server address tables are routinely deleted whenever you turn off your computer.
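The lookup-then-cache behavior described above can be sketched as follows (the dictionary stands in for the server address table; the actual DNS query is delegated to the operating system's resolver via Python's socket module):

```python
import socket

address_table = {}  # application layer name -> IP address ("server address table")

def resolve(name):
    """Return the IP for name, asking a name server only on a cache miss."""
    if name not in address_table:
        address_table[name] = socket.gethostbyname(name)  # query a name server
    return address_table[name]  # later calls are answered from the cache

print(resolve("localhost"))  # typically 127.0.0.1
```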


  6. How does TCP/IP perform address resolution for data link layer addresses?

To send a message to a computer in its network, a computer must know the correct data link layer address. In this case, the TCP/IP software sends a broadcast message to all computers in its subnet. A broadcast message, as the name suggests, is received and processed by all computers in the same LAN (which is usually designed to match the IP subnet). The message is a specially formatted request using the Address Resolution Protocol (ARP) that says "Whoever is IP address xxx.xxx.xxx.xxx, please send me your data link layer address." The software in the computer with that IP address then responds with its data link layer address. The sender transmits its message using that data link layer address, and also stores the address in its address table for future use.
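A toy simulation of that broadcast-and-reply exchange (all addresses are hypothetical; real ARP runs below IP and is handled by the operating system):

```python
subnet = {  # every computer on the LAN: IP address -> data link (MAC) address
    "192.168.1.10": "AA:BB:CC:00:00:10",
    "192.168.1.11": "AA:BB:CC:00:00:11",
}
arp_table = {}  # the sender's cache of resolved addresses

def arp_resolve(target_ip):
    """Broadcast 'whoever is target_ip, send me your address'; only the
    computer that owns that IP replies, and the answer is cached."""
    if target_ip in arp_table:
        return arp_table[target_ip]      # answered from the address table
    for ip, mac in subnet.items():       # the broadcast reaches everyone...
        if ip == target_ip:              # ...but only the owner responds
            arp_table[target_ip] = mac   # store for future use
            return mac
    return None                          # no computer claimed that address

print(arp_resolve("192.168.1.11"))  # AA:BB:CC:00:00:11
```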




  7. Explain the terms 10Base2, 10BaseT, 100BaseT, 1000BaseT, 10GbE, and 10/100 Ethernet.

The original Ethernet specification was a 10 Mbps data rate using baseband signaling on thick coaxial cable, called 10Base5 (or "Thicknet"), capable of running 500 meters between hubs. Following 10Base5 was 10Base2, or "Thinnet." Thinnet used RG-58 coaxial cable, similar to what is used for cable TV; it was considerably cheaper and easier to work with, although it was limited to 185 meters between hubs. The 10Base2 standard was often called "Cheapnet."
When twisted-pair cabling was standardized for Ethernet (approximately 1988), the "T" replaced the "2" to represent twisted pair, which is now the most commonly used cable type for Ethernet. 10BaseT breaks down as 10 Mbps, baseband, over twisted-pair wiring (actually unshielded twisted pair). It was the 10BaseT standard that revolutionized Ethernet and made it the most popular type of LAN in the world.
Eventually the 10BaseT standard was improved to support Fast Ethernet, or 100BaseT, which breaks down as 100 Mbps baseband over twisted-pair cable. This was improved even further to 1000BaseT, or 1 Gbps baseband. A still faster revision of the standard, 10GbE (10 Gbps Ethernet), has been proven to work but has yet to reach the marketplace; it would be astute to expect it in the near future.

Finally, 10/100 Mbps Ethernet refers to equipment that can autosense which of the two speeds, 10 Mbps or 100 Mbps, it needs to run at. The speed comes down to the type of NIC in the individual node and the type of switch port the node connects into. It is commonplace to run 10/100 Mbps switches in LAN environments where older NICs are already operating and there is no real business case for upgrading those nodes.




  8. Explain how the two approaches to media access control work in CSMA/CA.

The two approaches are the Physical Carrier Sense Method (PCSM) and the Virtual Carrier Sense Method (VCSM). PCSM is based on the ability of the computers to physically listen to the medium before they transmit. After a transmission is sent, the receiving computer acknowledges it by sending an ACK packet in reply; upon receipt of the ACK, the source computer knows the transmission succeeded and can continue sending to the destination computer. VCSM does not rely on physically sensing the medium. A computer running this protocol first sends a Request to Send (RTS) packet to the access point (AP). If the medium is clear, the AP responds with a Clear to Send (CTS) packet back to the source computer, which may then begin transmission.
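The RTS/CTS exchange can be modeled with a toy sketch (class and frame names below are illustrative only; real 802.11 frames also carry duration fields and timers):

```python
class AccessPoint:
    """Toy model of an AP arbitrating the medium via RTS/CTS."""

    def __init__(self):
        self.reserved = False  # is the medium already granted to a station?

    def request_to_send(self, station):
        """Handle an RTS: grant a CTS if the medium is free, else nothing."""
        if self.reserved:
            return None            # no CTS; the station must wait and retry
        self.reserved = True       # reserve the medium for this station
        return ("CTS", station)    # clear to send

ap = AccessPoint()
print(ap.request_to_send("laptop-1"))  # ('CTS', 'laptop-1')
print(ap.request_to_send("laptop-2"))  # None: medium reserved, must wait
```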




  9. Explain how routed backbones work.


Routed backbones move packets along the backbone based on their network layer address (i.e., layer 3 address). The most common form of routed backbone uses a bus topology (e.g., using Ethernet 100Base-T). Routed backbones can be used at the core or distribution layers.
At the core layer routed backbones are sometimes called subnetted backbones or hierarchical backbones and are most commonly used to connect different buildings within the same campus network.
At the distribution layer a routed backbone uses routers or layer 3 switches to connect a series of LANs (access layer) to a single shared media backbone network. Each of the LANs is a separate subnet. Message traffic stays within each subnet unless it specifically needs to leave the subnet to travel elsewhere on the network, in which case the network layer address (e.g., TCP/IP) is used to move the packet.


  10. Explain how bridged backbones work.

Bridged backbones move packets along the backbone based on their data link layer address (i.e., layer 2 address). The most common form also uses a bus topology. They were common in the distribution layer, but their use is declining; few organizations install bridged networks because they have major performance problems. Bridged backbones are sometimes called flat backbones.


With a bridged backbone, the entire network (the backbone and all connected network segments) is on the same subnet. All LANs are part of the same overall network and all must have the same data link layer protocol. This is in sharp contrast to the routed backbone, in which the LANs are isolated and may be different.


  11. Explain how collapsed backbones work.

Collapsed backbone networks use a star topology with one device, usually a switch, at its center. The traditional backbone circuit and set of routers or bridges is replaced by one switch and a set of circuits to each LAN. The collapsed backbone has more cable, but fewer devices. There is no backbone cable. The “backbone” exists only in the switch, which is why this is called a collapsed backbone. The original collapsed backbone technology uses layer-2 switches and suffers some disadvantage due to the load of data link layer overhead message traffic and limitations on network segmentation. As this weakness has been recognized, collapsed backbone technology is adapting by evolving to the use of layer-3 switches to overcome these problems. The result is better performance and improved network management capabilities for collapsed backbone networks.


Collapsed backbones are probably the most common type of backbone network used in the distribution layer (i.e., within a building). Most new building backbone networks designed today use collapsed backbones. They also are making their way into the core layer as the campus backbone, but routed backbones still remain common.


  12. What are the key advantages and disadvantages among bridged, routed, and collapsed backbones?







Bridged backbones

Advantages:

  • Since bridges tend to be less expensive than routers, they are often cheaper.

  • Bridges are usually simpler to install because the network manager does not need to worry about building many different subnets and assigning a whole variety of different subnet masks and addresses in each part of the network.

Disadvantages:

  • Bridged backbones pay a penalty for the broadcast paradigm and are slower than routed backbones. Since a bridged backbone and all networks connected to it are part of the same subnet, broadcast messages (e.g., address requests) must be permitted to travel everywhere in the backbone. This means, for example, that a computer in one LAN attempting to find the data link layer address of a server in the same LAN will issue a broadcast message that travels to every computer on every LAN attached to the backbone. (In contrast, on a routed backbone such messages would never leave the LAN in which they originated.)

  • Overhead or utility messages add to the broadcast paradigm penalty. There are many different types of broadcast messages other than address requests (e.g., a printer reporting it is out of paper, a server about to be shut down). These broadcast messages quickly use up network capacity in a large bridged network. The result is slower response times for the user. In a small network, the problems are not as great, because there are fewer computers to issue such broadcast messages.

  • Since the backbone and all attached networks are considered part of the same subnet, it is more difficult to permit different individuals to manage different parts of the network (e.g., LANs); a change in one part of the network has the potential to significantly affect all other parts.

  • It is possible to run out of IP addresses if the entire network has many computers.

Routed backbones

Advantages:

  • Clear segmentation of the parts of the network connected to the backbone, as each network has a subnet address and can be managed separately.

Disadvantages:

  • Slower performance as routing takes more time than bridging or switching.

  • Management and/or software overhead costs due to need to establish subnet addressing and provide reconfiguration when computers are moved (or support dynamic addressing).

Collapsed backbones

Advantages:

  • Performance is improved. With the traditional backbone network, the backbone circuit was shared among many LANs; each had to take turns sending messages. With the collapsed backbone, each connection into the switch is a separate point-to-point circuit. The switch enables simultaneous access, so that several LANs can send messages to other LANs at the same time. Throughput is increased significantly, often by 200% to 600%, depending upon the number of attached LANs and the traffic pattern.

  • Since there are far fewer networking devices in the network, this reduces costs and greatly simplifies network management. All the key backbone devices are in the same physical location, and all traffic must flow through the switch. If something goes wrong or if new cabling is needed, it can all be done in one place.

  • Software reconfiguration replaces hardware reconfiguration.

Disadvantages:

  • Because data link layer addresses are used to move packets, there is more broadcast traffic flowing through the network, and it is harder to isolate and separately manage the individually attached LANs. Layer 3 switches can use the network layer address instead, so collapsed backbones built with layer 3 switches do not suffer from this problem.

  • Collapsed backbones use more cable, and the cable must be run longer distances, which often means that fiber optic cables must be used.

  • If the switch fails, so does the entire backbone network. However, if the switch is as reliable as the routers it replaces, there is less chance of a failure overall, because there are fewer devices to fail.

For most organizations, the relatively minor disadvantages of cable requirements and impacts of potential switch failure are outweighed by the benefits offered by collapsed backbones.






