
Switched Forwarding

Switches forward data based on the destination MAC address contained in the frame’s header. This approach allows switches to replace devices such as hubs and bridges.

After a frame is received and the MAC address is read, the switch forwards the data based on the switching mode in use. This strategy yields very low latency and very high forwarding rates. Switches use three switching modes to forward information through the switching fabric:

Store-and-forward

Cut-through

FragmentFree

Tip Switching fabric is the route data takes to get from the input port on the switch to the output port on the switch. The data may pass through wires, processors, buffers, ASICs, and many other components.

Store-and-Forward Switching

A store-and-forward switch pulls the entire frame into its onboard buffers, reads the whole frame, and calculates its cyclic redundancy check (CRC). It then determines whether the frame is good or bad. If the CRC carried in the frame matches the CRC the switch calculates, the destination address is read and the frame is forwarded out the correct port on the switch. If the CRCs do not match, the frame is discarded. Because this type of switching waits for the entire frame before forwarding, latency can become quite high, which can delay network traffic.
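
The following is a minimal sketch of the store-and-forward decision in Python, assuming a simplified frame whose last 4 bytes carry a CRC-32 frame check sequence (FCS); real switches perform this in hardware, and the frame layout here is illustrative only:

    import zlib

    def store_and_forward(frame):
        body, received_fcs = frame[:-4], frame[-4:]
        computed = zlib.crc32(body).to_bytes(4, "little")
        if computed != received_fcs:
            return None                  # CRC mismatch: discard the frame
        return body[0:6]                 # destination MAC: look up the port, then forward

    # Build a well-formed test frame: 14-byte header + payload + valid FCS.
    body = bytes(6) + bytes(6) + b"\x08\x00" + b"payload"
    frame = body + zlib.crc32(body).to_bytes(4, "little")
    print(store_and_forward(frame) is not None)   # True: the frame is forwarded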

Cut-Through Switching

Sometimes referred to as real-time or FastForward switching, cut-through switching was developed to reduce the latency involved in processing frames as they arrive at the switch and are forwarded to the destination port. The switch begins by pulling the frame header into its network interface card buffer. As soon as the destination MAC address is known (usually within the first 13 bytes), the switch forwards the frame out the correct port.
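
Here is a minimal sketch of this behavior, using an assumed in-memory MAC table; in hardware the tail of the frame is relayed bit by bit while it is still arriving on the input port:

    import io

    def cut_through(stream, mac_table):
        """Forward as soon as the destination MAC is known; no error checking."""
        header = stream.read(14)                    # dst MAC (6) + src MAC (6) + EtherType (2)
        port = mac_table.get(header[0:6], "flood")  # unknown MAC: flood all ports
        # A corrupt frame is forwarded anyway; only the receiver's CRC check catches it.
        return port, header + stream.read()

    frame = bytes.fromhex("aabbccddeeff") + bytes(6) + b"\x08\x00" + b"data"
    port, _ = cut_through(io.BytesIO(frame), {bytes.fromhex("aabbccddeeff"): 3})
    print(port)   # 3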

This type of switching reduces latency inside the switch; however, if the frame is corrupt because of a late collision or wire interference, the switch will still forward the bad frame. The destination receives the bad frame, checks its CRC, and discards it, forcing the source to resend the frame. This process wastes bandwidth, and if it occurs too often, it can have a major impact on the network.

In addition, cut-through switching is limited by its inability to bridge different media speeds. In particular, some network protocols (including NetWare 4.1 and some Internet Protocol [IP] networks) use windowing technology, in which multiple frames may be sent without a response. In this situation, the latency across a switch is much less noticeable, so the on-the-fly switch loses its main competitive edge. Furthermore, the lack of error checking poses a problem for large networks. That said, there is still a place for the fast cut-through switch in smaller parts of large networks.

FragmentFree Switching

Also known as runtless switching, FragmentFree switching was developed to solve the late-collision problem. These switches perform a modified version of cut-through switching: because most frame corruption from collisions shows up within the first 64 bytes, the switch waits for the entire first 64 bytes instead of forwarding as soon as it reads the first 13 bytes. Sixty-four bytes is the minimum valid size for an Ethernet frame, so by verifying that the first 64 bytes have arrived intact, the switch determines whether the frame is good or whether a collision occurred during transit.
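
A minimal sketch of the FragmentFree check, under the same assumed MAC-table setup as above: a frame shorter than 64 bytes is a collision fragment (runt) and is dropped, while longer frames are forwarded without any CRC verification.

    import io

    MIN_ETHERNET_FRAME = 64

    def fragment_free(stream, mac_table):
        head = stream.read(MIN_ETHERNET_FRAME)
        if len(head) < MIN_ETHERNET_FRAME:
            return None                          # runt: a collision occurred in transit
        port = mac_table.get(head[0:6], "flood")
        return port, head + stream.read()        # forward; the CRC is still unchecked

    runt = io.BytesIO(b"\x00" * 40)              # a 40-byte collision fragment
    print(fragment_free(runt, {}))               # None: the fragment is discarded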


Combining Switching Methods

To resolve the problems associated with the switching methods discussed so far, a new method was developed. Some switches, such as the Cisco Catalyst 1900, 2820, and 3000 series, begin with either cut-through or FragmentFree switching. Then, as frames are received and forwarded, the switch also checks each frame’s CRC. Even if the CRC turns out not to match, the frame has already been forwarded, because forwarding begins as soon as the destination MAC address is read and before the CRC check completes. The switch performs this check so that if too many bad frames are forwarded, it can take a proactive role and change from cut-through mode to store-and-forward mode. This method, along with the development of high-speed processors, has reduced many of the problems associated with switching.
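
A minimal sketch of that proactive fallback follows, with an assumed error threshold; the real trigger count is platform-specific and not documented here:

    ERROR_THRESHOLD = 20                 # assumed value, not a documented Cisco figure

    class AdaptiveSwitch:
        def __init__(self):
            self.mode = "cut-through"
            self.crc_errors = 0

        def record_crc_check(self, ok):
            """Called after the post-forwarding CRC check on each frame."""
            if not ok:
                self.crc_errors += 1
            if self.mode == "cut-through" and self.crc_errors > ERROR_THRESHOLD:
                self.mode = "store-and-forward"   # too many bad frames: fall back

    sw = AdaptiveSwitch()
    for _ in range(ERROR_THRESHOLD + 1):
        sw.record_crc_check(ok=False)
    print(sw.mode)   # store-and-forward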

Only the Catalyst 1900, 2820, and 3000 series switches support cut-through and FragmentFree switching. You might wonder why the faster Catalyst series switches do not support this seemingly faster method of switching. The answer is that store-and-forward switching is not necessarily slower than cut-through switching. When switches were first introduced, the two modes performed quite differently; with better processors and integrated-circuit technology, store-and-forward switching can now perform at the physical limits of the wire, so the end user sees no difference between the switching methods.

Switched Network Bottlenecks

This section will take you step by step through how bottlenecks affect performance, some of the causes of bottlenecks, and things to watch out for when designing your network. A bottleneck is a point in the network where data slows due to collisions or too much traffic directed at one resource node (such as a server). In these examples, I will use fairly small, simple networks so that you can grasp the basic strategies and then apply them to larger, more complex networks.

Let’s start small and slowly increase the network size. We’ll take a look at a simple way of understanding how switching technology increases the speed and efficiency of your network. Bear in mind, however, that increasing the speed of your physical links increases the throughput to your resource nodes but doesn’t always increase the overall speed of your network; the added traffic arriving at your resource nodes may itself create a bottleneck.

Figure 1.6 shows a network in which the links to and from the switch have been upgraded to 100Mbps for all nodes. Because every device can send data at 100Mbps (wire speed) to and from the switch, any link that receives data from multiple nodes must be faster than the other links if it is to service the data requests without becoming a bottleneck. Because all the nodes, including the file servers, send data at 100Mbps, the links to the file servers, which are the target of the data transfers from all the other devices, become the bottleneck in the network.
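
The arithmetic behind this bottleneck is simple; here is a rough check with an assumed client count:

    clients = 20                 # assumed number of demand nodes
    client_mbps = 100            # each link upgraded to 100Mbps
    server_link_mbps = 100       # the single 100Mbps link to a file server
    offered_load = clients * client_mbps
    print(offered_load / server_link_mbps)   # 20.0: the server link is 20x oversubscribed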

Figure 1.6: A switched network with only two servers. Notice that the sheer number of clients sending data to the servers can overwhelm the cable and slow the data traffic.


This concept applies to many types of physical media and topologies. In this demonstration, we will use 100BaseT Ethernet; 10BaseT and 100BaseT are the Ethernet types most commonly found in today’s networks.

We’ll now upgrade the network to relieve the bottleneck on the physical link from the switch to each resource node or server. By upgrading this particular link to Gigabit Ethernet, as shown in Figure 1.7, you can eliminate the bottleneck.

Figure 1.7: The addition of a Gigabit Ethernet link on the physical link between the switch and the server.

It would be nice if all network bottleneck problems were so easy to solve. Let’s take a look at a more complex model. In this situation, the demand nodes are connected to one switch and the resource nodes are connected to another switch. As you add users to switch A, a new bottleneck emerges. As you can see from Figure 1.8, the bottleneck is now on the trunk link between the two switches. Even if every switch port is assigned to a VLAN, a trunk link without VTP pruning enabled will carry all the VLANs to the next switch.

Figure 1.8: A new bottleneck on the trunk link between the two switches.

To resolve this issue, you could implement the same solution as in the previous example and upgrade the trunk between the two switches to Gigabit Ethernet. Doing so would eliminate the bottleneck. You want to put switches in place whose throughput is never limited by the number of ports. Such switches are referred to as non-blocking switches.

Non-Blocking Switch vs. Blocking Switch


We call a switch a blocking switch when its bus or other components cannot handle the theoretical maximum throughput of all the input ports combined. There is a lot of debate over whether every switch should be designed as a non-blocking switch, but for now that is only a dream, considering the current pricing of non-blocking switches.
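
A quick way to apply the blocking test is to compare the sum of the port capacities against the fabric rating; the numbers below are assumptions for illustration:

    ports = 24                    # assumed 24-port switch
    port_mbps = 100               # 100BaseT ports
    duplex_factor = 2             # full duplex: send and receive simultaneously
    required_gbps = ports * port_mbps * duplex_factor / 1000
    fabric_gbps = 3.2             # assumed switching-fabric rating
    print(required_gbps)                  # 4.8
    print(fabric_gbps >= required_gbps)   # False: this would be a blocking switch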

Let’s get even more complicated and introduce another solution: implementing two physical links between the two switches and using full-duplex technology. Full duplex essentially means that each port uses separate wire pairs for each direction: data is sent on one and received on the other. This setup not only virtually guarantees a collision-free connection, but also can increase utilization to almost 100 percent on each link.

You now have 200 percent throughput by utilizing both links. If you had 10Mbps on the wire at half duplex, implementing full duplex gives you 20Mbps flowing through the wires. The same goes for a 100BaseT network: instead of 100Mbps, you now have a 200Mbps link.

Tip If the interfaces on your resource nodes support full duplex, enabling it can also serve as a secondary solution for your servers.

Almost every Cisco switch has an acceptable throughput level and will work well in its own layer of the Cisco hierarchical switching model or its designed specification. Implementing VLANs has become a popular solution for breaking a segment into smaller broadcast domains.

Internal Route Processor vs. External Route Processor

Routing between VLANs has been a challenging problem to overcome. In order to route between VLANs, you must use a Layer 3 route processor or router. There are two types of route processors: external and internal. An external route processor uses a separate, external router to route data from one VLAN to another. An internal route processor uses modules and cards located in the switch itself to route between VLANs.
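
Whichever form it takes, the route processor holds a Layer 3 interface in each VLAN and forwards traffic between them. Here is a minimal sketch of that lookup; the VLAN numbers and subnets are assumptions for illustration:

    import ipaddress

    # One Layer 3 interface (subnet) per VLAN on the route processor.
    vlan_interfaces = {
        10: ipaddress.ip_network("192.168.10.0/24"),   # assumed VLAN 10 subnet
        20: ipaddress.ip_network("192.168.20.0/24"),   # assumed VLAN 20 subnet
    }

    def route_to_vlan(dst_ip):
        """Return the VLAN a packet should be forwarded into, or None if unroutable."""
        addr = ipaddress.ip_address(dst_ip)
        for vlan, network in vlan_interfaces.items():
            if addr in network:
                return vlan    # the route processor rewrites the L2 header and forwards
        return None            # without a route processor, VLANs cannot reach each other

    print(route_to_vlan("192.168.20.25"))   # 20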

Now that you have a pretty good idea of how a network should be designed and how to monitor and control bottlenecks, let’s take a look at the general traffic rule and how it has changed over time.

The Rule of the Network Road

Network administrators and designers have traditionally strived to build networks around the 80/20 rule. Using this rule, a designer would aim for a network in which 80 percent of the traffic stayed on local segments and 20 percent of the traffic went across the network backbone.

This was an effective design during the early days of networking, when the majority of LANs were departmental and most traffic was destined for data that resided on the local servers. However, it is not a good design in today’s environment, where the majority of traffic is destined for enterprise servers or the Internet.

A switch’s ability to create multiple data paths and provide swift, low-latency connections allows network administrators to place up to 80 percent of the traffic on the backbone without massively overloading the network. This ability allows for the introduction of many bandwidth-intensive uses, such as network video, video conferencing, and voice communications.

Multimedia and video applications can demand 1.5Mbps or more of continuous bandwidth. In a typical environment, users can rarely obtain this bandwidth if they share an average 10Mbps network with dozens of other people. The video will also look jerky if the data rate is not sustained. Supporting such applications requires a means of providing greater throughput, and the ability of switches to provide dedicated bandwidth at wire speed meets this need.
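
A rough check of that claim, with an assumed user count, shows why shared media cannot sustain video:

    users = 30                 # assumed number of people sharing the segment
    shared_mbps = 10           # average shared 10Mbps Ethernet
    video_mbps = 1.5           # continuous bandwidth a video stream demands

    fair_share = shared_mbps / users
    print(round(fair_share, 2))         # 0.33 Mbps per user
    print(fair_share >= video_mbps)     # False: the stream starves and looks jerky
    # On a dedicated switched port, each user gets the full link bandwidth,
    # which is what makes a sustained 1.5Mbps stream feasible.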

