Cisco Switching Black Book - Sean Odom, Hanson Nottingham

Chapter 4: LAN Switch Architectures

In Depth

Knowing the internal architectures of networking devices can be a great asset when you’re working with Cisco switches. Knowing how the internal components work together, as well as how Application−Specific Integrated Circuits (ASICs) and CPUs are used, can give you an advantage in determining what Cisco device will work best at every point in the network.

The Catalyst Crescendo Architecture

When you’re looking at the architecture of the switch, ASICs are among the most important components. ASICs are very fast and relatively inexpensive silicon chips that do one or two specific tasks faster than a processor can perform those same functions. These chips have some advantages over a processor but lack functions such as filtering and advanced management functions, and they have limited support for bridging modes. ASICs make today’s switches less expensive than processor−based switches. Processor−based switches are still available, but they are expensive and limited in the number of tasks they can take on and still maintain reliable and acceptable limits of throughput.

The Set/Clear command-based Command Line Interface (CLI) switches (also known as Crescendo Interface switches) found in the Cisco Catalyst 2900G, 5000, 5500, 6000, and 6500 series give the best example of how the Broadcast and Unknown Server (BUS), ASICs, arbiters, and logic units work inside the switch. Let's look at Figure 4.1, which shows a diagram of the ASICs and processors found inside a Cisco 5000 series switch. We'll examine these components and then look at several other ASICs used in more specialized or earlier-model Cisco Catalyst switches.

Figure 4.1: The architecture of the Cisco Catalyst 5000 series switch.

First, we need to look at the components involved: the ASICs, Catalyst processors, bus, and other units of logic. Let’s begin by examining each of the BUSs; then we will define the ASICs shown in Figure 4.1.

BUS

Every switch must have at least two interfaces. But what fun would just two be? Today's switches can have hundreds of ports. The BUS connects all these interfaces; it moves frames from one interface to another. All these frames require an arbitration process, using processors, ASICs, and logic units, to make sure data doesn't slip out the wrong port or ports.


Single BUS vs. Crossbar Matrix

A single-BUS architecture is pretty simple: one BUS connects all the ports together. This setup creates a bandwidth problem called a blocking architecture, or what the networking industry likes to call over-subscription. Over-subscription is a condition in which the total bandwidth of all the ports on the switch is greater than the capacity of the switching fabric, or backplane. As a result, data is held up at the port because the tunnel through the switch is too small. Examples of Cisco switches with a single-BUS architecture are the Cisco Catalyst 1900, 2820, 3000, and 5000 series.
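As a back-of-the-envelope check, the over-subscription ratio can be computed from the total port bandwidth and the fabric capacity. The figures below are purely illustrative assumptions, not the specifications of any particular Catalyst model:

```python
# Hypothetical figures for illustration only; real port speeds and
# backplane capacities vary by switch model and module mix.
PORT_SPEED_MBPS = 100        # one Fast Ethernet port
NUM_PORTS = 24               # ports sharing the single BUS
BACKPLANE_MBPS = 1200        # assumed switching-fabric capacity

total_port_bandwidth = PORT_SPEED_MBPS * NUM_PORTS   # 2400 Mbps offered
ratio = total_port_bandwidth / BACKPLANE_MBPS

# A ratio above 1.0 means the switch is over-subscribed (blocking):
# the ports can offer more traffic than the fabric can carry.
print(f"over-subscription ratio: {ratio:.1f}:1")
```

With these numbers the ports can offer twice what the fabric can carry, so frames must wait in port buffers during busy periods.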

A cross−bar matrix is used to solve the problems of a single BUS architecture by creating a multiple BUS architecture in which more than one BUS services the switch ports. In this architecture, the BUS can handle all the data the ports can possibly send—and more. It is sometimes referred to as a non−blocking architecture, and it requires a very sophisticated arbitration scheme.

Tip The switching fabric is the “highway” the data takes from the point of entry to the port or ports from which the data exits.

Each switch employs some kind of queuing method in order to solve blocking problems. An Ethernet interface may receive data when the port does not have access to the BUS. In this situation, the port has a buffer in which it stores the frame it receives until the BUS can process it. The switch then uses queuing to determine which buffered frame will be processed next. Let's look at the three queuing components: input queuing, output queuing, and shared buffering.

Input Queuing

Input queuing is the simpler of the two forms of queuing. The frame is held in the port's buffer until its turn comes to enter the bus. When the frame enters the bus, the exit port must be free to allow the frame to exit. If another frame is exiting that port, a condition called head-of-line blocking occurs: the frame is dropped because it was blocked by other data.
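The effect is easy to see in a toy model. In the sketch below (a simplification, not a model of any real arbiter), each input port offers only the frame at the head of its queue each cycle; a blocked head stalls everything behind it, even frames bound for idle outputs:

```python
from collections import deque

def run_cycles(input_queues, cycles):
    """Toy model of pure input queuing. Each cycle, every input port
    may send only the frame at the head of its queue; if that frame's
    output port is already taken this cycle, the whole queue stalls,
    even when frames behind the head are bound for idle outputs."""
    delivered = []
    for _ in range(cycles):
        busy_outputs = set()
        for q in input_queues:
            if q and q[0] not in busy_outputs:
                busy_outputs.add(q[0])
                delivered.append(q.popleft())
            # else: head-of-line blocking - the queue waits
    return delivered

# Two input ports; each frame is labeled with its destination output.
ports = [deque(["A", "B"]), deque(["A", "C"])]
done = run_cycles(ports, 1)
# In the first cycle only one frame reaches output A; the frame for
# idle output C is stuck behind the second port's blocked head.
print(done)
```

Only one frame gets through in the first cycle even though output C was free the whole time; that is head-of-line blocking.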

Output Queuing

Output queuing can be used with input queuing; it allows the frame to be buffered on the outbound port if other data is in the way. This resolves head-of-line blocking in the common case, but head-of-line blocking can still occur during a large burst of frames. The problem of large bursts can be resolved by using shared buffering. All the Cisco Catalyst switches (with the exception of the 1900 and 2820 series) use both input and output queuing.

Shared Buffering

Although there is no sure way to stop head-of-line blocking, shared buffering can be used in a switch as a safeguard. Shared buffering is a derivative of output queuing that gives every port access to one large buffer instead of a smaller, individual buffering space. Frames are placed in this shared buffer and extracted from it when they can be forwarded. This method is used on the 1900 and 2820 series of Cisco Catalyst switches.
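The idea can be sketched as one pool of buffer cells that all output queues draw from, so a burst toward one port can borrow capacity the other ports are not using. This is an illustrative model only; the cell size, pool size, and bookkeeping of a real switch differ:

```python
class SharedBuffer:
    """Sketch of a shared memory buffer: all ports draw frames cells
    from one common pool instead of each port owning a small private
    buffer. Sizes and structure are illustrative assumptions."""
    def __init__(self, total_cells):
        self.free = total_cells
        self.queues = {}            # output port -> list of frames

    def enqueue(self, out_port, frame):
        if self.free == 0:
            return False            # pool exhausted: frame is dropped
        self.free -= 1
        self.queues.setdefault(out_port, []).append(frame)
        return True

    def dequeue(self, out_port):
        q = self.queues.get(out_port)
        if not q:
            return None
        self.free += 1              # cell returns to the shared pool
        return q.pop(0)

buf = SharedBuffer(total_cells=4)
# A burst to one port can use cells no other port is using right now.
for i in range(4):
    buf.enqueue("port1", f"frame{i}")
```

A fixed per-port scheme with one cell per port would have dropped three of these four frames; the shared pool absorbs the whole burst.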

ASICs

The ASICs shown in Figure 4.1 are used in the Catalyst 5000 series Supervisor Engine and an Ethernet Module. Let’s take a look at each:

∙ Encoded Address Recognition Logic (EARL) ASIC

∙ Encoded Address Recognition Logic Plus (EARL+) ASIC

∙ Synergy Advanced Interface and Network Termination (SAINT) ASIC

∙ Synergy Advanced Multipurpose Bus Arbiter (SAMBA) ASIC

EARL ASIC

The Encoded Address Recognition Logic (EARL) ASIC performs functions that are very similar to those of the Content Addressable Memory (CAM) table. Switches use this CAM to make filtering and forwarding decisions. The EARL ASIC connects directly to the data switching bus, allowing the ASIC access to all the frames that cross the switching fabric. The switch makes forwarding decisions based on the destination Media Access Control (MAC) address.

Note The CAM table contains the MAC address of the interfaces connected to the port and the time the switch last read a frame from that source port and address. The CAM table receives updated information by examining frames it receives from a segment; it then updates the table with the source MAC address from the frame.

The EARL ASIC aids in building a table containing all the information the switch has extracted from incoming frames. This information includes the source MAC address, the port of arrival, the virtual LAN (VLAN) membership of the port of arrival, and the time the frame was received. This table can contain up to 128,000 entries. Entries in the table are removed after the time to live (TTL) has expired. The default TTL at which entries are removed is 300 seconds; this time can be set from 1 to 20 minutes.
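The table the EARL builds behaves much like a dictionary keyed by MAC address, with entries stamped on learning and aged out after the TTL. A minimal sketch, with illustrative class and method names:

```python
import time

AGING_SECONDS = 300   # default TTL before an entry is removed

class CamTable:
    """Sketch of the EARL's address table: the source MAC is learned
    with its arrival port and VLAN, stamped with the arrival time,
    and aged out after the TTL (configurable from 1 to 20 minutes)."""
    def __init__(self, aging=AGING_SECONDS, max_entries=128_000):
        self.aging = aging
        self.max_entries = max_entries
        self.entries = {}   # mac -> (port, vlan, last_seen)

    def learn(self, mac, port, vlan, now=None):
        now = time.time() if now is None else now
        if mac not in self.entries and len(self.entries) >= self.max_entries:
            return                       # table full; skip learning
        self.entries[mac] = (port, vlan, now)

    def lookup(self, mac, now=None):
        now = time.time() if now is None else now
        entry = self.entries.get(mac)
        if entry is None:
            return None                  # unknown destination
        port, vlan, last_seen = entry
        if now - last_seen > self.aging:
            del self.entries[mac]        # TTL expired; entry removed
            return None
        return port, vlan

cam = CamTable()
cam.learn("00:10:7b:aa:bb:cc", port=3, vlan=10, now=0)
```

A lookup within the TTL returns the port and VLAN; after the TTL expires, the entry is gone and the switch must treat the destination as unknown again.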

The EARL ASIC tags each frame as it arrives at the switch before the frame is buffered. This tagging includes the source port’s identity, the VLAN, and a checksum. This tagging should not be confused with the tagging used in trunking for Inter−Switch Link (ISL) or 802.1Q, discussed in Chapter 5. The tagging the EARL places in the frame is removed before the frame exits the switch. The EARL ASIC’s placement is shown in Figure 4.2.

Figure 4.2: EARL ASIC placement on the Crescendo architecture.
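The internal tag the EARL prepends can be pictured as a small record carried alongside the frame. The field names, the tuple representation, and the toy checksum below are assumptions for illustration; the real tag format is proprietary:

```python
from dataclasses import dataclass

@dataclass
class InternalTag:
    """Sketch of the header the EARL places on a frame at ingress.
    Field names and the checksum are illustrative, not the real
    format, and this is not ISL or 802.1Q trunk tagging."""
    source_port: int    # identity of the arrival port
    vlan: int           # VLAN membership of the arrival port
    checksum: int       # integrity check over the frame

def tag_frame(frame: bytes, source_port: int, vlan: int):
    # The tag travels with the frame across the switching fabric.
    checksum = sum(frame) & 0xFFFF      # toy checksum for illustration
    return InternalTag(source_port, vlan, checksum), frame

def untag_frame(tagged):
    # The tag is verified and stripped before the frame exits.
    tag, frame = tagged
    assert sum(frame) & 0xFFFF == tag.checksum
    return frame

tagged = tag_frame(b"\x01\x02\x03", source_port=5, vlan=20)
```

The key point the model captures is that the tag exists only inside the switch: `untag_frame` removes it before the frame leaves, so attached devices never see it.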

EARL+ ASIC

The Encoded Address Recognition Logic Plus (EARL+) ASIC allows the EARL to support Token Ring line modules. The EARL+ ASIC is an enhancement to the EARL ASIC and is used on the Supervisor Engine III Module.

SAINT ASIC

The Synergy Advanced Interface and Network Termination (SAINT) ASIC allows a switch interface to support both half-duplex and full-duplex Ethernet. This ASIC has a second responsibility: it handles frame encapsulation and de-encapsulation and gathers statistics for trunked ports.

SAMBA ASIC

The Synergy Advanced Multipurpose Bus Arbiter (SAMBA) ASIC and the EARL ASIC work in tandem to let ports access the bus, thus allowing frames to be forwarded correctly. Both the Supervisor Engine and the installed line modules utilize this ASIC; it can support up to 13 separate line modules.


This ASIC operates in either master or slave mode. In master mode, the ASIC allows ports access to the bus based on a priority level of normal, high, or critical. In slave mode, each port must post a request to each SAMBA ASIC, negotiate local port decisions, and arbitrate requests with the Supervisor Engine’s SAMBA ASIC.
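Granting bus access by priority level can be sketched as a small priority queue. This models only the master-mode ordering described above, with arrival order breaking ties; it is an assumption-laden simplification, not the SAMBA's actual arbitration algorithm:

```python
import heapq

# Illustrative mapping of the three priority levels to sort keys
# (lower key = granted first).
PRIORITY = {"critical": 0, "high": 1, "normal": 2}

def grant_order(requests):
    """requests: list of (port, level) pairs in arrival order.
    Returns the ports in the order a master-mode arbiter might grant
    bus access: higher priority first, arrival order breaking ties.
    A sketch under stated assumptions, not the real scheme."""
    heap = [(PRIORITY[level], seq, port)
            for seq, (port, level) in enumerate(requests)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

order = grant_order([(1, "normal"), (7, "critical"),
                     (3, "high"), (2, "normal")])
print(order)
```

Here port 7's critical request jumps ahead of everything, port 3's high request goes next, and the two normal requests keep their arrival order.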

The Crescendo Processors

Although we have ASICs to do some of the hard work of the processors, processors still must be involved to handle the more dynamic administrative items. They carry the intelligence behind the frame−switching process. Inside the Crescendo Interface Internetwork Operating System (IOS) switches, the processors connect to a bus; the bus in turn connects to other ASICs and processors inside the switch. In the following sections, I will examine the processors listed here and their assigned functions. You will find these processors in the Crescendo Interface Catalyst switches:

∙ Line Module Communication Processor (LCP)

∙ Master Communication Processor (MCP)

∙ Network Management Processor (NMP)

LCP

The Line Module Communication Processor (LCP) is found on each line module in the switch. Its responsibility is to provide communications with the Master Communication Processor (MCP) located on the Supervisor Engine.

The LCP is an 8051 processor that boots automatically from read-only memory (ROM). Immediately upon boot-up, it forwards an information package called a Resetack to the MCP. The Resetack includes the switch's boot diagnostics and module information, which the MCP then forwards to the Network Management Processor (NMP).

MCP

The Master Communication Processor (MCP), sometimes called the Management Control Processor, uses a serial management bus to communicate between the NMP on the Supervisor Engine module and the LCP on each line card in the switch. The MCP also has a secondary job: testing and checking the configuration of local ports, controlling local ports, downloading runtime code, and performing continuous port diagnostics. It also handles diagnostics and obtains usage statistics for the on-board memory, ASICs, Local Target Logic (LTL), and Color Blocking Logic (CBL).

NMP

The Network Management Processor (NMP) is used to control the system hardware, configuration, switch management, the Spanning−Tree Protocol (STP) (discussed in Chapter 10), and diagnostic functions.

Crescendo Logic Units

Logic units provide logic−based forwarding by VLAN, MAC address, or port assignment. The Catalyst Crescendo Interface switches contain the following logic units:

∙ Arbiter (ARB)

∙ Local Target Logic (LTL)

∙ Color Blocking Logic (CBL)

∙ Remote Network Monitoring (RMON)

