For security reasons, you should change the default password and add an enable password on the Crescendo and IOS CLI-based interface switches. In the next stage of the configuration, you should assign an IP address, subnet mask, and default route to the route processor for routing and management purposes.
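
As an illustration, the following IOS-style sketch carries out these basic steps. The passwords, addresses, and interface are hypothetical examples only, and the exact commands depend on the platform; on set-based (Crescendo) switches, the equivalents are set password, set enablepass, set interface sc0, and set ip route default.

    line console 0
     password new-console-pass            ! replace the default console password
     login
    enable password new-enable-pass       ! add an enable password
    !
    interface vlan 1
     ip address 192.168.1.2 255.255.255.0 ! management IP address and subnet mask
    !
    ip route 0.0.0.0 0.0.0.0 192.168.1.1  ! default route on the route processor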

Once you have finished the preceding basic steps, you can connect the switch to the rest of the local network. You can use many different types of physical media, such as Ethernet, Fast Ethernet, and Gigabit Ethernet.

Switches have two types of connections: the connection to the switch console where you can initially configure the switch or monitor the network, and the connection to an Ethernet port on the switch.

Different classifications of switches permit the switches to be placed in different layers of the network architecture. Cisco prefers to use a hierarchical campus model for switches, to break down the complexity of the network.

Campus Hierarchical Switching Model

Cisco defines a campus as a group of buildings connected into an enterprise network of multiple LANs. A campus has a fixed geographic location and is owned and controlled by the same organization.

The campus hierarchical switching model, sometimes referred to as Cisco’s hierarchical internetworking model, has been widely deployed in switching environments. Even telephone companies have been adopting this model in their own switching environments—particularly recently, as they branch out as providers of Internet, Digital Subscriber Line (DSL), and other digital technologies. This model provides the maximum bandwidth to the users of the network while also providing Quality of Service (QoS) features, such as queuing.

Queuing

Queuing is a way of withholding bandwidth from one data process in order to guarantee bandwidth for another. You can define queuing priorities for different traffic types; these priorities can be used in many networking environments that require multiple high-priority queues, including Internet Protocol (IP), Internetwork Packet Exchange (IPX), and Systems Network Architecture (SNA) environments. Queues are applied dynamically, which means that when there is no congestion, traffic passes through the switch or router unimpeded and no bandwidth is withheld by the queues.

Queuing can be configured in a number of different ways, and traffic can be selected for queuing on a number of different criteria; Cisco comes out with new solutions frequently. Here are a few of the most frequently used and recommended ways to control traffic:

First in, first out (FIFO)—The queuing method most network administrators are familiar with. It provides buffering control, storing data traffic in buffers when congestion occurs on the network and releasing it as bandwidth becomes available. This type of queuing works well on LANs where a switch or router is the demarcation point between a high-speed link and a slower link.

Priority queuing (PQ)—Provides absolute preferential treatment, giving an identified type of data traffic higher priority than other traffic. This method ensures that critical data traffic traversing various links gets priority treatment over other types of data traffic. PQ also provides a faster response time than other methods of queuing. Although you can enable priority output queuing for any interface, it is best used for low-bandwidth, congested serial interfaces. Remember that PQ introduces extra overhead, which is acceptable for slow interfaces but may not be acceptable for high-speed interfaces.

Custom queuing (CQ)—Classifies traffic by a packet or application identifier. This type of queuing is different from PQ in that it assigns each traffic source to its own queue and gives each queue a configurable share of the bandwidth. The switch then services each queue in a round-robin fashion.

Weighted fair queuing (WFQ)—Allows for multiple queues so that no one queue can starve the others of bandwidth. WFQ is enabled by default on all serial interfaces that run at or below 2Mbps, except for those interfaces with Link Access Procedure, Balanced (LAPB), X.25, or Synchronous Data Link Control (SDLC) encapsulations (see the configuration sketch after this list).
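
As an illustration, here is a minimal IOS sketch of two of these methods applied to slow serial links. The interface names and list number are hypothetical examples, and the exact syntax varies by IOS release.

    priority-list 1 protocol ip high     ! PQ: treat IP traffic as high priority
    priority-list 1 default normal       ! everything else goes to the normal queue
    !
    interface Serial0
     priority-group 1                    ! attach priority list 1 to this congested link
    !
    interface Serial1
     fair-queue                          ! WFQ (the default at or below 2Mbps)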

Most networks fail when their design creates unstable network links, hardware failures, or routing loops. When a failure occurs in such a network and the network does not converge in time, it causes a major problem for network processes and users; to prevent this, redundancy must be built in.

When designing a network using the Cisco campus hierarchical switching model, you create redundancy; doing so aids in the case of a network failure by providing logical points at which to aggregate and summarize network traffic. This setup prevents a failure in one part of the network from affecting the entire enterprise network. This model divides the network into three distinct layers:

Access layer—The first layer, which is the first point of access for the end user interface. This layer passes traffic from the end user interface to the rest of the network. Security at this layer is port-based and provides verification of an authentic MAC address, local device security, and access lists.

Distribution layer—The second layer, which serves to aggregate the traffic of the Access layer, summarize it, and combine routes. This layer also processes data traffic and applies security and queuing policies, allowing data traffic to be filtered and providing a guarantee of bandwidth availability for certain traffic.

Core layer—Reads headers and forwards traffic as quickly as possible through the network. This is its only function. This layer needs to have high reliability and availability because any losses at this layer can greatly affect the rest of the network.

The Cisco campus hierarchical switching model is depicted in Figure 2.1.

Figure 2.1: The Cisco campus hierarchical switching model.

Access Layer

The Access layer provides some important functionality, such as shared bandwidth, switched bandwidth, Media Access Control (MAC) layer filtering, and microsegmentation. Two goals of this layer are to pass traffic to the network for valid network users and to filter traffic that is passed along.

The Access layer switch connects the physical wire from the end user interface, thereby providing the means to connect to the devices located on the Distribution layer. It provides connections to both the local LAN and remote devices. The Access layer is the entry point to the network. This layer makes security and policy decisions and becomes the logical termination point for virtual private networks (VPNs).
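
As an illustration of port-based security at this layer, the following sketch locks a switch port to a single known MAC address. The interface name and MAC address are hypothetical, the syntax shown is for IOS-based switches, and set-based switches use the equivalent set port security commands.

    interface FastEthernet0/1
     switchport mode access
     switchport port-security                             ! enable port security
     switchport port-security maximum 1                   ! allow only one MAC address
     switchport port-security mac-address 0000.0c12.3456  ! the authorized station
     switchport port-security violation shutdown          ! shut the port down on a violation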

Distribution Layer

The Distribution layer is the demarcation point between the Access and Core layers. This layer terminates network traffic that originates in the Access layer and then summarizes that traffic before passing it along to the Core layer. The Distribution layer also provides policy-based network connectivity, such as queuing and data termination.
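
As one example of summarization at the Distribution layer, a distribution-layer router can advertise a single summary route toward the Core rather than every Access layer subnet. This is a minimal sketch assuming EIGRP and hypothetical addressing; it is not taken from the book.

    router eigrp 100
     network 172.16.0.0
    !
    interface Serial0
     ! advertise one summary for all 172.16.x.0/24 Access layer subnets
     ip summary-address eigrp 100 172.16.0.0 255.255.0.0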
