Our team of experts has composed this Cisco exam preparation guide to provide an overview of the Cisco Implementing Cisco IP Switched Networks (SWITCH) exam, along with study material, sample questions, practice exams, and ways to interpret the exam objectives, so that you can assess your readiness by identifying prerequisite areas of knowledge.


We recommend that you review the simulation questions and practice tests listed in this guide to determine what types of questions will be asked, and the level of difficulty that could be tested, in the Cisco CCNP Routing and Switching certification exam. The sections that follow review the Ethernet and LAN switching fundamentals on which the exam builds.

If the duplex is mismatched, collisions (typically late collisions, seen on the half-duplex side) will occur. These issues can be difficult to troubleshoot, as the network connection will still function, but will be excruciatingly slow. When autonegotiation was first developed, manufacturers did not always adhere to the same standard. This resulted in frequent mismatch issues and a sentiment of distrust toward autonegotiation. Though modern network hardware has alleviated most of the incompatibility, many administrators remain skeptical of autonegotiation and choose to hardcode all connections.

Another common practice is to hardcode server and datacenter connections, but to allow user devices to autonegotiate. Gigabit Ethernet, covered in the next section, provided several enhancements to autonegotiation, such as hardware flow control. Most manufacturers recommend autonegotiation on Gigabit Ethernet interfaces as a best practice.
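
As an illustration, the following Cisco IOS sketch shows both approaches on a Catalyst-style switch; the interface numbers and descriptions are placeholders, and exact syntax varies by platform and software version.

    ! Hardcode speed and duplex on a server- or datacenter-facing port.
    ! The device at the other end of the link must be configured identically.
    interface FastEthernet0/1
     description Server uplink (hardcoded)
     speed 100
     duplex full
    !
    ! Leave a user-facing Gigabit port at the default: autonegotiation.
    interface GigabitEthernet0/2
     description User access port (autonegotiate)
     speed auto
     duplex auto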

Gigabit Ethernet over twisted-pair uses all four pairs, and requires Category 5e cable for reliable performance. Gigabit Ethernet is backwards-compatible with the original Ethernet and Fast Ethernet. Gigabit Ethernet supports both half-duplex and full-duplex operation. Full-duplex Gigabit Ethernet effectively provides 2,000 Mbps of throughput. Each side of the cable is terminated using an RJ45 connector, which has eight pins. When the connector is crimped onto the cable, these pins make contact with each wire.

The wires themselves are assigned a color to distinguish them. For example, both Ethernet and Fast Ethernet use two wires to transmit and two wires to receive data, while the other four wires remain unused. For communication to occur, transmit pins must connect to the receive pins of the remote host. This does not occur in a straight-through configuration; the pins must be crossed over for communication to be successful. The crossover can be provided either by the cable or by an intermediary device, such as a hub or switch.

The hub or switch provides the crossover (or MDIX) function to connect transmit pins to receive pins. The pinout on each end of a straight-through cable must be identical. However, when connecting a host directly to another host (MDI to MDI), the crossover function must be provided by a crossover cable. A crossover cable is often required to uplink a hub to another hub, or to uplink a switch to another switch.

Modern devices can now automatically detect whether the crossover function is required, negating the need for a crossover cable. This functionality is referred to as Auto-MDIX, and is now standard with Gigabit Ethernet, which uses all eight wires to both transmit and receive.
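
On many Catalyst switches, Auto-MDIX is controlled per interface with the mdix auto command; a minimal sketch, with the interface number as a placeholder:

    interface FastEthernet0/3
     ! Automatic crossover detection; speed and duplex must be left to autonegotiate
     speed auto
     duplex auto
     mdix auto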

Auto-MDIX requires that autonegotiation be enabled. A rollover cable is often referred to as a console cable, and its sheathing is usually flat and light blue in color. To create a rollover cable, the pins are completely reversed on one end of the cable:

    Pin   Connector 1     Connector 2
     1    White-Orange    Brown
     2    Orange          White-Brown
     3    White-Green     Green
     4    Blue            White-Blue
     5    White-Blue      Blue
     6    Green           White-Green
     7    White-Brown     Orange
     8    Brown           White-Orange

In other words, pin 1 on one connector maps to pin 8 on the other, pin 2 to pin 7, and so on. Rollover cables can be used to configure Cisco routers, switches, and firewalls.

Power over Ethernet (PoE) allows a switch to supply power to a device across the Ethernet cable itself. This is especially useful in areas where installing separate power might be expensive or difficult. Power can be sent across either the unused pairs in a cable, or the data transmission pairs, which is referred to as phantom power. Gigabit Ethernet requires the phantom power method, as it uses all eight wires in a twisted-pair cable. PoE can be supplied using an external power injector, though each powered device requires a separate power injector. More commonly, a PoE-capable switch supplies power directly across its switch ports. The power supplies in the switch must be large enough to support both the switch itself and the devices it is powering.
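
As a sketch, PoE on a Catalyst-style switch is typically controlled per port with the power inline command; the interface number below is a placeholder, and defaults vary by platform.

    interface FastEthernet0/10
     description IP phone or wireless access point
     ! Supply power only if a powered device is detected
     power inline auto

    ! From privileged EXEC mode, verify the power budget and per-port power draw:
    show power inline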

Layered Communication

Network communication models are generally organized into layers. The OSI model specifically consists of seven layers, with each layer representing a specific networking function. These functions are controlled by protocols, which govern end-to-end communication between devices. As data is passed from the user application down the virtual layers of the OSI model, each of the lower layers adds a header (and sometimes a trailer) containing protocol information specific to that layer.

These headers, together with the data they carry, are called Protocol Data Units (PDUs), and the process of adding these headers is referred to as encapsulation. For example, switches are generally identified as Layer-2 devices, as switches process information stored in the Data-Link header of a frame (such as MAC addresses in Ethernet).

Similarly, routers are identified as Layer-3 devices, as routers process logical addressing information in the Network header of a packet (such as IP addresses). However, the strict definitions of the terms switch and router have blurred over time, which can result in confusion. For example, the term switch can now refer to devices that operate at layers higher than Layer-2. This will be explained in greater detail in this guide. Hubs can also be referred to as repeaters. Hubs provide no intelligent forwarding whatsoever.

Hubs are incapable of processing either Layer-2 or Layer-3 information, and thus cannot make decisions based on hardware or logical addressing. Thus, hubs will always forward every frame out every port, excluding the port originating the frame. Hubs do not differentiate between frame types, and thus will always forward unicasts, multicasts, and broadcasts out every port but the originating port.

Ethernet hubs operate at half-duplex, which allows a device to either transmit or receive data, but not simultaneously. Host devices monitor the physical link, and will only transmit a frame if the link is idle. However, if two devices transmit a frame simultaneously, a collision will occur. If a collision is detected, the hub will discard the frames and signal the host devices. Both devices will wait a random amount of time before resending their respective frames. Remember, if any two devices connected to a hub send a frame simultaneously, a collision will occur.

Thus, all ports on a hub belong to the same collision domain. A collision domain is simply defined as any physical segment where a collision can occur. Multiple hubs that are uplinked together still all belong to one collision domain. Increasing the number of host devices in a single collision domain will increase the number of collisions, which can significantly degrade performance.

Hubs also belong to only one broadcast domain — a hub will forward both broadcasts and multicasts out every port but the originating port. A broadcast domain is a logical segmentation of a network, dictating how far a broadcast or multicast frame can propagate. Only a Layer-3 device, such as a router, can separate broadcast domains. Layer-2 forwarding was originally referred to as bridging. Bridging is a largely deprecated term (mostly for marketing purposes), and Layer-2 forwarding is now commonly referred to as switching.

There are some subtle technological differences between bridging and switching. Switches usually have a higher port density, and can perform forwarding decisions at wire speed due to specialized hardware circuits called ASICs (Application-Specific Integrated Circuits). Otherwise, bridges and switches are nearly identical in function.

Ethernet switches build MAC-address tables through a dynamic learning process. A switch behaves much like a hub when first powered on. The switch will flood every frame, including unicasts, out every port but the originating port. Switches always learn from the source MAC address. A switch is in a perpetual state of learning. However, as the MAC-address table becomes populated, the flooding of frames will decrease, allowing the switch to perform more efficient forwarding decisions.
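
The learned addresses can be inspected and cleared from the CLI; a brief example (on older IOS software the command is show mac-address-table):

    ! Display dynamically learned MAC addresses and the ports they were learned on
    show mac address-table dynamic

    ! Remove dynamically learned entries, forcing the switch to relearn (and flood) again
    clear mac address-table dynamic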

Each individual port on a switch belongs to its own collision domain. Thus, switches create more collision domains, which results in fewer collisions. Like hubs though, switches belong to only one broadcast domain.

A Layer-2 switch will forward both broadcasts and multicasts out every port but the originating port. Only Layer-3 devices separate broadcast domains. Because of this, Layer-2 switches are poorly suited for large, scalable networks. The Layer-2 header provides no mechanism to differentiate one network from another, only one host from another. This poses significant difficulties. If only hardware addressing existed, all devices would technically be on the same network. Modern internetworks like the Internet could not exist, as it would be impossible to separate my network from your network.

Imagine if the entire Internet existed purely as a Layer-2 switched environment. Switches, as a rule, will forward a broadcast out every port. Even with a conservative estimate of a billion devices on the Internet, the resulting broadcast storms would be devastating. The Internet would simply collapse. Both hubs and switches are susceptible to switching loops, which result in destructive broadcast storms. The Spanning Tree Protocol (STP) was developed to prevent such loops; STP is covered in great detail in another guide.

At one time, switches were more expensive than hubs and, due to processing overhead, introduced more latency, but this is no longer the case. Switches support three forwarding methods; each method copies all or part of the frame into memory, providing different levels of latency and reliability. Latency is delay; less latency results in quicker forwarding. The Store-and-Forward method copies the entire frame into memory and performs a Cyclic Redundancy Check (CRC) to completely ensure the integrity of the frame. However, this level of error-checking introduces the highest latency of any of the switching methods. The Cut-Through method copies only enough of the frame to read the destination MAC address, generally the first 6 bytes following the preamble.

This method allows frames to be transferred at wire speed, and has the least latency of any of the three methods. No error checking is attempted when using the cut-through method.


The Fragment-Free (Modified Cut-Through) method copies only the first 64 bytes of a frame for error-checking purposes. Most collisions or corruption occur in the first 64 bytes of a frame. Fragment-Free represents a compromise between reliability (store-and-forward) and speed (cut-through). Routers forward packets based on routes to destination networks in a routing table. It is possible to have host routes (routes to a single, specific host), but this is less common.

Each individual interface on a router belongs to its own collision domain. Thus, like switches, routers create more collision domains, which results in fewer collisions. Unlike Layer-2 switches, Layer-3 routers also separate broadcast domains. As a rule, a router will never forward broadcasts from one network to another network (unless, of course, you explicitly configure it to).
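
A common example of explicitly configured broadcast forwarding is DHCP relay. A hedged IOS sketch follows, with invented addresses and interface names; the ip helper-address command forwards certain UDP broadcasts, DHCP requests among them, as unicasts to a server on another network.

    interface GigabitEthernet0/0
     description LAN facing the DHCP clients
     ip address 192.168.10.1 255.255.255.0
     ! Relay client broadcasts (such as DHCP requests) to the server at 10.1.1.50
     ip helper-address 10.1.1.50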

Multicast is covered in great detail in another guide. Traditionally, a router was required to copy each individual packet to its buffers, and perform a route-table lookup.

Each packet consumed CPU cycles as it was forwarded by the router, resulting in latency. Thus, routing was generally considered slower than switching. It is now possible for routers to cache network-layer flows in hardware, greatly reducing latency. This has blurred the line between routing and switching, from both a technological and a marketing standpoint. Caching network flows is covered in greater detail shortly.

Broadcast Domain Example

Consider the above diagram. By default, a switch will forward both broadcasts and multicasts out every port but the originating port.
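
One way to segment a single switch into multiple broadcast domains is with VLANs (Virtual LANs), introduced next. A minimal IOS sketch, with arbitrary VLAN numbers, names, and interfaces:

    ! Create a VLAN and name it
    vlan 10
     name USERS
    !
    ! Place an access port into the VLAN
    interface FastEthernet0/5
     switchport mode access
     switchport access vlan 10
    !
    ! A trunk carries traffic for multiple VLANs; each frame is tagged (802.1Q)
    interface GigabitEthernet0/1
     switchport trunk encapsulation dot1q
     switchport mode trunk

The trunk encapsulation command is required only on platforms that also support ISL; on switches that support only 802.1Q it can be omitted.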

VLANs are covered in extensive detail in another guide. VLAN tags are inserted into the Layer-2 header. Many older modular switches support Layer-3 route processors — this alone does not qualify as Layer-3 switching. Layer-2 and Layer-3 processors can act independently within a single switch chassis, with each packet requiring a route-table lookup on the route processor.

For the first packet of a particular traffic flow, the Layer-3 switch will perform a standard route-table lookup. This flow is then cached in hardware — which preserves required routing information, such as the destination network and the MAC address of the corresponding next-hop. Subsequent packets of that flow will bypass the route-table lookup, and will be forwarded based on the cached information, reducing latency.
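
On a Layer-3 switch this is typically enabled with global IP routing and one switched virtual interface (SVI) per VLAN; a minimal sketch, with illustrative addresses and VLAN numbers:

    ! Enable routing between VLANs on the switch
    ip routing
    !
    ! Each SVI acts as the default gateway for its VLAN (broadcast domain)
    interface Vlan10
     ip address 192.168.10.1 255.255.255.0
     no shutdown
    interface Vlan20
     ip address 192.168.20.1 255.255.255.0
     no shutdown

The first packet between the VLANs is handled by the route processor; the rest of the flow is then forwarded in hardware.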

This concept is known as route once, switch many. The switch will then cache that IP traffic flow, and subsequent packets in that flow will be switched in hardware. Recall that when two hosts on a shared half-duplex segment transmit simultaneously, a collision occurs. Both devices will then wait a random amount of time before resending their respective frames, to reduce the likelihood of another collision. This is controlled by a backoff timer process.

For collision detection to work correctly, a host must still be transmitting when the collision occurs, so that it can detect it and retransmit. This is accomplished using a consistent slot time: the time required to send a specific amount of data from one end of the network and back, measured in bits. A host must continue to transmit a frame for a minimum of the slot time. In a properly configured environment, a collision should always occur within this slot time, as enough time has elapsed for the frame to have reached the far end of the network and back, and thus all devices should be aware of the transmission.

The slot time effectively limits the physical length of the network — if a network segment is too long, a host may not detect a collision within the slot time period. A collision that occurs after the slot time is referred to as a late collision. For 10 and 100 Mbps Ethernet, the slot time was defined as 512 bits, or 64 bytes. At 10 Mbps, 512 bits take 51.2 microseconds to transmit; at 100 Mbps, 5.12 microseconds.

Note that this is the equivalent of the minimum Ethernet frame size of 64 bytes; the slot time actually defines this minimum. For Gigabit Ethernet, the slot time was defined as 4,096 bits, or 512 bytes. Full-duplex operation allows a device to transmit and receive simultaneously; this effectively doubles the throughput of a network interface. Collisions should never occur on a functional full-duplex link. Greater distances are supported when using full-duplex over half-duplex.

Full-duplex is only supported on a point-to-point connection between two devices. Thus, a bus topology using coax cable does not support full-duplex. Only a connection between two hosts, or between a host and a switch, supports full-duplex. A host connected to a hub is limited to half-duplex. Both hubs and half-duplex communication are mostly deprecated in modern networks.

Categories of Ethernet

The original IEEE 802.3 standard has been revised and amended many times. These revisions, or amendments, are identified by a letter appended to the standard (802.3u, for example). Ethernet communication is baseband, which dedicates the entire capacity of the medium to one signal or channel.

In broadband, multiple signals or channels can share the same link through the use of modulation (usually frequency modulation). However, the term Ethernet traditionally referred to the original 10 Mbps standard. Ethernet supports coax, twisted-pair, and fiber cabling. Ethernet over twisted-pair uses two of the four pairs. Remember, only a connection between two hosts, or between a host and a switch, supports full-duplex. The maximum distance of an Ethernet segment can be extended through the use of a repeater. A hub or a switch can also serve as a repeater. Fast Ethernet supports both twisted-pair copper and fiber cabling, and supports both half-duplex and full-duplex operation.

Fast Ethernet also introduced the ability to autonegotiate both the speed and duplex of an interface. Autonegotiation will attempt to use the fastest speed available, and will attempt to use full-duplex if both devices support it. Speed and duplex can also be hardcoded, preventing negotiation. The configuration must be consistent on both sides of the connection. Either both sides must be configured to autonegotiate, or both sides must be hardcoded with identical settings. Otherwise, a duplex mismatch can occur.
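
Whichever approach is used, the negotiated or configured values can be verified from the CLI; a brief sketch (output not shown, and the interface number is a placeholder):

    ! Summary of speed, duplex, and status for every port; an "a-" prefix
    ! (for example a-full or a-100) indicates a value set by autonegotiation
    show interfaces status

    ! Detailed statistics for a single port
    show interfaces FastEthernet0/1

If one side reports half-duplex while the other reports full-duplex, or late collisions and input errors climb on only one side of the link, a duplex mismatch is the likely cause.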
