
Are Cisco Flex Links the End of STP?

Posted: June 18th, 2010 | Filed under: Networking

Cisco Flex Links give network operators a simple, reliable, and more scalable method of layer 2 redundancy. The Spanning Tree Protocol (STP) is not destined for the scrap bin, but it will certainly fall out of favor in many enterprise networks.

Flex Links are a pair of layer 2 interfaces configured to act as backups of each other. Configuring Flex Links is very simple, but it's a manual process. Spanning tree, by contrast, can configure itself if you just enable it; the result is likely sub-optimal, but it works nonetheless. Flex Links require manual setup and layout of your layer 2 network. If you don't want to leave anything to chance, that manual control is exactly why Flex Links are preferable to STP.

The benefits of Flex Links include:

  • simplicity, which equals stability.
  • instant failover.
  • rudimentary load balancing capabilities, so one link isn’t wastefully idle.
  • load balancing works across switches in a stack, including port channels.

Flex Links’ primary operating mode is just like spanning tree: one on, one off. With per-VLAN spanning tree, a trunk port can have some VLANs enabled and some blocked at the same time, so on the surface it seems that STP is superior. In reality, you can configure Flex Links to load balance VLANs, and we’ll show you how shortly.

Configuration

Conceptually, you configure Flex Links by telling one link it's the active link, and another that it's the backup of that primary (active) one. Without VLAN load balancing configured, the backup link is completely disabled; if the active link goes down, the backup takes over.

[Figure: Flex Links Design Map]

For example, to configure port gi1/0/1 as an active link, and gi1/0/2 as the backup, you'd run:

Switch# configure terminal
Switch(config)# interface gigabitethernet1/0/1
Switch(config-if)# switchport backup interface gigabitethernet1/0/2

That’s all there is to configuring the basic mode, which gets you failover but no load balancing. Before talking about load balancing, let’s take a look at preemption and “mac address-table move update.”

Preemption

Preemption, that is, which port is preferred for forwarding traffic, is also configurable. It is most often used when the two links have differing bandwidth capacities. If you wish to ensure that port 1, a primary port with more bandwidth, returns to being the active link when it comes back up, you would set switchport backup interface preemption mode bandwidth and switchport backup interface preemption delay. The delay sets the amount of time (in seconds) to wait before port 1 is allowed to preempt port 2 and begin taking over traffic again.
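For example, here's a sketch of bandwidth-based preemption on our gi1/0/1 and gi1/0/2 pair; the 45-second delay is an arbitrary value you would tune for your network:

Switch# configure terminal
Switch(config)# interface gigabitethernet1/0/1
Switch(config-if)# switchport backup interface gigabitethernet1/0/2 preemption mode bandwidth
Switch(config-if)# switchport backup interface gigabitethernet1/0/2 preemption delay 45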

MAC Address-Table Move Update

Enabling the MAC address-table move update feature allows for rapid convergence when a primary link goes down and the backup takes over traffic forwarding duties. Without this feature enabled, neighboring switches may continue to forward traffic for a short time to a dead port, since they have learned MAC addresses associated with that link.

When move update is enabled, the switch containing Flex Links will broadcast an update packet to let other switches know what happened, and they will in turn un-learn the stale MAC address mappings.

On the switch with Flex Links, simply configure:

Switch(config)# mac address-table move update transmit

All switches, including ones with Flex Links, need to receive these updates. This is not enabled by default, so you’ll need to run the following command on all of your devices:

Switch(config)# mac address-table move update receive

To see the status and verify that “move update” is enabled, run: show mac address-table move update. Checking the status of your Flex Links is much the same: show interfaces [interface-id] switchport backup.
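For quick reference, with our example port substituted for the interface-id:

Switch# show mac address-table move update
Switch# show interfaces gigabitethernet1/0/1 switchport backup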

Load Balancing

Flex Links should be configured such that both ports are forwarding traffic at the same time. This way, you get load balancing in addition to redundancy. The limitation is that only one port can forward a given VLAN at a time. If we have VLANs 1-200, we need to choose which VLANs are forwarded primarily through which port. The simplest configuration, ignoring traffic requirements, would be for VLANs 1-100 to use port 1, and VLANs 101-200 to use port 2.

Before we get into configuring preferred VLANs, let's talk about multicast, which becomes an issue with this type of setup. If the switch joined a multicast group via an IGMP join passed on the active port, then when that port goes down, the switch will no longer receive multicast traffic for that group. The quick fix is to make both Flex Links always be part of learned groups, with the command: switchport backup interface gigabitethernet1/0/2 multicast fast-convergence.
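In context, on our example pair, that looks like:

Switch(config)# interface gigabitethernet1/0/1
Switch(config-if)# switchport backup interface gigabitethernet1/0/2 multicast fast-convergence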

Now, on to VLAN load balancing. It is quite easy; just specify which VLANs you prefer on which links:

Switch(config-if)# switchport backup interface gigabitethernet1/0/2 prefer vlan 101-200

If you have VLANs 1-200 on the switch, show interfaces switchport backup will show you:

Vlans Preferred on Active Interface: 1-100
Vlans Preferred on Backup Interface: 101-200

If a link goes down, VLANs that are preferred on that interface will be moved to the other link in the pair. Likewise, when a link returns to service, its preferred VLANs are blocked on the backup and returned to the preferred link.

Be sure to run show interfaces switchport backup detail to see the full status, including link speeds, preemption modes, and the MAC address-table move update status.

In summary, the simplicity of Flex Links makes them a better choice than the ubiquitous Spanning Tree Protocol for carrier and core enterprise networks. STP provides link-level redundancy too, but with Flex Links you get more control and better load balancing capabilities. That certainly means it takes longer to configure, since you are planning the layer 2 network manually, but when you need a stable, no-surprises link-layer network, Flex Links are definitely the way to go.



Cisco AutoQoS: VoIP QoS for Mere Mortals

Posted: May 8th, 2010 | Filed under: Networking

WANs often need Quality of Service (QoS) configured to ensure that certain traffic is classified as “more important” than other traffic. Until now, it took a serious Cisco guru to configure a network properly for VoIP if the network was at all bandwidth constrained. AutoQoS, a new IOS feature for Cisco routers, makes deploying VoIP easy, even on busy WAN links. In this article we’ll cover the basics, what AutoQoS does, and some of its limitations.

The first whack at AutoQoS came from Cisco recognizing the need to simplify VoIP traffic prioritization. VoIP is especially sensitive to latency, jitter, and loss, and users will notice problems. To ensure the best possible VoIP call, the network must ensure that lower priority traffic does not interfere with time-sensitive VoIP. AutoQoS can be enabled on both WAN links and Ethernet switches to automatically provide a nice best-practices-based template for VoIP prioritization. If you're lucky enough to have metro Ethernet service, like AT&T Ethernet service for example, you should contact your provider to find out if the QoS settings on your switches can be duplicated through theirs.

How it Works

QoS allows a router to classify which types of traffic are most important, and to ensure that that traffic is passed as quickly as possible. If necessary, other traffic will be queued until the higher priority traffic has had a chance to pass. Before a router can know when to queue versus when to attempt to pass all traffic, it must be configured with bandwidth settings for each link.

Configuring QoS on a Cisco router normally involves a complex series of interactions, which requires understanding not only the protocols, but also a router's strange way of associating policies. The basic steps, sketched in the example after this list, are:

  • Use an ACL to define which traffic gets matched
  • A class-map classifies matched traffic into classes
  • A policy-map assigns priorities to the classes
  • The policy-map is applied to the interface, which enables the processing of all packets through the ACL, class-map, and policy-map
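To make that concrete, here is a minimal, hypothetical sketch of the manual process for VoIP. The ACL number, class and policy names, RTP port range, and priority percentage are all illustrative, not a recommendation:

! 1. ACL matches the interesting traffic (here, a common RTP port range)
access-list 101 permit udp any any range 16384 32767
! 2. The class-map classifies matched traffic into a class
class-map match-all VOIP-TRAFFIC
 match access-group 101
! 3. The policy-map assigns a priority to the class
policy-map WAN-EDGE
 class VOIP-TRAFFIC
  priority percent 33
 class class-default
  fair-queue
! 4. Applying the policy-map to the interface enables the whole chain
interface Serial0
 service-policy output WAN-EDGE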

Each of these "maps" is quite complicated and prone to error. Most sites end up duplicating each other's effort, because the common problems, like VoIP, need the same QoS help everywhere.

Why AutoQoS

QoS configuration is not simple. It requires understanding the protocols your network interfaces are using, as well as the type of data you’re passing. To configure QoS for VoIP, for example, you must understand how VoIP works. In short, it requires a guru. If you’re like me, you literally giggled out loud the first time you encountered the word, “AutoQoS.”

AutoQoS enables any network administrator to just “turn on” a solid solution for ensuring VoIP is happy. VoIP is the pain point for most organizations, so that’s what Cisco focused on first, and that’s what we’re focusing on here. Given the limited scope of AutoQoS, it’s believable that it works well enough. In reality, QoS configurations generally classify many types of traffic, and then place a priority on each one.

The main benefit of AutoQoS is that administrator training is much quicker. It also means that VoIP deployments often go much more smoothly, and upgrading WAN links isn't usually required. Finally, AutoQoS creates templates that can be modified as needed and copied elsewhere for deployment.

Limitations

Before talking about how to enable AutoQoS, which is literally three commands, let’s talk about where this works best, and what’s required to use AutoQoS.

First and foremost, you can only configure AutoQoS on a few types of router interfaces. These interfaces include:

  • PPP or HDLC serial interfaces
  • ATM PVCs
  • Frame Relay (point-to-point links only)

Cisco Catalyst switches also support an AutoQoS command to prioritize Cisco VoIP phones, but you cannot use AutoQoS to prioritize generic VoIP protocols.

Next, there are some limitations with ATM sub-interfaces. If you have a low-speed ATM link (less than 768 Kbps), AutoQoS will only work on point-to-point sub-interfaces. Higher-speed ATM PVCs are fully supported, though. For standard serial links, AutoQoS is not supported on sub-interfaces at all. A quick litmus test to see whether AutoQoS will work on your desired interfaces is to verify that a service-policy configuration is supported on them. If not, you'll probably have to reconfigure some links.

AutoQoS will not work if an existing QoS configuration exists on an interface. Likewise, when you disable the AutoQoS configuration, any changes you may have made to the template after the initial configuration will be lost.

Bandwidth statements are used by AutoQoS to determine what settings it should use, so remember that after updating bandwidth statements in the future, you have to re-run the AutoQoS commands.
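For example, after upgrading a link you would remove and re-apply AutoQoS so it recalculates its settings. A sketch (the new bandwidth value is arbitrary, and note the caveat above about manual tweaks being lost):

interface Serial0
no auto qos voip
bandwidth 512
auto qos voip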

Making it Work

In the most standard situation, where VoIP isn’t performing as it was promised, the network admin can quickly save the day by running the following on the WAN interface:

interface Serial0
bandwidth 256
auto qos voip

If it's the local network that needs tuning, the following can be run on Catalyst switch interfaces (if running Enhanced Images); cisco-phone is for ports facing phones, while trust is for uplinks carrying already-marked traffic:

auto qos voip cisco-phone
auto qos voip trust

It really couldn't be easier than that. For the WAN example, we told the router that interface Serial0 has 256 Kbps of bandwidth, and to enable VoIP QoS. The switch example is similar, for Cisco phones.

The neat part about this is that AutoQoS is actually doing more than just generating a configuration for you and forgetting about it. If you run the command show auto qos interface serial0, you will see much more than just your standard old interface configuration. It will show that a Virtual Template "interface" has been created, and that a class is applied to the interface. The same output will also show you the configuration of the template and class-map, with an asterisk next to each entry that was generated by AutoQoS. It keeps track of what was done automatically so that you can learn what AutoQoS is doing. As mentioned previously, however, don't forget that removing the AutoQoS configuration will destroy all QoS settings on an interface, not just the ones that AutoQoS configured.

Finally, remember to enable QoS on both sides of a WAN link to truly prioritize VoIP packets. And even though AutoQoS is comparatively simple, read through the Cisco documentation before deploying it; the more prepared you are, the easier the deployment will be.

Cisco will hopefully continue this trend of providing Auto features for complicated but common tasks. AutoQoS for VoIP certainly enables a much larger audience to correctly deploy VoIP over a wide variety of networks.



Networking 101: Layer 2, Link and Spanning Tree

Posted: March 17th, 2010 | Filed under: Networking 101

What's more important than IP and routing? Well, layer 2 is much more important when it's broken. Many people don't have the Spanning Tree Protocol (STP) knowledge necessary to implement a resilient layer 2 network. A switch going down shouldn't prevent anyone from having connectivity, excluding the hosts that are directly attached to it. Before we can dive into Spanning Tree, you must understand the inner workings of layer 2.

Layer 2, the Data Link layer, is where Ethernet lives. We’ll be talking about bridges, switching, and VLANs with the goal of discovering how they interact in this part of Networking 101. You don’t really need to study the internals of Ethernet to make a production network operate, so if you’re inclined, do that on your own time.

Ethernet switches, as they're called now, began life as "bridges." The earliest bridges would read all Ethernet frames, and then forward them out every port except the one each frame came in on. They gained the ability to provide redundancy via STP, and they also began learning which MAC addresses lived on which ports. At that point, a bridge became a learning device: it would store a table of all MAC addresses seen on each port. When a frame needed to be sent, the bridge could look up the destination MAC address in the bridge table and know which port it should be sent out. The ability to send data to only the correct host was a huge advancement in switching; collisions became much less likely. If the destination MAC address wasn't found in the bridge table, the switch would simply flood the frame out all ports. That's the only way to find where a host actually lives the first time, so as you can see, flooding is an important concept in switching. It turns out to be quite necessary in routing too.

Important terminology in this layer includes:

Unicast segmentation : Bridges can limit which hosts hear unicast frames (frames sent to only one MAC address). Hubs would simply forward everything to everyone, so this alone is a huge bandwidth-saver.

Collision Domain : The segment over which collisions can occur. Collisions essentially don't happen any more, since every switch port is its own collision domain and modern links run full-duplex. If you see collisions on a port, someone accidentally negotiated half-duplex, or something else is very wrong.

Broadcast Domain : The segment over which broadcast frames are sent and can be heard.

A few years later, the old store-and-forward method of bridge operation was modified. New switches started looking only at the destination MAC address of the frame, and then sending it instantly. This was dubbed cut-through forwarding, presumably because frames cut through the switch much more quickly and with less processing. This implies a few important things: a switch can't check the CRC to see if the frame was damaged, which in turn means collisions needed to be made impossible.

Now, to address broadcast segmentation, VLANs were introduced. If you can't send a broadcast frame to another machine, it's not on your local network, and you will instead send the entire packet to a router for forwarding. That's what a Virtual LAN (VLAN) does, in essence: it makes more networks. On a switch, you can configure VLANs, and then assign a port to a VLAN. If host A is in VLAN 1, it can't talk to anyone in VLAN 2, just as if they lived on totally disconnected devices. Well, almost: if the bridge table overflows and the switch is having trouble keeping up, all data will be flooded out every port. This has to happen in order for communication to continue in these situations. This needs to be pointed out because many people believe VLANs are a security mechanism. They are not even close. Anyone with half a clue about networks (or with the right cracking tool in their arsenal) can quickly overcome the VLAN broadcast segmentation. In fact, a switch basically turns into a hub when it floods frames, spewing everyone's data to everyone else.

If you can't ARP for a machine, you have to use a router, as we already know. But does that mean you have to physically connect wires from a router into each VLAN? Not anymore; we have layer 3 switches now! Imagine, if you will, a switch that contains 48 ports. It has VLAN 1 and VLAN 2, and ports 1-24 are in VLAN 1, while ports 25-48 are in VLAN 2. To route between the two VLANs, you have basically three options. First, you can connect a port in each VLAN to a router, and point the hosts at the correct default gateway. In the new-fangled world of today, you can instead simply bring up two virtual interfaces, one in each VLAN. In Cisco land, those router interfaces would be called vlan1 and vlan2. They get IP addresses, and the hosts use the relevant interface as their gateway.
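On a Cisco layer 3 switch, that second option looks something like the sketch below; the addresses are made up for illustration, and on many platforms you also need ip routing enabled globally:

Switch(config)# interface vlan 1
Switch(config-if)# ip address 192.168.1.1 255.255.255.0
Switch(config-if)# interface vlan 2
Switch(config-if)# ip address 192.168.2.1 255.255.255.0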

The third way brings us to the final topic of the layer 2 overview. If you have multiple switches that need to contain the same VLANs, you can connect them together so that VLAN 1 on switch A is the same as VLAN 1 on switch B. This is accomplished with 802.1Q, which tags frames with a VLAN identifier as they leave the first switch. Cisco calls these links "trunk ports," and you can have as many VLANs on them as the switch allows (the 802.1Q tag is 12 bits, so 4096 IDs, of which 4094 are usable on most hardware). So, the third and final way to route between VLANs is to connect a trunk to a router, and bring up the appropriate interfaces for each VLAN. The hosts on VLAN 1, on both switch A and B, will have access to the router interface (which happens to be on another device) since they are all "trunked" together and share a broadcast domain.
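A minimal trunk configuration on a Catalyst switch looks something like this (the encapsulation command is only needed on platforms that also support ISL):

Switch(config)# interface gigabitethernet0/24
Switch(config-if)# switchport trunk encapsulation dot1q
Switch(config-if)# switchport mode trunk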

We've saved you from the standard "this is layer 2, memorize the Ethernet header" teaching method. To become a true guru you must know it, but to be a useful operator (something the cert classes don't teach you), simply understand how it all works. Join us next time for an exploration of the most interesting protocol in the world, Spanning Tree.



Networking 101: Understanding Layers

Posted: March 7th, 2010 | Filed under: Networking 101

Continuing our journey, it's time to take a trip up the OSI Reference Model, and learn what this mysterious thing is all about. The network stack is of great significance, but not so much that it's the first thing you should learn; the Networking 101 series has held off on the "layers" discussion for good reason. Many so-called networking classes will start by teaching you to memorize the name of every layer and every protocol contained within this model. Don't do that. Do realize that layers 5 and 6 can be completely ignored, though.

The International Organization for Standardization (ISO) developed the OSI (Open Systems Interconnection) model. It divides network communication into seven layers. Layers 1-4 are considered the lower layers, and mostly concern themselves with moving data around. Layers 5-7, the upper layers, contain application-level data. Networks operate on one basic principle: "pass it on." Each layer takes care of a very specific job, and then passes the data on to the next layer.

The physical layer, layer 1, is too often ignored in a classroom setting. It may seem simple, but there are aspects of the first layer that oftentimes demand significant attention. Layer one is simply wiring, fiber, network cards, and anything else that is used to make two network devices communicate. Even a carrier pigeon would be considered layer one gear (see RFC 1149). Network troubleshooting will often lead to a layer one issue. We can’t forget the legendary story of CAT5 strung across the floor, and an office chair periodically rolling over it leading to spotty network connectivity. Sadly, this type of problem exists quite frequently, and takes the longest to troubleshoot.

Layer two is Ethernet, among other protocols; we're keeping this simple, remember. The most important take-away from layer 2 land is that you should understand what a bridge is. Switches, as they're called nowadays, are bridges. They all operate at layer 2, paying attention only to MAC addresses on Ethernet networks. Fledgling network admins always seem to mix up layers two and three. If you're talking about MAC addresses, switches, or network cards and drivers, you're in the land of layer 2. Hubs live in layer 1 land, since they are simply electronic devices with zero layer 2 knowledge. Layer two will have its own section in Networking 101, so don't worry about the details for now; just know that layer 2 translates data frames into bits for layer 1 processing.

On the other hand, if you're talking about an IP address, you're dealing with layer 3 and "packets" instead of layer 2's "frames." IP is part of layer 3, along with some routing protocols and ARP (Address Resolution Protocol). Everything about routing is handled in layer 3. Addressing and routing are the main goals of this layer.

Layer four, the transport layer, handles messaging. Layer 4 data units are also called packets generically, but specific protocols have more specific names: "segments" in TCP and "datagrams" in UDP. This layer is responsible for getting the entire message, so it must keep track of fragmentation, out-of-order packets, and other perils. Another way to think of layer 4 is that it provides end-to-end management of communication. Some protocols, like TCP, do a very good job of making sure the communication is reliable. Some don't really care if a few packets are lost; UDP is the prime example.

And arriving at layer seven, we wonder what happened to layers 5 and 6. They're useless. A few applications and protocols live there, but for understanding networking issues, talking about them provides zero benefit. Layer 7, our friend, is "everything." Dubbed the "Application Layer," layer 7 is simply application-specific. If your program needs a specific format for data, you invent some format that you expect the data to arrive in, and you've just created a layer 7 protocol. SMTP, DNS, FTP, and so on are all layer 7 protocols.

The most important thing to learn about the OSI model is what it really represents. Pretend you're an operating system on a network. Your network card, operating at layers 1 and 2, will notify you when there's data available. The driver handles the shedding of the layer 2 frame, which reveals a bright, shiny layer 3 packet inside (hopefully). You, as the operating system, will then call your routines for handling layer 3 data. If the data has been passed up to you from below, you know that it's a packet destined for you, or a broadcast packet (unless you're also a router, but never mind that for now). If you decide to keep the packet, you will unwrap it and reveal a layer 4 segment. If it's TCP, the TCP subsystem will be called to unwrap it and pass the layer 7 data on to the application that's listening on the destination port. That's all!

When it's time to respond to the other computer on the network, everything happens in reverse. The layer 7 application will ship its data off to the TCP people, who will stick additional headers onto the chunk of data. In this direction, the data gets larger with each progressive step. TCP hands a valid TCP segment to IP, which gives its packet to the Ethernet people, who hand it off to the driver as a valid Ethernet frame. And then off it goes, across the network. Routers along the way will partially disassemble the packet to get at the layer 3 headers in order to determine where the packet should be shipped. If the destination is on the local Ethernet subnet, the OS will simply ARP for the computer instead of the router, and send it directly to the host.
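A simplified picture of what ends up on the wire after all that wrapping (sizes not to scale):

[ Ethernet frame [ IP packet [ TCP segment [ layer 7 application data ] ] ] ]

Each layer's header wraps everything above it, and the receiver peels the layers off in the reverse order they were added.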

Grossly simplified, sure; but if you can follow this progression and understand what's happening to every packet at each stage, you've just conquered a huge part of understanding networking. Everything gets horribly complex when you start talking about what each protocol actually does. If you are just beginning, please ignore all that until you understand what the complex stuff is trying to accomplish. It makes for a much better learning endeavor! In future Networking 101 articles we will begin the journey up the stack, examining each layer in detail by discussing the common protocols and how they work.



Networking 101: More Subnets, and IPv6

Posted: February 27th, 2010 | Filed under: Networking, Networking 101

What’s the point of creating subnets anyways? How do I remember those strange looking subnet masks? How the heck does this work with those crazy looking IPv6 addresses? This edition of Networking 101 will expand on the previous Subnets and CIDR article, in the interest of promoting a thorough understanding of subnetting.

An oft-asked question in networking classes is "why can't we just put everyone on the same subnet and stop worrying about routing?" The reason is very simple. Every time someone needs to talk, be it to a router or another host, they have to send an ARP request. There are also broadcast packets that aren't necessarily limited to ARP, which everyone hears. With only 254 hosts on a /24 subnet, the amount of broadcast traffic is fairly limited. It is important to keep this number low, because every time a packet destined for a specific host or a broadcast address is seen, the host must handle the packet. A hardware interrupt is generated, and the kernel of the operating system must read enough of the packet to determine whether or not it cares about it.

Broadcast storms happen at times, mainly because of layer 2 topology loops. We'll explain layer 2 topology issues in excruciating (actually, enlightening) detail in a future issue. When thousands of packets hit a computer at a time, even fast computers can become very slow. The kernel spends so much time handling interrupts that it doesn't have much left for "trivial" things like making sure your web browser process gets a chance to run. So that, my friends, is why subnets are very important. A subnet is also known as a broadcast domain, because it limits the number of broadcasts that you will hear.

The natural follow-up question normally involves a host's notion of a broadcast address and netmask. A host needs to know which computers are on the same subnet: those IP addresses can be spoken to directly, making a router unnecessary. When the netmask or broadcast address is incorrectly configured, you'll quickly find that some hosts are unreachable.

The most common erroneous configuration happens when someone configures an IP address without specifying the netmask and broadcast address. For some reason, most operating systems don't take the liberty of updating these things, even though one can be derived from the other. If you run 'ifconfig eth0 130.211.0.1 netmask 255.255.255.0' you might expect that everything is ready to go. Unfortunately, it's very likely that your broadcast address was left at a classful default such as 130.211.255.255 instead of the correct 130.211.0.255. Depending on the routers' configuration, this normally results in broadcast packets being dropped. Conversely, if the netmask is configured incorrectly, the computer won't know where the subnet starts and ends. If a computer thinks a host is on the same subnet when it actually isn't, it will attempt to ARP for it instead of for the router. Routers can be configured to handle this and answer on the host's behalf (called Proxy ARP), but normally the result is unreachable hosts.

Understand how the netmask is configured, to avoid this problem. Figuring out the network and broadcast address isn't very difficult when you remember that the netmask simply means "cover some bits," but deciphering netmask representation can induce a double-take. The netmask for a /24 network is 255.255.255.0, that's easy. But what does 255.255.240.0 mean? The best way to decipher it is to begin with the masked-off part. Comparing it to the /24, which had three octets fully masked, we see that 255.255.240.0 has two octets fully masked, and part of a third, so we know it's between a /16 and a /24. The first 16 bits are clearly part of the network portion. The third octet, 240, is 11110000 in binary: four bits masked and four left over, allowing 16 values beyond the mask (2^4 = 16). Those four masked bits, plus the 16 bits used for the first two octets, mean we're dealing with a /20!
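Written out bit by bit, the answer is easy to verify:

11111111.11111111.11110000.00000000 (255.255.240.0)

Twenty bits are set, so it's a /20.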

What about 1.0.0.0/255.255.255.248? We're definitely in a land smaller than the /24 subnet. If we look at the unmasked bits in the last octet, we can see that there are eight IP addresses available (256 − 248 = 8). Only 2^3 makes eight, so three bits are left for hosts and the other 29 bits belong to the network portion. This is a /29 network. Of course, the easy ones are pretty clear: 255.255.255.128 allows half as many host addresses in the last octet as a /24, so it's a /25.
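And the same check for the /29:

11111111.11111111.11111111.11111000 (255.255.255.248)

Twenty-nine bits are set, leaving three host bits and 2^3 = 8 addresses per subnet.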

On the topic of confusing netmasks, IPv6 addresses certainly have a place. The netmask isn’t really an issue–the same concept applies, just with larger numbers to remember. The real problem lies within the address representation itself; the IETF seemed to take pride in creating confusion. Typically an IPv6 address is represented in hex, or base-16. Our old friend IPv4 could represent an IP address in hex too, which would look like B.B.B.B for the address 11.11.11.11. Unfortunately, IPv6 isn’t quite that nice looking. To represent 128 bits, IPv6 normally breaks up the address into eight 16-bit segments.

An IPv6 address looks like: 2013:4567:0000:CDEF:0000:0000:00AD:0000. It does get a bit easier. Leading zeros in each quad can be dropped, and one contiguous run of all-zero quads can be collapsed to ::. Trailing zeros within a quad, however, must be shown (AD00 is not the same as AD). This is a bit confusing, but the rules always allow for a non-ambiguous address: leading zeros can always be removed, but the collapsing of contiguous zero quads can only happen once per address. The above address, fully shortened, looks like: 2013:4567:0:CDEF::AD:0. IPv6 provides 2^128 addresses, enough for roughly 6.7 x 10^23 addresses per square meter of the earth's surface.
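Step by step, applying the rules to the example address:

2013:4567:0000:CDEF:0000:0000:00AD:0000
→ drop leading zeros in each quad: 2013:4567:0:CDEF:0:0:AD:0
→ collapse the longest run of zero quads, once: 2013:4567:0:CDEF::AD:0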

If you remember the rules of binary, the address representation rules with IPv6, and a few simple subnets for reference, you’ll be Master of Subnets – the one who everyone asks for help.



Networking 101: Subnetting – Slice Up 32-bits

Posted: February 20th, 2010 | Filed under: Networking 101

Welcome to networking 101, edition two. This time around we’ll learn about subnets and CIDR, hopefully in a more manageable manner than some books present it.

But first, let's get one thing straight: there is no Class in subnetting. In the olden days, there were Class A, B, and C networks. These could only be divided up into equal parts, so VLSM, or Variable Length Subnet Masks, was introduced. The old Class C was a /24, B was a /16, and A was a /8. That's all you need to know about Classes. They don't exist anymore.

An IP address consists of a host and a network portion. Coupled with a subnet mask, you can determine which part is the subnet, how large the network is, and where the network begins. Operating systems need to know this information in order to determine what IP addresses are on the local subnet and which addresses belong to the outside world and require a router to reach. Neighboring routers also need to know how large the subnet is, so they can send only applicable traffic that direction. Divisions between host and network portions of an address are completely determined by the subnet mask.

Classless Inter-Domain Routing (CIDR), pronounced "cider," represents addresses using the network/mask style. What this really means is that an IP address/mask combo tells you a lot of information:

network part / host part
0000000000000000/0000000000000000

The above string of 32-bits represents a /16 network, since 16 bits are masked.

Throughout these examples (and in the real world), certain subnet masks are referred to repeatedly. They are not special in any way; subnetting is a simple string of 32 bits, masked by any number of bits. It is, however, helpful for memorizing and visualizing things to start with a commonly used netmask, like the /24, and work from there.

Let's take a look at a standard subnetting table, with a little extra information:

Subnet mask bits | Number of /24 subnets | Number of addresses | Bits stolen
/24              | 1                     | 256                 | 0
/25              | 2                     | 128                 | 1
/26              | 4                     | 64                  | 2
/27              | 8                     | 32                  | 3
/28              | 16                    | 16                  | 4
/29              | 32                    | 8                   | 5
/30              | 64                    | 4                   | 6
/31              | 128                   | 2                   | 7

Because of the wonders of binary, it works out that a /31 has two IP addresses available. Imagine the subnet: 2.2.2.0/31. If we picture that in binary, it looks like:

00000010.00000010.00000010.00000000 (2.2.2.0)
11111111.11111111.11111111.11111110 (/31 mask)

The mask is "masking" the used bits, meaning that those bits are used up for network identification. The number of host bits available for tweaking is equal to one. It can be a 0 or a 1. This results in two available IP addresses, just like the table shows. Also, for each additional bit used in the netmask (stolen from the host portion), you can see that the number of available addresses gets cut in half.

Let's figure out the broadcast address, network address, and netmask for 192.168.0.200/26. The netmask is simple: that's 255.255.255.192 (26 bits of mask means 6 bits for hosts, 2^6 is 64, and 256 − 64 is 192). You can find subnetting tables online that will list all of this information for you, but we're more interested in teaching people how to understand what's happening. The netmask tells you immediately that the only part of the address we need to worry about is the last byte: the broadcast address and network address will both start with 192.168.0.
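As a binary sanity check, the /26 mask written out is:

11111111.11111111.11111111.11000000 (255.255.255.192)

Twenty-six bits are set, leaving six host bits, exactly as calculated.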

Figuring out the last byte is a lot like subnetting a /24 network, but you don't even need to think about that if it doesn't help you. Each /26 network has 64 addresses. The networks run from .0 to .63, .64 to .127, .128 to .191, and .192 to .255. Our address, 192.168.0.200/26, falls into the .192 to .255 netblock. So the network address is 192.168.0.192/26. And the broadcast address is even simpler: 192 is 11000000 in binary. Take the last six bits (the bits turned "off" by the netmask), turn them "on", and what do you get? 192.168.0.255. To see if you got this right, now compute the network address and broadcast address for 192.168.0.44/26. (Network address: 192.168.0.0/26; broadcast 192.168.0.63.)
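The same answer, worked on the last octet in binary:

11001000 (.200) AND 11000000 (/26 mask) = 11000000 (.192, the network address)
11001000 (.200) OR  00111111 (host bits) = 11111111 (.255, the broadcast address)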

It can be hard to visualize these things at first, and it helps to start by making a table. Say you calculated that you want subnets with six hosts in each (eight addresses, counting the network and broadcast addresses, which can't be assigned to hosts); then you can start filling in the table. The following shows 2.2.2.0/29, 2.2.2.8/29, 2.2.2.16/29, and the final subnet, 2.2.2.248/29.

Subnet Number | Network Address | First IP  | Last IP   | Broadcast Address
1             | 2.2.2.0         | 2.2.2.1   | 2.2.2.6   | 2.2.2.7
2             | 2.2.2.8         | 2.2.2.9   | 2.2.2.14  | 2.2.2.15
3             | 2.2.2.16        | 2.2.2.17  | 2.2.2.22  | 2.2.2.23
32            | 2.2.2.248       | 2.2.2.249 | 2.2.2.254 | 2.2.2.255

In reality, you're much more likely to stumble upon a network where there are three /26s and the final /26 is divided up into two /27s. Being able to create the above table mentally will make things much easier.

That's really all you need to know. It gets a little trickier with larger subnets in the /16 to /24 range, but the principle is the same. It's 32 bits and a mask. Do, however, realize that there are certain restrictions governing the use of subnets. We cannot allocate a /26 starting at 10.1.0.32. If we utter the IP/mask of 10.1.0.32/26 to most operating systems, they will just assume we meant 10.1.0.0/26. This is because the /26 space requires 64 addresses, and a subnet must start at a natural bit boundary for the given mask. In the table above, what would 2.2.2.3/29 mean? It means you meant to say 2.2.2.0/29.
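Writing the last octet in binary shows why .32 can't start a /26:

00100000 (.32): the host portion (the last six bits) isn't all zeros
00000000 (.0), 01000000 (.64), 10000000 (.128), 11000000 (.192): the only valid /26 starting points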

Those tricky ones do demand a quick example. Remember how the number of IP addresses in a subnet gets halved each time you lengthen the mask by a bit? The same concept works in reverse. If we have a /25 that holds 128 addresses and give one bit back to the host portion, we now have a /24 that holds 256. Google for a "subnet table" to see the relationship between netmasks and network sizes all at once. If a /16 holds 65536 addresses, a /17 holds half as many, and a /15 holds twice as many. It's tremendously exciting! Practice, practice, practice. That's what it takes to understand how this works. Don't forget, you can always fall back to counting bits.

The next step, should you want to understand more about subnets, is to read up on some routing protocols. We’ll cover some of them soon, but in the next installment of Networking 101, we’re starting our trip up the OSI model.

