Say what? QoS in English

Most of you are familiar with quality of service (QoS), but would you be able to explain the concept's complex technical terms to a colleague using plain English? You may need to attempt this challenging task soon if you haven't already.

QoS is a way to allocate resources in switches and routers so data gets to its destination quickly, consistently and reliably. As applications increasingly demand high bandwidth and low delay, QoS is becoming a top purchasing criterion for internetwork hardware buyers and a key way for vendors to differentiate their products.

However, it's not always easy to understand the product literature. QoS has its own language, and vendors use a baffling array of terms and concepts to describe how their products provide this capability.

Consider this example from a press release for Cisco Systems' Catalyst 8500 switch: "The switching fabric supports per-flow queuing, differentiated delay priorities using a weighted round-robin scheduler for delay-sensitive applications, and differentiated loss priorities for managing congestion as well as traffic policing and shaping. The fast packet memory embedded in the switching fabric is allocated dynamically on a per-queue (flow) basis. This dynamic allocation, used in conjunction with user-defined queue thresholds and configurable queue scheduling weights, ensures that time-sensitive traffic is handled properly with no packet loss."

OK, got that? If so, you can stop here. If not, keep reading for a crash course in the QoS lexicon that will make it easier for you to understand various QoS capabilities and compare products.

There are only a few ways to provide QoS in networks. The simplest approach is to throw bandwidth at the problem, which is known as overengineering the network. QoS can also be provided using features and capabilities such as data prioritization, queuing, congestion avoidance and traffic shaping. Policy-based networking will one day tie all these features together in an automated system that ensures end-to-end QoS.

Overengineering is the simplest and, arguably, most effective means of ensuring QoS in the LAN. Competitive pressure, new chip fabrication processes that pack more functions into a single application-specific integrated circuit (ASIC), and newfound manufacturing efficiencies let LAN switch vendors continually offer faster products at prices comparable to existing ones. So it's not likely that overengineering will be replaced by other QoS alternatives any time soon.

But if the QoS features under development for LAN switches could be deployed without costly hardware upgrades or complex changes to network management, network managers might be more inclined to consider implementing QoS systems rather than relying on overengineering.

Most likely, a combination of overengineering and QoS features will emerge as the solution of choice. Several vendors favor this approach, saying it's better to overengineer the network by throwing smart bandwidth, rather than raw bandwidth, at the problem. Of course, vendors are hardly objective: They use software capabilities such as QoS to differentiate their products.

In the WAN, overengineering is less practical because bandwidth there is far more expensive. Declining WAN bandwidth costs will make higher speeds more affordable, somewhat mitigating the need for QoS in the WAN. However, WAN bandwidth will remain a significant expense for most corporations, so overengineered WANs will never be as prevalent as overengineered LANs.

Setting priorities

In lieu of overengineering, data prioritization and queuing systems provide the most mainstream QoS tools available today. Routers have supported data prioritization and queuing for many years. Some new Gigabit Ethernet switches are designed to support data prioritization and queuing, but policy-based management software to fully harness the technology isn't yet available. These new switches include 3Com's CoreBuilder 3500 and 9000 and SuperStack II, Bay Networks' Accelar models, Cabletron Systems' SmartSwitch Router, and Cisco's Catalyst 5000 and 8000.

Data prioritization systems can be characterized as implicit or explicit. With implicit QoS, a router or switch automatically allocates service levels based on administrator-specified criteria, such as the type of application, protocol or source address. Every incoming packet is examined or filtered to see if it meets the specified criteria.
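
To make the filtering idea concrete, here's a minimal sketch of an implicit classifier in Python. The rule format and field names are illustrative assumptions, not any vendor's actual configuration syntax:

    # Hypothetical implicit-QoS classifier: the administrator defines
    # rules, and every incoming packet is matched against them in order.
    RULES = [
        # (field to examine, value to match, priority to assign)
        ("protocol", "telnet", "high"),    # interactive traffic first
        ("src_addr", "10.1.1.5", "high"),  # a privileged workstation
        ("protocol", "ftp", "low"),        # bulk transfers last
    ]

    def classify(packet):
        """Return the priority for a packet, defaulting to 'normal'."""
        for field, value, priority in RULES:
            if packet.get(field) == value:
                return priority
        return "normal"

    print(classify({"protocol": "ftp", "src_addr": "10.2.2.9"}))  # -> low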

Just about all routers support implicit QoS. Several switches are also designed to provide implicit QoS but offer only limited prioritization capabilities. For example, the switches can prioritize based on virtual LAN type and source or destination address rather than higher-level information such as application or protocol type. Emerging policy-based network systems will bring more robust prioritization capabilities to these switches.

Explicit QoS, in contrast, lets the user or application request a particular level of service, and switches and routers attempt to meet the request. IP Precedence, which uses the IP Type of Service (TOS) field, is likely to become the most widely used explicit QoS technique.

Part of the IP Version 4 protocol, IP TOS reserves a field in the IP packet header where delay, throughput and reliability service attributes can be specified. The latest version of Winsock in Windows 98 and NT lets applications set the field. With the exception of multimedia software, however, few popular applications support IP TOS.
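
On systems with a Unix-style sockets API, applications can reach the same field through the IP_TOS socket option. A minimal sketch in Python, assuming a platform that exposes the option (the 0x10 value is the classic "low delay" TOS bit from the IP specification):

    import socket

    # Ask the IP layer to mark this connection's outgoing packets with
    # the "low delay" TOS bit so TOS-aware routers can prioritize them.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x10)  # IPTOS_LOWDELAY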

The emerging Resource Reservation Protocol (RSVP) is more sophisticated than IP TOS. RSVP specifies its own signaling mechanism for communicating an application's QoS requirements to a router. Like IP TOS, RSVP has not been widely implemented by application vendors. And although some routers support RSVP, the protocol isn't considered mature enough for widespread deployment because of scalability concerns: RSVP imposes a significant processing load on routers and could degrade performance.

Implicit QoS is likely to remain more popular than explicit QoS for the foreseeable future. Implicit QoS doesn't require as much router processing. More important, any explicit QoS technique is a potential management nightmare. Given the chance, end users are likely to configure their software to ask for the best possible service level. Administrators would probably need to establish rules for users and perhaps even configure QoS on a per-user basis.

Queuing up

Once data is prioritized using implicit or explicit techniques, queues and queuing algorithms are used to provide the appropriate or desired QoS.

Queues, which are simply areas of memory within a router or switch, are set up to contain different priority packets. A queuing algorithm determines the order in which packets stored in the queues are transmitted. The idea is to give better service to high-priority traffic while ensuring, to varying degrees, that low-priority packets get some service.

If congestion occurs, the queuing system does not guarantee crucial data will reach its destination in a timely manner; it only ensures that high-priority packets will get there before low-priority packets.
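
A strict-priority scheduler, the simplest queuing algorithm, makes both points concrete. The sketch below is a generic Python illustration, not any vendor's implementation: high-priority queues are always drained first, but nothing prevents a low-priority packet from waiting indefinitely:

    from collections import deque

    # One FIFO queue per priority level, highest priority listed first.
    queues = {"high": deque(), "normal": deque(), "low": deque()}

    def enqueue(packet, priority):
        queues[priority].append(packet)

    def dequeue():
        """Transmit from the highest-priority nonempty queue. Nothing
        guarantees timely delivery: a low-priority packet simply waits
        until every higher queue is empty."""
        for priority in ("high", "normal", "low"):
            if queues[priority]:
                return queues[priority].popleft()
        return None  # all queues are empty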

More sophisticated QoS systems solve this problem with bandwidth reservation, which assigns prespecified amounts of bandwidth to individual queues or groups of queues. This ensures that bandwidth is always available for a high-priority queue; QoS is guaranteed unless the traffic in a queue exceeds its reserved bandwidth. If that happens, the algorithms usually allow unused bandwidth from low-priority queues to service high-priority traffic, and vice versa.

Basic queuing algorithms transmit packets from the same queue in first-in, first-out (FIFO) order. As a result, large frames associated with a high-priority file transfer may delay a transaction-processing application that passes small amounts of data, even though packets from both applications are classified as high priority.

More sophisticated queuing algorithms attempt to be fairer. For example, Cisco's weighted fair queuing (WFQ) differentiates between bandwidth-hogging applications and those that need less bandwidth, and distributes bandwidth fairly among all of them. Most router vendors have developed unique queuing algorithms and use their own terms to describe them.
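
Deficit round robin is one well-documented algorithm in this family and conveys the flavor of weighted fairness; the sketch below is a generic Python illustration, not Cisco's actual WFQ implementation. Each queue earns transmission credit in proportion to its weight, so an application sending small packets isn't starved by one sending large frames:

    from collections import deque

    class DeficitRoundRobin:
        """Serve queues in proportion to their weights, a common
        approximation of weighted fair queuing."""
        def __init__(self, quanta):
            self.queues = {name: deque() for name in quanta}
            self.quanta = quanta              # credit, in bytes, per round
            self.deficit = {name: 0 for name in quanta}

        def enqueue(self, name, packet):      # packet = (label, size)
            self.queues[name].append(packet)

        def run_round(self):
            sent = []
            for name, queue in self.queues.items():
                if not queue:
                    continue
                self.deficit[name] += self.quanta[name]
                while queue and queue[0][1] <= self.deficit[name]:
                    label, size = queue.popleft()
                    self.deficit[name] -= size
                    sent.append(label)
                if not queue:
                    self.deficit[name] = 0    # unused credit doesn't pile up
            return sent

    drr = DeficitRoundRobin({"bulk": 1500, "interactive": 300})
    drr.enqueue("bulk", ("big-frame", 1500))
    drr.enqueue("interactive", ("small-frame", 64))
    print(drr.run_round())  # both queues are served in the same round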

One fundamental limitation of today's routers and switches is the small number of queues the devices have for QoS. While four queues are common, additional ones would facilitate more granular prioritization and greater fairness. For example, administrators could establish a queue to give preference to high-priority packets that need to travel to a far-flung destination.

Per-flow queuing establishes queues on a per-flow basis, which means each user session gets its own queue. The architecture has been implemented in switches based on MMC Networks' Anyflow 5500 chip set, including the Cisco Catalyst 8500 and Arrowpoint Communications' Content Smart Switch. But the trade-off associated with increasing the number of queues is greater complexity, which drives up costs and complicates configuration and management.
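
In software terms, per-flow queuing amounts to keying the queue table on whatever identifies a session. The five-tuple used below is a common convention and an assumption here; the chip set's actual flow definition may differ:

    from collections import deque

    flow_queues = {}  # one queue per active user session

    def flow_id(packet):
        # A conventional five-tuple; real hardware may classify differently.
        return (packet["src_addr"], packet["dst_addr"], packet["protocol"],
                packet["src_port"], packet["dst_port"])

    def enqueue(packet):
        flow_queues.setdefault(flow_id(packet), deque()).append(packet)

The table, and the buffer and scheduling state behind it, grows with every active session, which is exactly the complexity trade-off described above.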

Clearing up congestion

Congestion control and avoidance mechanisms are other important aspects of QoS.

Congestion control allows end stations to throttle their transmission rates and slow traffic if the network drops packets. TCP/IP and SNA have supported congestion control for many years. By itself, congestion control does little to ensure QoS.
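
TCP is the classic example: a sender increases its rate gradually and cuts it sharply when a lost packet signals congestion. A greatly simplified sketch of that additive-increase, multiplicative-decrease behavior:

    def adjust_window(window, packet_lost, increase=1.0, decrease=0.5):
        """Additive increase, multiplicative decrease (AIMD), the idea
        behind TCP congestion control, greatly simplified."""
        if packet_lost:
            return max(1.0, window * decrease)  # back off hard
        return window + increase                # probe gently for bandwidth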

However, congestion control becomes more powerful when it's paired with congestion avoidance. Congestion avoidance in the TCP/IP world is relatively new, but is fast becoming a standard feature in ISP- and carrier-class routers.

Random early detection (RED) has emerged as the standard congestion avoidance method. In basic form, RED randomly drops packets as queues fill up, causing end stations to decrease their transmission rates so queues won't overflow. Weighted RED (WRED) improves on RED by dropping packets based on IP TOS. Cisco's 7000 and 12000 series backbone routers and Bay's Backbone Node routers support RED and WRED, as will forthcoming ISP-class gigabit and terabit routers from start-up vendors such as Argon Networks, Inc., Avici Systems, Inc., Juniper Networks, Inc., NetCore Systems, Inc. and Nexabit Networks, Inc.
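
The textbook form of RED is easy to sketch. The thresholds and maximum drop probability below are illustrative values, not any router's defaults; WRED applies the same curve but with different parameters per IP TOS class:

    import random

    MIN_THRESH = 20    # packets: below this average depth, never drop
    MAX_THRESH = 80    # packets: at or above this, always drop
    MAX_DROP_P = 0.1   # drop probability as the queue nears MAX_THRESH

    def red_should_drop(avg_queue_depth):
        """Classic RED: the drop probability rises linearly between the
        two thresholds, nudging senders to slow down before the queue
        actually overflows."""
        if avg_queue_depth < MIN_THRESH:
            return False
        if avg_queue_depth >= MAX_THRESH:
            return True
        p = MAX_DROP_P * (avg_queue_depth - MIN_THRESH) / (MAX_THRESH - MIN_THRESH)
        return random.random() < p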

Shaping up packets

Traffic shaping refers to a variety of techniques for manipulating and modifying data to help ensure QoS, such as packet segmentation. One reason ATM networks provide high QoS is their use of small, fixed-size packets, or cells: a cell waiting for an outbound link is delayed at most the time it takes to transmit one other cell, because no long frame can monopolize the wire.

Borrowing from ATM, router and switch vendors are adding segmentation capabilities to their products. Cisco's 12000 series routers internally segment packets into 64-byte cells as they cross the backplane, which helps ensure consistent QoS within the router. Several frame relay equipment vendors segment packets for transmission over WAN links as a means of ensuring predictable delivery and minimal delay.
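
The mechanics of segmentation are simple; the QoS benefit comes from the fixed size, which bounds how long any single unit can occupy the data path. A sketch in Python (the 64-byte size matches the figure cited above; zero-padding the final cell is an assumed implementation detail):

    CELL_SIZE = 64  # bytes

    def segment(packet: bytes):
        """Split a packet into fixed-size cells; the last cell is padded
        so that every cell takes the same time to transmit."""
        cells = []
        for i in range(0, len(packet), CELL_SIZE):
            cells.append(packet[i:i + CELL_SIZE].ljust(CELL_SIZE, b"\x00"))
        return cells

    print(len(segment(b"x" * 1500)))  # a 1,500-byte frame -> 24 cells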

Traffic metering is another form of traffic shaping. Some protocols, such as AppleTalk, tend to transmit packets unevenly, which is sometimes known as creating trains of packets. Traffic metering spaces out the trains prior to transmission by temporarily storing packets in buffers, making sure the network isn't overloaded. Metering also can be used at the edge of a network to mitigate the effect of bursts.
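
The token bucket is the standard textbook algorithm for this kind of pacing; the article doesn't say which technique each vendor uses, so the sketch below is generic. Tokens accumulate at a configured rate, and a packet may leave only when the bucket holds enough tokens to cover its size, which spreads a train of packets into an even flow:

    import time

    class TokenBucket:
        """Generic token-bucket shaper, a standard pacing technique."""
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0    # token refill rate, bytes/second
            self.capacity = burst_bytes   # largest burst allowed through
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def try_send(self, packet_size):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_size <= self.tokens:
                self.tokens -= packet_size
                return True   # transmit now
            return False      # hold the packet in a buffer and retry later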

Putting it all together

No matter which QoS capabilities a switch or router implements, the device works in isolation to get data to its destination. A packet could proceed through the first few devices and links with no problem, and then encounter a congested link that prevents proper QoS from being provided. Because the devices the packet has already traversed function independently, they can't take steps to avoid the problem link.

But forthcoming policy-based management systems will ultimately tie all the QoS capabilities discussed above into one cohesive system to ensure end-to-end QoS.

Policy servers in conjunction with existing network monitoring and management software will monitor the network to determine optimum QoS settings and dynamically configure routers and switches.

The policy servers will also consult network directories such as Novell Directory Services to determine the appropriate service levels specific users and applications require. Policy servers and directories will typically use Lightweight Directory Access Protocol to communicate.

Policy servers still aren't available, but the products are expected to start shipping soon. Bay is scheduled to release a policy server next quarter based on its NetID TCP/IP address management software platform. Called Optivity Policy Services, the new product will work with Bay's Contivity Extranet Switches, Accelar Routing Switches and all Bay routers running BayRS routing software.

3Com also is expected by year-end to ship a stand-alone policy-based management system. Called PolicyPowered Networking, the product will work with 3Com's CoreBuilder 3500 and 9000 LAN backbone switches, SuperStack II LAN edge switches and PathBuilder WAN access and backbone switches.

Cisco is scheduled to debut CiscoAssure Policy Networking in the first quarter of 1999. The policy-based management system will support Cisco routers and Catalyst 5000 and 8000 LAN switches.

Cabletron pioneered the idea of policy-based management in 1994 when it announced SecureFast Virtual Networking. SecureFast wasn't successful in gaining market acceptance, but the technology will be incorporated in Cabletron's upcoming policy server. The company has not yet released a ship date for the new server.

It remains to be seen how easy to use and cost-effective these QoS systems will be. Proper network design is crucial to the success of your implementation. QoS is too costly and complex to implement everywhere, and even the most robust QoS capabilities can't overcome poor network design. In campus nets, it makes the most sense to implement QoS in the backbone. In WANs, QoS is better suited for the edge of the network.

Translating vendor-speak

Now that you know the QoS lexicon, the Cisco press release quoted at the beginning of this story should make more sense. The statement means the Catalyst 8500 switch can place every user session in its own queue and can use a queuing algorithm such as WFQ to provide the most appropriate service level to every flow. The switch also manages congestion and provides traffic policing and shaping.

This story, "Say what? QoS in English" was originally published by Network World.
