ATM is a monument to the hazards of speculating about the future direction of telecommunications networks. Less than a decade ago most industry pundits were confident that ATM was the service of the future. It would span from desktop to desktop, handling all kinds of media as streams of short cells. Data would flow seamlessly between endpoints without media conversion, delivering whatever bandwidth and service quality the application and the network negotiated. What happened to this vision? Several things. For one, ATM has proven to be unexpectedly complex and costly for a general-purpose protocol, especially considering the other alternatives. While developers were working on 155 Mbps ATM and trying to scale it down economically to 25 Mbps to the desktop, Ethernet did an end run. Fast Ethernet switching had three times the throughput at a fraction of the cost, and on its heels came gigabit Ethernet. They are cheap, fast, and easy to implement; so much so that ATM is left with no place in the local network.
Another factor was the ascent of IP. A major attraction of ATM is its ability to handle time-sensitive media with service quality equivalent to circuit switching. IP networks lacked QoS guarantees, and were not seamless between the local and the wide area. The IETF, with representation from users and manufacturers, directed its resources toward developing protocols that can provide QoS approaching the promise of ATM. Now, the major IXCs offer IP VPNs as their migration path from frame relay. Much of this backbone runs over ATM, but as we discuss in the next chapter, MPLS is gaining a foothold in the network core.
So what role does that leave for ATM? The answer is in the backbone for carriers and large enterprises, where ATM is alive and well. ATM is the service platform for frame relay, DSL, private line, IP, and carrier TDM switch interconnection. IP enhancements are threatening ATM in the backbone, but the standards for delivering quality over IP are still evolving, while QoS is inherent in ATM, and a considerable amount of ATM capacity is already operational. Although MPLS is coming on line to harden IP networks, the transition will take time. With the backbone network in mind as ATM’s turf, let us turn to a high-level understanding of ATM, its method of operation, and its classes of service.
ATM TECHNOLOGY
ATM is a multiplexing and switching technology that is also known as broadband ISDN (B-ISDN). The ISDN term may give a wrong impression in this context because it has connotations of circuit switching. While ATM can behave like circuit switching in that it is connection oriented and provides guaranteed capacity and constant latency, it has the topology of a packet network. ATM carries the information payload in short PDUs known as cells. The reason it is called asynchronous can be understood by contrasting it to TDM, in which information streams are assigned to fixed time slots. If an application has data to send, it must wait for its time slot, even though other slots are unused. TDM wastes media capacity, but it gains simplicity in the process because the time slot is identified by its bit position and multiplexing is simple and inexpensive. ATM can make use of this empty capacity by multiplexing data asynchronously into time slots and attaching headers to identify the data flow as a stream of cells.
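The TDM-versus-ATM contrast above can be sketched in a toy example. The stream contents below are invented: TDM wastes the empty slots it has reserved, while ATM fills its slots with labeled cells from whichever stream has traffic.

```python
# Two traffic streams over three slot times; None means "nothing to send."
tdm_streams = {"A": ["a1", None, "a2"], "B": [None, "b1", None]}

# TDM: each stream owns a fixed slot position in every frame, so empty
# slots are transmitted anyway and their capacity is wasted.
tdm_frames = list(zip(tdm_streams["A"], tdm_streams["B"]))

# ATM: only occupied slots are sent, and each cell carries a flow label
# (here the stream name) in place of TDM's implicit bit position.
atm_slots = [(flow, cell)
             for cells in zip(*tdm_streams.values())
             for flow, cell in zip(tdm_streams, cells)
             if cell is not None]
```

Here TDM transmits six slots to carry three units of data, while the ATM side carries the same data in three labeled cells.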
ATM cells are 53 octets long. Each cell has a five-octet header and a 48-octet payload. Note that the header, which is shown in Figure 35-1, contains a virtual path identifier (VPI) and a virtual channel identifier (VCI). These correspond to ATM’s two types of circuits: virtual paths and virtual channels. Virtual channels are analogous to virtual circuits. They are defined between endpoints, and share the bandwidth with other channels. Virtual paths are bundles of virtual channels as depicted in Figure 35-2. If two switches have many different virtual channels between them, they can bundle them into a virtual path connection.
ATM is optimized for multimedia traffic because of these unique characteristics:
-It provides multiple classes of service so the user can match the application to the required grade of service.
-It is scalable in link speeds from T1/E1 to OC-192 (10 Gbps).
-Switching is done in hardware, which results in low latency and minimal jitter.
-It supports virtual channels that are equivalent to circuit switching for time-sensitive traffic.
-It supports bandwidth on demand for bursty traffic.
-It is an international standard that is supported by a wide variety of equipment.
Connections between endpoints are either provisioned as PVCs or set up per session as SVCs. SVCs are set up with a signaling protocol and remain active for the duration of a session. In case of failure, SVCs can be dynamically rerouted. They are advantageous for direct connection between sites where the traffic volume is not sufficient to justify the cost of PVCs. With PVCs, each switch in the path must be individually provisioned. Also, the path is static, so a PVC lacks the resiliency of a connectionless service. ATM does, however, also include a connectionless service similar to SMDS.
Virtual channel connections (VCCs) are concatenations of virtual channels that carry a stream of cells in sequence over an end-to-end connection. When the virtual circuit is defined, the VCC control assigns the circuit to a VCI and a VPI. As the connection is set up through the switch serving a particular node, the switch must connect a VPI and VCI from an input port to a VPI and VCI on an output port. Figure 35-3 should help clarify this concept using simplified VPI and VCI values. The actual ATM VPI is 8 bits long and the VCI is 16 bits long. The VPI and VCI are selected at the switch to keep track of the connections, and have no end-to-end significance.
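The per-switch label swap just described can be sketched as a lookup table. The port numbers and VPI/VCI values below are illustrative, not taken from any real configuration.

```python
# Each switch holds a connection table built at setup time, mapping an
# inbound (port, VPI, VCI) to an outbound (port, VPI, VCI). The labels
# have meaning only on the link between two switches.
switch_table = {
    # (in_port, in_vpi, in_vci): (out_port, out_vpi, out_vci)
    (1, 5, 42): (3, 7, 19),
    (1, 5, 43): (2, 7, 20),
}

def forward_cell(in_port: int, vpi: int, vci: int):
    """Look up the connection and return the rewritten header values."""
    out_port, out_vpi, out_vci = switch_table[(in_port, vpi, vci)]
    return out_port, out_vpi, out_vci

# A cell arriving on port 1 labeled VPI 5 / VCI 42 leaves on port 3
# relabeled VPI 7 / VCI 19 for the next hop.
```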
Rationale for Fixed-Length Cells
ATM’s short cell is the key to its ability to handle time-sensitive traffic without excessive delay or jitter. Since the cell length is fixed, an ATM switch needs to look only at the VPI and VCI in the header to switch the cell to an output port. Because of the fixed-length cells, the performance of the network is more predictable than one based on variable-length frames, and buffers at the switch nodes are easier to manage. Furthermore, as the load increases, long packets cannot delay time-sensitive cells. The circuitry can be programmed into an ASIC with minimal processing compared to routing.
The short cell works well for voice and video, but for data a five-octet header represents almost a 10 percent overhead. Data networks make more efficient use of bandwidth by using long packets where the header length is insignificant compared to the payload. The 53-octet cell length is not magic. It arose through compromise, not engineering analysis. When ATM standards were being designed, the data faction wanted a payload size of 64 octets, while voice advocates held out for 32 octets. They split the difference.
Complexity and cost aside, data network engineers prefer TCP/IP with its variable-length packet. Long data packets increase throughput, but if multiple media share the network, the protocols must prevent long data packets from delaying short voice packets. If voice packets are forced to wait in queue while the router transmits long data packets, jitter increases to the point that it cannot be buffered out without exceeding latency objectives. This means time-sensitive packets must be tagged and prioritized, steps that are unnecessary in ATM. Data engineers refer to the ratio between the ATM header and payload as the “cell tax,” and the emphasis from the IETF is to use IP for all media. From an overhead efficiency standpoint, when an IP network carries voice, the header comprises a much higher portion of the total packet length than it does in ATM’s cell structure. An uncompressed VoIP signal has 44 octets of header for 160 octets of voice, or 21.5 percent overhead. The overall efficiency of the network depends on the type of traffic it is carrying. If time-sensitive traffic predominates, ATM is more efficient than IP, but TCP/IP supports higher throughput for data.
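The overhead figures quoted above follow from simple arithmetic, reproduced here: the ATM "cell tax" is 5 header octets per 53-octet cell, and the VoIP example is 44 octets of header on 160 octets of voice.

```python
def overhead(header_octets: int, payload_octets: int) -> float:
    """Header as a fraction of the total octets transmitted."""
    return header_octets / (header_octets + payload_octets)

atm_tax = overhead(5, 48)     # ~0.094, i.e. just under 10 percent
voip_tax = overhead(44, 160)  # ~0.216, the text's roughly 21.5 percent figure
```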
ATM Network Interfaces
An ATM network is composed of switches connected by high-speed links. The user connects to the network through the UNI. Two types of UNI are defined. A public UNI defines the interface between a public ATM network and a private ATM switch. A private UNI defines the interface between the user and a private or public ATM switch. ATM switches within the same network interconnect through NNIs. A public NNI defines the interface between public network nodes. A B-ISDN Inter-Carrier Interface (B-ICI) supports user services across multiple public carriers. Figure 35-4 illustrates these interfaces.
Private network-to-network interface (PNNI) is a routing protocol that enables different manufacturers’ ATM switches to be integrated into the same network. It is capable of setting up point-to-point and point-to-multipoint connections. PNNI automates routing table generation, which enables any ATM switch to automatically discover the network topology and determine a path to another switch. In determining the route, it uses such metrics as cost, capacity, delay, jitter, and active data such as peak and average load.
The headers are slightly different for NNIs and UNIs. Figure 35-1 shows the UNI header, which is identical to the NNI header except that the latter extends the VPI and omits the generic flow control field, which is sometimes used to identify multiple stations that share a single ATM interface. The first bit of the payload type field indicates whether the cell contains user or control data; if it is set to one, the cell carries control data. The cell loss priority bit indicates whether the cell can be discarded. The header error control (HEC) field checks the first four octets of the header for errors.
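A sketch of unpacking the five UNI header octets into the fields just described. The field widths follow the standard UNI layout (GFC 4 bits, VPI 8 bits, VCI 16 bits, payload type 3 bits, CLP 1 bit, HEC 8 bits); the example values are invented.

```python
def parse_uni_header(header: bytes) -> dict:
    """Unpack a 5-octet ATM UNI cell header into its fields."""
    assert len(header) == 5
    word = int.from_bytes(header[:4], "big")  # first 32 bits: GFC/VPI/VCI/PT/CLP
    return {
        "gfc": (word >> 28) & 0xF,         # generic flow control
        "vpi": (word >> 20) & 0xFF,        # virtual path identifier
        "vci": (word >> 4) & 0xFFFF,       # virtual channel identifier
        "payload_type": (word >> 1) & 0x7, # first bit: user vs. control data
        "clp": word & 0x1,                 # cell loss priority
        "hec": header[4],                  # header error control octet
    }

# Build an example header: VPI 5, VCI 42, user data, CLP 0, HEC 0.
cell = ((5 << 20) | (42 << 4)).to_bytes(4, "big") + b"\x00"
fields = parse_uni_header(cell)
```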
ATM Protocol Layers
Like all modern protocols, ATM is a layered protocol, but greatly simplified compared to others. Figure 35-5 shows the logical layers of ATM, all of which fit in the first two layers of the OSI model. Three planes (control, user, and management) span all layers. The control plane generates and manages signaling messages, the user plane manages data transfer, and the management plane handles overall operation of the protocol.
The physical layer is designed to operate over a variety of transmission facilities such as DS-1, DS-3, SONET/SDH, fiber, twisted pair, or even radio. This layer packages cells according to the requirements of the physical medium. The media-independent ATM layer multiplexes and demultiplexes cell streams onto the physical layer. The application fits on top of the ATM adaptation layer (AAL). The AAL allows ATM to statistically multiplex various traffic types. The AAL is divided into two sublayers: segmentation and reassembly (SAR) and convergence. The SAR segments the user’s data stream on outbound traffic and reassembles it inbound. The convergence sublayer protocols are different for the various types of information such as voice, video, and data. The AAL supports five classes of traffic as illustrated in Figure 35-6:
-Constant bit rate (CBR) for connection-oriented traffic such as uncompressed voice and video. This class provides low latency, jitter, and cell loss.
-Real-time variable bit rate (RT-VBR) for connection-oriented bursty traffic that requires close synchronization between the source and destination. Examples are packet video and compressed voice.
-Non-real-time variable bit rate (NRT-VBR) for connection-oriented traffic that is not sensitive to latency and packet loss. Examples are bursty data such as frame relay and LAN-to-LAN traffic where the application can recover from irregularities.
-Unspecified bit rate (UBR), which is a best-effort service where the application, such as e-mail and file transfer, does not require QoS.
-Available bit rate (ABR) uses whatever bandwidth is left over. It is also a best-effort, low-cost service, used for applications similar to those of UBR.
The solid block at the bottom of Figure 35-6 represents traffic the network is committed to handle, and which requires a constant amount of bandwidth. Above it is VBR traffic that the network is also committed to carry, but which varies in bandwidth because of the nature of the application. The bandwidth left over is available, and can be provided at a lower cost because it does not require a firm commitment from the carrier that it will be delivered within specified QoS parameters. UBR is a laissez-faire class of service providing best-effort delivery. It is less
complex to set up than the other service classes, and is therefore often what the carrier provides by default. It does not have the guaranteed QoS of CBR and VBR service, and cannot be depended on for these applications. Service providers may interpret the traffic classes differently, so it pays to understand the carrier’s definitions.
Corresponding to traffic classifications, the AAL is divided into four categories.
-AAL-1 is a connection-oriented service designed to meet CBR service requirements. It is intended for video, voice, and other CBR traffic. ATM transports CBR in a circuit-emulation mode. To preserve timing synchronization between endpoints, it must operate over a medium such as SONET/SDH that supports clocking.
-AAL-2 is for VBR applications such as compressed voice that depend on synchronization between endpoints, but do not have a constant data transmission speed. AAL-2 supports silence suppression, whereas AAL-1 transmits cells even during silence. AAL-2 supports both real-time and non-real-time traffic.
-AAL-3/4 supports both connection-oriented and connectionless data service. It is used by carriers to provide services such as SMDS.
-AAL-5 is for connection-oriented and connectionless data communications that do not require CBR or VBR stability, including services such as frame relay, LAN emulation (LANE), and multiprotocol over ATM (MPOA).
The AAL layer allows the network to provide different classes of service to meet the requirements of different types of traffic. AAL service differentiation is used only between the end systems and QoS is not based on the AAL designation of the cells. ATM switches can handle multiple sessions and classes of service simultaneously.
ATM Call Processing
When an ATM connection is set up, the calling station asks for a connection to the called station. The calling station and the network negotiate bandwidth and QoS classifications: the ATM network agrees to provide the QoS, and the station promises not to exceed the traffic parameters established during connection setup. Traffic management takes care of providing users with the QoS they requested, and enables the network to recover from congestion. When an
ATM endpoint connects to the network, it sends a traffic contract message that describes the data flow. The message contains such parameters as peak and average bandwidth requirements and burst size. The network uses traffic shaping to ensure that traffic fits within the bounds of the contract. ATM switches can enforce the contract by setting the cell-loss priority bit in the header for excess traffic. The switches can discard such traffic during congestion periods.
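The policing behavior described above can be sketched with a simple token bucket: cells that exceed the traffic contract are tagged (CLP bit set) rather than dropped outright, becoming discard candidates during congestion. This is a simplification standing in for the GCRA algorithm ATM switches actually use, and the rates are illustrative.

```python
class Policer:
    """Token-bucket sketch of contract enforcement at an ATM switch."""

    def __init__(self, rate_cells_per_s: float, bucket_depth: float):
        self.rate = rate_cells_per_s   # contracted sustainable cell rate
        self.depth = bucket_depth      # tolerance for short bursts
        self.tokens = bucket_depth
        self.last = 0.0

    def police(self, arrival_time: float) -> bool:
        """Return True if the cell conforms; False means mark CLP=1."""
        elapsed = arrival_time - self.last
        self.tokens = min(self.depth, self.tokens + elapsed * self.rate)
        self.last = arrival_time
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

p = Policer(rate_cells_per_s=10.0, bucket_depth=2.0)
# Cells arriving faster than the contracted rate start getting marked,
# then conformance returns once the source backs off.
results = [p.police(t) for t in (0.0, 0.01, 0.02, 0.03, 0.5)]
```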
ATM endpoints set up calls with a signaling-request message to their serving switch. This setup message contains a call reference number, addresses of called and calling parties, traffic characteristics, and a QoS indicator. Signaling messages are sent over the signaling AAL, which assures their delivery. The signaling messages are based on the Q.931 format, which consists of a header and a variable number of message elements. The destination returns a call proceeding message that contains the same call reference number plus a VPI/VCI identifier. A series of setup and call proceeding messages are exchanged while the network determines such matters as whether the called party is willing to accept the call. Switches use the PNNI protocol to discover the topology and link characteristics of the network. When a change such as a link failure occurs, PNNI communicates the event to all switches.
Public networks use an addressing system of up to 15 digits following E.164 standards. Private networks use a 20-octet address modeled after OSI network service access point (NSAP) addressing. The ATM layer maps network addresses to ATM addresses.
VOICE OVER ATM (VoATM)
The CBR and VBR traffic classes are designed for voice, video, and other time-sensitive traffic over ATM, collectively known as VoATM. Circuit emulation service (CES) enables circuits to be connected across an ATM network using CBR PVCs. It is intended for use by non-ATM devices such as PBXs or video codecs that need controlled bandwidth, end-to-end delay, and jitter just as if the devices were connected by a private line. CES, usually used with AAL-1, allows these variables to be specified at the time of call setup.
The AAL-1 specification provides two modes of operation, structured and unstructured. Unstructured CES extends all channels of a T1/E1 across the network in a single VC. The network does not look into the underlying channels of the T1/E1, but reproduces the data stream across the network without modification. This provides the same degree of stability as it would if sent over SONET/SDH, but this method is not bandwidth efficient because if some of the channels are idle, they are sent anyway. Structured CES, intended to emulate fractional T1/E1, splits the T1/E1 into multiple DS-0s, and transmits each one with a different VC. If the block size is greater than one octet, AAL-1 uses an internal pointer to delineate the block size. This enables the ATM network to minimize bandwidth by using only the timeslots that are actually needed and allows the endpoints to be different, which improves utilization. CES services can be set up as either synchronous, which assumes that each end is individually clocked from a reference clock, or asynchronous, in which clocking information is transported in ATM cells.
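The structured-versus-unstructured tradeoff above reduces to simple arithmetic: unstructured CES carries the whole T1 whether channels are idle or not, while structured CES carries only the DS-0s in use, at 64 kbps each. AAL-1 and ATM cell overhead are ignored here to keep the comparison simple.

```python
DS0_BPS = 64_000
T1_BPS = 1_544_000  # 24 DS-0s plus 8 kbps of framing

def structured_payload_bps(active_channels: int) -> int:
    """Payload bandwidth structured CES needs for the channels in use."""
    return active_channels * DS0_BPS

# Six active voice channels need 384 kbps of payload, not a full T1.
needed = structured_payload_bps(6)
```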
VoATM Signaling and Call Setup
ATM uses two types of signaling for CES, in-band channel-associated signaling (CAS), and out-of-band common-channel signaling (CCS). In the CCS model, the network transports the signaling transparently. A PVC carries the signaling from end to end and the stations select a PVC to carry the voice channel. The network itself is not involved in the signaling—it merely transports the signals from the endpoints over the PVC. This method, sometimes called the transport model, would be used when the network provides a dedicated connection between two
devices such as PBXs that signal each other using internal protocols. This alternative requires structured CES.
In the CAS method, also known as the translate model, the network interprets the signal. When a station requests service, the ATM network sets up an SVC with the requested QoS to the terminating endpoint. When a call setup request is received, the source ATM switch determines the route through the network based on the QoS requested. It uses the PNNI protocol to send a setup request through the network to determine whether each switch has the resources to support the connection. Once the connection is set up, it has the same degree of stability as a circuit-switched connection.
LAN EMULATION
Since ATM is a connection-oriented protocol, it is far from a seamless service for linking LANs with their variable-length frames and connectionless format. To serve the market for LAN-to-LAN interconnection, ATM uses LANE to make the ATM channel look like a bridge to the LAN protocols. LANE sets up SVCs across ATM networks to serve LAN clients that exist in each host. Data transmitted over ATM is encapsulated into cells, but ATM does not inspect cell contents. Therefore, it must have a method of mapping the underlying addresses to ATM addresses. LANE’s function is to map MAC addresses to ATM addresses, encapsulate IP datagrams into ATM cells, and deliver them across the network. The LANE protocol defines the operation of an ELAN (emulated LAN), which is effectively a VLAN implemented across an ATM network. Multiple ELANs can be defined across the same network, but LANE operates at layer 2, so it is confined to creating bridged connections. If multiple ELANs need to communicate, external routers are required.
A LANE network, as illustrated in Figure 35-7, consists of a collection of servers that enable ATM to support functions that it lacks, such as broadcast capability. The figure shows separate servers, but the functions can be integrated into a single box. A LAN emulation client (LEC) in each host is the interface between the LAN and the ATM network. It can run in any device, such as an ATM edge switch, that has an ATM interface on one side and Ethernet on the other. Each
ELAN acts like a broadcast domain.
When a LEC joins an ELAN, it learns which ELAN it belongs to by communicating with a LAN emulation configuration server (LECS) to identify its LAN emulation server (LES). The LECS accepts requests from clients and informs them of the type of LAN being emulated and which LES to use. The LES is a central database that correlates all MAC addresses on the ELAN with their ATM addresses. Its function is similar to ARP on an IP network. If the LES cannot resolve the address, it uses the broadcast and unknown server (BUS) to broadcast frames to all of the LECs in a broadcast group. When the address is resolved, the LES returns the ATM address to the requesting client.
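The resolution flow just described can be sketched as follows. The class names, addresses, and fallback behavior are illustrative simplifications, not taken from the LANE specification.

```python
class LES:
    """LAN emulation server: MAC-to-ATM address database for one ELAN."""

    def __init__(self, table: dict):
        self.table = table

    def resolve(self, mac: str):
        # None means "unknown; fall back to the BUS."
        return self.table.get(mac)

les = LES({"00:11:22:33:44:55": "atm-addr-A"})

def lec_resolve(mac: str) -> str:
    """A LEC's view: ask the LES, fall back to a BUS broadcast."""
    addr = les.resolve(mac)
    if addr is None:
        # BUS path: the frame is broadcast to every LEC in the group and
        # the answer is learned from whichever client responds (not modeled).
        return "broadcast-via-BUS"
    return addr
```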
Multiprotocol Over ATM
LANE operates at the datalink layer, and therefore enables ATM to operate as a bridge between LANs. The ATM Forum’s MPOA operates at the network layer to enable routed networks to take advantage of ATM’s low latency and scalability without the need for external routers. The objective of MPOA is to identify a flow between two network endpoints and affix a label to it that can be directly tied to an ATM virtual circuit. The network nodes then can forward packets based on labels rather than on IP address. The labels can be VCIs or frame relay DLCIs. Each flow is related to a specific path through the network that makes IP behave as if it were connection oriented. In effect, MPOA assigns a flow between endpoints to a tunnel through the network. Since it reduces router hops and processing, the routers can handle significantly more throughput and networks can behave in a more predictable manner.
MPOA assigns two functions to the router. The host functional group deals with direct communication with the end user devices. The edge device functional group deals with functions such as virtual circuit mapping, route determination, and packet forwarding. A protocol known as Next Hop Resolution Protocol (NHRP) enables routers to determine IP-to-ATM address mappings so the edge device can establish a shortcut path to the destination. This limits router processing and improves performance.
MPOA consists of three components:
-Route servers, which perform the routing function for hosts and edge devices. The route server appears to other routers in the network to be a normal router, but it connects the session to an ATM virtual circuit. The route server can be embedded in the ATM switch.
-Edge devices connect traditional LANs to the MPOA network. They can forward packets between LAN and ATM interfaces.
-ATM hosts are MPOA-enhanced LANE hosts that are directly connected to the MPOA network.
MPOA is in many ways similar to LANE and requires LANE for its operation. At startup time, devices contact a configuration server that knows which devices are assigned to virtual networks. As devices are turned on to connect with the network, they register themselves with their servers so they can acquire address information and begin communicating.
ATM APPLICATION ISSUES
Although the predominant use of ATM is in carrier backbones, enterprise networks also use it as a backbone. Its major advantage, aside from raw capacity, is its ability to carry multiple types of traffic. Frame relay and IP can carry multimedia traffic, but they lack the traffic management and signaling capability of dedicated circuits, which are ATM’s strong points. Routers can be configured with an ATM backbone, a configuration that will become more prominent as voice and video are fed over high-speed routers. This section discusses some considerations in implementing ATM in an enterprise network.
The network design begins with collecting detailed information about current applications and traffic flows and how they will likely change in the future. If voice and video traffic will be added to the network, data from existing switches should be collected and organized into a matrix of originating and terminating points and traffic volumes between them. Analyze each application according to its sensitivity to delay, jitter, and packet loss as a way of determining ATM service class requirements.
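The planning step above can be sketched in code: organize measured traffic into an origin/destination matrix and tag each application with a candidate ATM service class based on its delay sensitivity. The class names follow the text's categories; the flows and the mapping rule are invented for illustration.

```python
flows = [
    # (origin, destination, application, Mbps) -- sample data, not measured
    ("HQ", "Plant", "voice", 2.0),
    ("HQ", "Plant", "file-transfer", 10.0),
    ("Plant", "DC", "video", 4.0),
]

def service_class(application: str) -> str:
    """Crude first-pass mapping of application to ATM service class."""
    delay_sensitive = {"voice": "CBR", "video": "RT-VBR"}
    return delay_sensitive.get(application, "UBR")  # default: best effort

# Traffic matrix keyed by (origin, destination), volumes per service class.
matrix = {}
for origin, dest, app, mbps in flows:
    matrix.setdefault((origin, dest), {})[service_class(app)] = mbps
```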
Evaluating ATM Services
In choosing ATM services, here are some considerations:
-Will the network use private switches and circuits, a public ATM carrier, or a combination of both?
-Does the carrier offer both PVC and SVC? Is the charging based on usage, bandwidth, connect time, or some combination?
-Does the carrier support all of the traffic classes the application demands?
-What is the carrier’s network topology? Does it have enough switches to ensure reliability? Where are its POPs and how well do they fit your network requirements?
-Does the carrier offer managed services for subscriber access devices and routers?
-Do the published SLAs with respect to end-to-end delay, delay variation, cell loss ratio, and other such QoS measurements meet the requirements of the application? How does the carrier measure the service, and what kinds of quality reports does the user receive?
-What are the carrier’s traffic shaping and policing policies?
-Is routing in the network core static or dynamic?
-Can the carrier provide the necessary bandwidth? Can it handle peaks in compliance with the service agreement?
-What are the bandwidth increments? Does the carrier offer inverse multiplexing to increase the bandwidth?
-Can you internetwork between ATM and frame relay?
-What kinds of network management reports does the carrier provide?
-Does the carrier support MPOA? PNNI?
Addressing procedures and policies are also important to consider when selecting a carrier. Unlike in the voice network, number portability is not implemented in ATM. If the network is completely private and will never be connected to other networks, then a private addressing plan can be used. Otherwise, it is necessary to obtain addresses that fit within the public numbering plan. Some considerations, which are not unlike the kind of problems you have with DID numbers, include these:
-Will the carrier give you an address base large enough to support anticipated growth?
-Can you get a contiguous range of numbers?
-If you change carriers can you retain the numbers?
-What happens if you have an address conflict with an adjoining network?
Evaluating ATM Equipment
Queue management is one of the primary factors distinguishing products on the market. Some products may support a limited number of queues, which will not be adequate for end-to-end QoS. Determine whether a switch is blocking or nonblocking—nonblocking meaning that the switching fabric can handle the capacity of all the input ports without cell loss. Look also at scalability and upgradeability. Is the switch capable of keeping pace with changes in ATM technology? What kind of switching architecture does it use—shared memory or self-routing? How does it handle output link contention?
-Look at bandwidth scalability. For example, can you upgrade a DS-3 link to an OC-3 link, and does this require rebooting the switch? In other words, can you add bandwidth without disrupting other services?
-Ease of setup is important and one of the distinguishing factors among products. Determine whether the equipment can be set up for all classes of operation from UBR to CBR.
-Interoperability will be important if the equipment is not all furnished by the same manufacturer. Insist on demonstrated interoperability of all required features from the network interface cards through the network of switches.
-Determine the degree of fault tolerance the equipment has. Are critical components such as processors and power supplies redundant? Are modules hot swappable?
-Does the product support integrated link management interface (ILMI)? ILMI enables equipment to monitor link health and configure addresses.
-How do the switches handle congestion? If congestion occurs, can the switch recognize high-priority traffic such as video and give it priority over traffic such as data that can be discarded with less effect on service?
-What is the process for routing around trouble? Does it implement PNNI to route around failures or congestion?
-What management tools does the carrier provide? Is the equipment SNMP compatible? Does it support ATM-RMON MIBs?
-Does the equipment use ATM Forum standards in all cases or are some standards proprietary?
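The blocking/nonblocking distinction raised above reduces to a capacity check: a fabric is nonblocking when it can carry every input port at full rate simultaneously. The port speeds below are illustrative.

```python
def is_nonblocking(fabric_gbps: float, port_gbps: list) -> bool:
    """True if the switching fabric covers the sum of all input ports."""
    return fabric_gbps >= sum(port_gbps)

# Sixteen OC-3 ports (155 Mbps each) need about 2.5 Gbps of fabric.
sixteen_oc3 = [0.155] * 16
```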