Saturday

Cable Interference and Noise Issues

Electromagnetic interference (EMI) is a potential problem that can disrupt network communications wherever there are active electrical and electronic devices. Selecting the right cabling and routing it properly are important to reducing communications interference problems. All network components, including the connectors and patch panels, must be designed to perform satisfactorily in the presence of external noise. Cable routing should conform to the manufacturer’s recommendations and always avoid potential interference sources. Likely office building sources of EMI are lift motors (elevators), automatic doors, and air-conditioning units. The older the equipment, the more likely it is to produce EMI. Closed metal conduits and ducting for the cabling system provide extra protection against EMI sources that cannot be corrected or avoided. Balanced transmission over UTP cable offers strong protection against external noise. In EMI-sensitive or hostile environments, the only solution may be optical fiber cable, which is immune to external noise.

There are FCC regulations (Part 68 and Part 15) that cover telecommunications network electromagnetic compatibility (EMC) with other electronic devices. Network system installers and users are responsible for conforming to EMC guidelines. Installers must ensure that cable routing and ducting meet specifications that eliminate interference problems. Some manufacturers provide warranties on the EMC performance of certified installations using their cabling.

In addition to the potential for interference from external electrical and electronic devices, the active pairs in a multipair cable can interfere with each other. Interference between cable pairs is known as crosstalk. Crosstalk may be measured by two methods: pair-to-pair and PowerSum. The pair-to-pair method measures only the maximum interference caused by any other single active cable pair. Near-end crosstalk (NEXT), the pair-to-pair measurement metric, is defined as the signal coupled from one pair to another in a UTP cable. It is called NEXT because it is measured at the end where one pair is transmitting (where the transmitted signal is largest and, hence, causes the most crosstalk). Crosstalk is minimized by the twists in the cable: each pair has a different twist rate, so the pairs act as antennas sensitive to different frequencies and do not pick up signals from neighboring pairs. Keeping the twists as close as possible to the terminations minimizes crosstalk. Far-end crosstalk (FEXT) measures the effect of signal coupling from one pair to another over the entire length of the cable, and it is measured at the far end.

Another frequently cited measurement associated with crosstalk is the attenuation-to-crosstalk ratio (ACR). Attenuation is the reduction in signal strength due to loss in the cable. ACR indicates how much “headroom” the signal has over crosstalk noise at the receiver. It is important that the signal strength at the receiving end be high enough for reception by the network hub/switch to pass through to workstation nodes or other hubs/switches. Ethernet LANs send very high-speed signals through the cable, and attenuation varies with the frequency of the signal. Attenuation tests are therefore performed at several frequencies, as specified in the 568 standards. The test requires a tester at each cable end, one to send and one to receive. The loss between the ends is calculated, recorded, and compared with pass/fail criteria for UTP cable at Category 3, 4, and 5 frequencies.
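
Because both quantities are expressed in decibels, the calculation reduces to a subtraction, as the short sketch below illustrates. The NEXT and attenuation figures used are illustrative assumptions, not values from any particular category specification.

```python
def acr_db(next_loss_db: float, attenuation_db: float) -> float:
    """Attenuation-to-crosstalk ratio: the headroom (in dB) between
    the attenuated received signal and the crosstalk noise floor."""
    return next_loss_db - attenuation_db

# Illustrative values for a link at a single test frequency: 32 dB of
# NEXT loss and 22 dB of attenuation leave 10 dB of headroom. A
# negative result would mean the signal is buried in crosstalk noise.
print(acr_db(32.0, 22.0))  # -> 10.0
```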

Performance losses can be greater than indicated by a pair-to-pair measurement if there are several active pairs in a multipair cable strand. For this reason, the preferred method of measuring crosstalk is known as PowerSum, which is based on measurements taken when all pairs in a multipair cable are active. This is the more realistic crosstalk measurement for Fast Ethernet and Gigabit Ethernet LANs, where all pairs are used to carry signals, often simultaneously. PowerSum is the recommended method for cables with more than four pairs.
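
A minimal sketch of the PowerSum idea follows, assuming the individual pair-to-pair NEXT loss values are already known: the coupled powers of all disturbers are summed on a linear scale and converted back to decibels.

```python
import math

def power_sum_next_db(pair_to_pair_next_db) -> float:
    """Combine individual pair-to-pair NEXT loss values (dB) for one
    disturbed pair: the disturbers' coupled powers add on a linear
    scale, so the combined loss is lower (worse) than any single
    pair-to-pair figure."""
    total_power = sum(10 ** (-n / 10) for n in pair_to_pair_next_db)
    return -10 * math.log10(total_power)

# Illustrative values for three disturbers in a four-pair cable:
print(power_sum_next_db([40.0, 42.0, 45.0]))  # ~37.1 dB, worse than 40 dB
```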

Wednesday

Cabling System Fundamentals

A structured cabling architecture design is intended to accommodate telecommunications technology changes with minimal impact on the other cabling subsystems, such as electrical cabling. The target life cycle of an average cabling installation is up to 20 years, during which a few generations of telecommunications systems can be expected to be installed and replaced or upgraded. Another planning assumption is that networking and bandwidth requirements will certainly increase during the life cycle of the cabling system. The following are key factors used to specify networks and cabling, as identified by Avaya in its SYSTIMAX SCS guidebook:

  • Usage patterns, including combined size and duration of peak loads for all applications

  • Expected increase in bandwidth demands

  • The number of users and anticipated changes in that number

  • Location of users and maximum distances between them

  • The likely rate of change in users’ locations (churn)

  • Connectivity with current and future devices and software

  • Space available for cable runs

  • Total cost of ownership

  • Regulations and safety requirements

  • Importance of protection against loss of service and data theft

PBX systems traditionally have been based on a star network topology. A star network topology includes many point-to-point links radiating from central equipment. The early LAN topologies were based on ring and bus network designs. A ring network topology has a continuous transmission loop that interconnects every device. The most familiar example of a ring network topology is the IBM token ring LAN. A bus network topology is a communications link that connects devices along the length of a cable. The original Ethernet LAN was based on a bus network topology.

Today’s dominant LAN technology is based on Ethernet standards. The logical topology of an Ethernet LAN is a bus, but the physical topology of the network is a star. Ethernet workstations that connect to an Ethernet hub or switch communicate over a high-speed bus housed in the hub or switch, but the network nodes themselves are connected in a clustered star topology. The star topology favored by PBX systems and adopted by Ethernet LANs is now the accepted communications system network topology.

The first Ethernet LAN installations used coaxial cable as the transmission medium. During the mid-1980s the cabling used by PBX systems, known as unshielded twisted pair (UTP), was adapted for Ethernet LANs. Telephony UTP cabling was classified under the EIA/TIA cabling specifications as Category 3 and was used for 10Base-T Ethernet LANs operating at 10 Mbps. A 10Base-T Ethernet LAN used two pairs of Category 3 UTP cabling, and a 100Base-T4 Ethernet LAN used four-pair Category 3 UTP cabling. The 100-Mbps Fast Ethernet, also known as 100Base-TX, used two-pair Category 5 UTP cabling. The 1000-Mbps (1 Gbps) Ethernet, 1000Base-T, uses four-pair Category 5 UTP cabling. The 1000Base-TX, a lower cost alternative to 1000Base-T, uses the recently introduced Category 6 UTP cabling. PBX system telephony requirements can be satisfied with any of these UTP cabling types, making possible a single network cabling system infrastructure for voice and data communications applications.

In the SYSTIMAX SCS guidebook, Avaya lists the following considerations for choosing the type of customer network cabling:

  • Maximum distance between network hubs and nodes

  • Space available in ducting and floor/ceiling cavities

  • The levels of electromagnetic interference (EMI)

  • Likely changes in equipment served by the system and the way it is used

  • Level of resilience required

  • The required life span of the network

  • Restrictions on cable routing that dictate cable bend radius

  • Existing cable installations with potential for reuse

For the past two decades, most customers have used or installed two different cabling systems: one for telephony and one for data LAN applications. The evolution of the PBX system to an IP telephony platform will allow the large installed base of customers with circuit switched PBX systems to slowly phase out the dual cabling infrastructure, and it gives customers designing an entirely new converged voice/data network the opportunity to install a single cabling system. PBX systems installed before 1990 were implemented with Category 3 UTP, but more recent installations may have been based on Category 5 UTP, the same wiring used for data LANs. A new communications system installation would likely be based on a generic cabling infrastructure using Category 5 UTP to provide for future needs.

A generic cabling system is a structured telecommunications cabling system capable of supporting a wide range of customer applications. Generic cabling can be installed before the definition of required applications because application-specific hardware (telephones, computers, etc.) is not part of the structured cabling design. Generic cabling can be enhanced through the use of flood wiring, which is the installation of sufficient cabling and telecommunications outlets in a work area to maximize the flexibility of the location for devices connected to the network. Many customers are currently installing four or six telecommunications outlets per work area, although the recommended minimum is two.

Sunday

PBX Cabling Guidelines

Telephony wiring dates back 125 years, to the days when Alexander Graham Bell was tinkering with the first telephone. Telephones traditionally have used loop current for voice communications and signaling transmission. For many years single-pair (two-wire) cabling supported telephones working behind a PBX system, but system equipment innovations, beginning with the introduction of digital switching and stored program call control, forced changes in the cabling infrastructure during the late 1970s. The first generation of proprietary PBX telephones, first electronic and then digital, required multiple wiring pairs to support the more advanced features and functions available with the new technology. At the same time, the early data LANs required a wiring infrastructure of their own, based on coaxial cable. As customer premises voice networks and data networks evolved in the mid-1980s, issues such as a common infrastructure and increasing transmission bandwidth requirements needed to be addressed. The existing telephony wiring system, fine for voice but inadequate for data, needed a major overhaul.

In 1985, two standards bodies began working on specifications for a generic telecommunications cabling system to support a mix of communications media (voice, data, video) in a multivendor environment. The Telecommunications Industry Association (TIA) and the Electronic Industries Association (EIA) formed a joint committee known as the EIA/TIA 41.8 Committee. After 6 years of work, the TIA/EIA 568 standard was issued. TIA/EIA 568 is more formally known as the Commercial Building Cabling Standard and outlines specifications for a generic telecommunications cabling system. The American National Standards Institute (ANSI) also adopted this standard, so it is sometimes referred to as ANSI/TIA/EIA 568.

There is a corresponding series of specifications known as ANSI/TIA/EIA 569: Commercial Building Standard for Telecommunications Pathways and Spaces. The purpose of ANSI/TIA/EIA 569 is to standardize design and construction practices within and between buildings that support telecommunications equipment and transmission media. The standards are outlined for rooms or areas and pathways into and through areas where telecommunications media and equipment are installed. To simplify the implementation and administration of the cabling infrastructure, another series of specifications was developed: ANSI/TIA/EIA 606, the Administration Standard for the Telecommunications Infrastructure of Commercial Buildings.

In addition to the standards specified by the ANSI/TIA/EIA recommendations, the International Organization for Standardization (ISO) defined a generic cabling system recommendation known as ISO/IEC IS 11801. The ISO standard is intended for global use and is broader in scope than the ANSI/TIA/EIA standards for the North American market. The European counterpart of the ANSI/TIA/EIA standard is EN 50173, which is more similar to 568 than to the ISO standard.

Wednesday

TIA IP Telephony QoS Recommendations

The TIA has done extensive research and analysis to understand IP telephony voice quality. It used the ITU-T Recommendation G.107 E-model to develop its own recommendations for optimizing IP telephony QoS levels, categorizing them by sources of potential speech impairment: delay, speech compression, packet loss, tandeming, and loss plan. The E-model comprises several models that relate specific speech impairment factors and their interactions to end-to-end performance.

The specific recommendations, as summarized in TIA/EIA/TSB116, are:

  • Delay recommendation 1—Use G.711 end to end because it has the lowest Ie value (equipment impairment value) and therefore allows more delay for a given voice quality level.

  • Delay recommendation 2—Minimize the speech frame size and the number of frames per packet.

  • Delay recommendation 3—Actively minimize jitter buffer delay.

  • Delay recommendation 4—Actively minimize one-way delay.

  • Delay recommendation 5—Accept the [TIA’s] E-model results, which permit longer delays for low Ie codecs, like G.711, for a given R value (transmission rating factor).

  • Delay recommendation 6—Use priority scheduling for voice-class traffic, RTP header compression, and data packet fragmentation on slow links to minimize the contribution of this variable delay source.

  • Delay recommendation 7—Avoid using slow serial links.

  • Speech compression recommendation 1—Use G.711 unless the link speed demands compression.

  • Speech compression recommendation 2—Speech compression codecs for wireless networks and packet networks must be rationalized to minimize transcoding issues.

  • Packet loss recommendation 1—Keep (random) packet loss well below 1 percent.

  • Packet loss recommendation 2—Use packet loss concealment (PLC) with G.711.

  • Packet loss recommendation 3—If other codecs are used, then use codecs that have built-in or add-on PLCs.

  • Packet loss recommendation 4—New PLCs should be optimized for less than 1 percent (random) packet loss.

  • Transcoding recommendation 1—Avoid transcoding where possible.

  • Transcoding recommendation 2—For interoperability, IP gateways must support wireless codecs or IP must implement unified transcoder-free operations with wireless.

  • Tandeming recommendation 1—Avoid asynchronous tandeming, if possible.

  • Tandeming recommendation 2—Synchronous tandeming of G.726 is generally permissible. Impairment depends on delay, so long-delay digital circuit multiplication equipment (DCME) should be avoided.

  • Loss Plan recommendation 1—Use TIA/EIA/TSB122-A, the voice gateway loss and level plan.

Following the Cisco Systems and TIA recommendations and guidelines may prove difficult if an aging network infrastructure is installed that cannot support most, if not all, of these QoS control mechanisms. It is obvious from the material covered in this chapter that a close working relationship between voice and data communications personnel is required to successfully implement and operate an IP-PBX system. If IP telephony QoS is not comparable to the experience station users have grown accustomed to with their circuit switched PBX system, the new technology may be rejected as an enterprise communications solution, even if it offers potential cost savings and the benefit of new applications support. A green field location offers the best bet for a large IP-PBX system installation because it is easier to begin from scratch than to attempt a network upgrade while continuing to support ongoing communications operations.

The DiffServ model divides traffic into a small number of classes. One way to deploy DiffServ is simply to divide traffic into two classes. Such an approach makes good sense: if you consider the difficulty that network operators experience just trying to keep a best-effort network running smoothly, it is logical to add QoS capabilities to the network in small increments.

Suppose that a network operator has decided to enhance a network by adding just one new service class, designated as “premium.” Clearly, the operator needs some way to distinguish premium (high-priority) packets from best-effort (lower-priority) packets. Setting a bit in the packet header to one could indicate that the packet is a premium packet; if it is zero, the packet receives best-effort treatment. With this in mind, two questions arise:

  • Where is this bit set and under what circumstances?

  • What does a router do differently when it sees a packet with the bit set?

A common approach is to set the bit at an administrative boundary, such as at the edge of an Internet service provider’s (ISP’s) network for some specified subset of customer traffic. Another logical place would be in a VoIP gateway, which could set the bit only on VoIP packets.

What do the routers that encounter marked packets do with them? Here again there are many answers. The DiffServ working group of the IETF has standardized a set of router behaviors to be applied to marked packets, known as per-hop behaviors (PHBs). PHBs define the behavior of individual routers rather than of end-to-end services. Because there is more than one new behavior, more than one bit is needed in the packet header to tell the routers which behavior to apply. The IETF has decided to take the ToS byte from the IP header, which has not been widely used in a standard way, and redefine it. Six bits of this byte have been allocated for DSCPs. Each DSCP is a 6-bit value that identifies a particular PHB to be applied to a packet. Current releases of Cisco IOS software use only 3 bits of the ToS byte for DiffServ support. This is adequate for most applications, allowing up to eight classes of traffic. Full 6-bit DSCP support is under development.
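
As a concrete illustration, an endpoint or gateway can request a marking by writing the former ToS byte on its sockets, with the DSCP occupying the upper 6 bits. This is a sketch under stated assumptions: a POSIX-style system that honors the IP_TOS socket option, using the expedited forwarding codepoint (46) described next.

```python
import socket

EF_DSCP = 46             # expedited forwarding codepoint (RFC 3246)
tos_byte = EF_DSCP << 2  # DSCP occupies the upper 6 bits -> 0xB8

# A VoIP sender could mark its outgoing RTP/UDP packets like this:
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte)

# The top 3 bits of the same byte still read as the legacy IP
# Precedence value, which is how 3-bit DiffServ support coexists
# with full 6-bit DSCPs:
print(tos_byte >> 5)  # -> 5
```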

One of the simplest PHBs, and one that is a good match for VoIP, is expedited forwarding (EF). Packets marked for EF treatment should be forwarded with minimal delay and loss at each hop. The only way a router can guarantee this to all EF packets is if the arrival rate of EF packets is strictly limited to less than the rate at which the router can forward them. For example, a router with a 256-kbps interface needs the arrival rate of EF packets destined for that interface to be less than 256 kbps. In fact, the rate must be significantly below 256 kbps to deal with bursts of arriving traffic and to ensure that the router retains some ability to send other packets.

The rate limiting of EF packets may be achieved by configuring the devices that set the EF mark (e.g., VoIP gateways) to limit the maximum arrival rate of EF packets into the network. A simple, albeit conservative, approach would be to ensure that the sum of the rates of all EF packets entering the network is less than the bandwidth of the slowest link in the domain. This would ensure that, even in the worst case, where all EF packets converge on the slowest link, the link is not overloaded and the correct behavior results.
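
The conservative rule just described can be checked with simple arithmetic, as sketched below. The per-stream rates are assumed known, and the 50 percent headroom factor is an illustrative assumption rather than a figure from the standard.

```python
def ef_load_is_safe(ef_stream_rates_kbps, slowest_link_kbps,
                    headroom: float = 0.5) -> bool:
    """Conservative check from the text: the sum of all EF (voice)
    stream rates must stay well below the slowest link in the domain.
    The 50 percent headroom factor is an illustrative assumption made
    here to leave room for bursts and non-EF traffic."""
    return sum(ef_stream_rates_kbps) <= headroom * slowest_link_kbps

# Ten 24-kbps voice streams converging on a 256-kbps bottleneck:
print(ef_load_is_safe([24] * 10, 256))  # False: 240 kbps leaves no room
```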

In fact, the need to limit the arrival rate of EF packets at a bottleneck link, especially when the topology of the network is complex, turns out to be one of the greatest challenges of using only DiffServ to meet the needs of VoIP. For this reason, an approach based on Integrated Services and RSVP is appropriate in situations where it is not possible to guarantee that the offered load of voice traffic will always be significantly less than link capacity on all bottleneck links.

Sunday

QoS Controls

QoS controls can be segmented into several categories: traffic authorization, traffic modification, and traffic adaptation. Traffic authorization controls a station user’s access to resources within a domain of control. Traffic authorization methods include admission control, eligibility control, and application control. These are forms of restriction that allow traffic only if a station user provides a password, the station user is on an access list, or the station user is permitted to do so by a policy management server. Traffic modification controls the type of traffic on the network through classification (segregating traffic into different classes), shaping (smoothing out traffic peaks to avoid overload situations), or policing (dropping traffic that doesn’t respect policies). Traffic adaptation methods include protocol control, path control, user behavior, congestion avoidance, and congestion management.

There are several commonly used QoS mechanisms supported by most current IP-PBX systems. The two most common class of service (CoS) mechanisms are IEEE 802.1p/Q tagging (Layer 2) and type of service (ToS) prioritization (Layer 3). Both provide prioritization but have their limitations. A better mechanism, developed by the IETF, is differentiated services (DiffServ), a more advanced architecture that builds on ToS.

802.1p/Q

The IEEE 802.1p standard for QoS prioritization is a specification defining 3 bits within the IEEE 802.1Q tag in the MAC header (OSI Layer 2). The 802.1Q tag was designed originally to support VLAN operability and was then extended to support traffic priorities. The tag adds 4 bytes to the Layer 2 header, including a 16-bit Tag Control Information field whose top 3 bits classify priority. Frames carrying the tag are called tagged frames. The 3 priority bits allow eight priority levels, which do not offer extensive policy-based service levels. Typically, a NIC in a LAN system sets the bits according to its needs, and Layer 2 switches use this information to direct the forwarding process.
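
For illustration, the sketch below packs a 4-byte 802.1Q tag: the 0x8100 tag protocol identifier followed by the 16-bit Tag Control Information field, whose top 3 bits carry the 802.1p priority. The priority and VLAN values used are arbitrary examples.

```python
import struct

def dot1q_tag(priority: int, vlan_id: int, cfi: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag: a 16-bit tag protocol identifier
    (0x8100) followed by the 16-bit Tag Control Information field,
    whose top 3 bits carry the 802.1p priority."""
    assert 0 <= priority <= 7 and 0 <= vlan_id <= 0xFFF
    tci = (priority << 13) | (cfi << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)

# Tagging voice frames with priority 5 on VLAN 100 (both values are
# arbitrary examples, not requirements of the standard):
print(dot1q_tag(5, 100).hex())  # -> '8100a064'
```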

If multiple LANs are interconnected by routers (Layer 3 switches), the Layer 2 priority bits must be used to drive Layer 3 QoS mechanisms. The 802.1p/Q mechanism does not operate on an end-to-end basis in an internetwork, but it does provide a simple method of defining and signaling an end system’s requirements within the switched network environment. The Layer 2 header is read only at the switch level; the boundary routers, where traffic congestion occurs, cannot take advantage of 802.1p prioritization unless it is mapped to a Layer 3 prioritization scheme. Even though prioritization is achieved within the switched network, it is lost at the LAN/WAN boundary routers.

Another potential problem is installing a LAN switch supporting 802.1p in a network with non-802.1p switches, which could lead to instability: older switches could misinterpret the tag added by the standard. Implementing 802.1p in older networks could therefore require a costly upgrade of all switches.

IEEE standard 802.1D is also supported by some IP-PBX systems for traffic prioritization. IEEE 802.1D extends the concept of MAC bridging to define additional capabilities of bridged LANs: to expedite traffic in support of the transmission of time-critical information in a LAN environment and to provide filtering services that support the dynamic use of Group MAC addresses in a LAN environment. The IEEE 802.1D Spanning Tree Bridge Protocol is a widely used bridge standard for interconnecting the family of IEEE 802 standard LANs. In this standard, a shortest path spanning tree with respect to a predetermined bridge, known as the root bridge, is used to interconnect LANs to form an extended LAN. The spanning tree defines a unique path between a pair of LANs, but this path may not be a shortest path. Moreover, because only one spanning tree is used, some bridges and some ports may not be used at all.

ToS

ToS was first defined in the early 1980s but went largely unused until recent IP traffic bottlenecks at the boundary routers required prioritization for better service levels. The IPv4 protocol has always contained an 8-bit field, called the ToS field, originally intended for use in packet prioritization. Its most widely implemented use, called IP Precedence, is a control mechanism that provides end-to-end control of QoS settings. The ToS octet in the IPv4 header includes three precedence bits defining eight priority levels, ranging from highest priority for network control packets to lowest priority for routine traffic. Three additional ToS bits are used to flag sensitivity to delay, throughput, and packet loss. Many boundary routers and ToS-enabled Layer 3 switches read the precedence bits and map them to forwarding and drop behaviors. Devices use IP Precedence bits, if set, to help with queuing management.

Differentiated Services (DiffServ)

An evolving IETF QoS control mechanism is known as DiffServ. DiffServ is based not on priority, application, or flow, but on the possible forwarding behaviors of packets, called per-hop behaviors (PHBs). DiffServ is rule based and offers a control mechanism for policy-based network management. The DiffServ framework is based on network policies: different kinds of traffic can be marked for different kinds of forwarding, and resources can then be allocated according to the marking and the policies. The IETF working group is completing a series of standards that redefine the IPv4 ToS byte as the Differentiated Services field, carrying the Differentiated Services Code Point (DSCP). The new field indicates the level of service desired and maps the packet to a particular forwarding behavior (PHB) for processing by a DiffServ-compliant router. The PHB provides a particular service level (bandwidth, queuing, and dropping decisions) in accordance with network policy.

Under DiffServ, mission-critical packets could be encoded with a DSCP that indicates a high-bandwidth, zero-loss routing path. The DiffServ-compliant boundary router would then make route selections and forward the packets accordingly, as defined by network policy and the PHBs the network supports. The highest-class traffic would get preferential treatment in queuing and bandwidth, and lower-class packets would be relegated to slower service.

The DSCP is 6 bits wide, allowing coding for up to 64 different forwarding behaviors. The DSCP replaces the older ToS bits, and it retains backward compatibility with the 3 precedence bits so that non–DS-compliant, ToS-enabled devices will not conflict with the DSCP mapping.

There are currently two standard PHBs: expedited forwarding (EF) and assured forwarding (AF). EF has one codepoint (DiffServ value), minimizes delay and jitter, and provides the highest level of aggregate QoS; traffic that exceeds the traffic profile is discarded. AF has four service classes and three drop precedences within each service class (12 codepoints in total). Excess AF traffic is not delivered with the same level of probability as traffic within the defined profile, and it may or may not be dropped. DiffServ assumes the existence of a service level agreement (SLA) between networks sharing a border; the SLA establishes the policy criteria and defines the traffic profile.
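
The 12 AF codepoints follow a regular pattern, sketched below using the RFC 2597 encoding (DSCP equals 8 times the class plus 2 times the drop precedence).

```python
def af_dscp(service_class: int, drop_precedence: int) -> int:
    """Assured forwarding codepoint: four classes (1-4), three drop
    precedences (1-3), encoded as DSCP = 8*class + 2*drop (RFC 2597)."""
    assert 1 <= service_class <= 4 and 1 <= drop_precedence <= 3
    return 8 * service_class + 2 * drop_precedence

# The full 12-codepoint AF matrix, AF11 through AF43:
for c in range(1, 5):
    print([af_dscp(c, d) for d in range(1, 4)])
# [10, 12, 14]  [18, 20, 22]  [26, 28, 30]  [34, 36, 38]
```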

Other QoS control mechanisms include RSVP, subnet bandwidth management (SBM), and multiprotocol label switching (MPLS). RSVP was used by the first generation of client/server IP-PBXs but is deemed too complex, with too much overhead, for many parts of the network. SBM is concerned with protocol layers above Layer 2 for internetworking between multiple LANs. MPLS is used primarily for private network routing applications, with limited appeal for premises-only communications applications.

Another approach to IP telephony QoS is the use of VLANs. VLANs can provide more efficient use of LAN bandwidth, can be used to distribute traffic loads, and are scalable to support high-performance requirements at a microsegment level. Traffic types, such as real-time voice and delay-insensitive data, can be segmented from one another. IEEE 802.1Q is used as the VLAN packet tagging standard.

Thursday

Factors Affecting QoS: Packet Loss and Latency

Packet Loss

The two major problem areas that affect IP telephony QoS are packet loss and delay. The two QoS impairment factors are sometimes interrelated.

Packet loss causes voice clipping and skips, often resulting in choppy and sometimes unintelligible speech. Voice packets can be dropped if the network quality is poor, the network is congested, or there is too much variable delay in the network. Poor network quality can lead to sessions frequently going out of service due to lost physical or logical connections. To avoid lost or late packets, it is necessary to engineer the IP telephony network to minimize situations that cause the problem, but even the best-engineered system will not stop congestion-induced packet loss and delay. To combat this problem, it is recommended that a buffer be used on the receiving end of a connection. Buffer length must be kept to a minimum because it contributes to end-to-end network delay. Dynamic receive buffers that increase or decrease in size can be used to handle late packets during periods of congestion and avoid unnecessary delays when traffic is light or moderate.

Packet problems that occur at the sending end of a connection can be handled by methods such as interleaving and forward error correction (FEC). Interleaving is the resequencing of speech frames before they are packetized. For example, if each packet carries two frames, the first packet contains frames 1 and 3 and the second packet contains frames 2 and 4. If a packet is lost, the missing speech frames will be nonconsecutive and the gaps will be less noticeable to the receiving party. FEC is a method that copies information from one packet into the next packet in the sequence, allowing the copied data to be used in the event a packet is lost or late.
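
A minimal sketch of the interleaving example above, assuming two frames per packet; frames are distributed across packets so that one lost packet leaves only nonconsecutive gaps.

```python
def interleave(frames, frames_per_packet: int = 2):
    """Spread consecutive speech frames across adjacent packets so a
    single lost packet produces scattered one-frame gaps rather than
    one long audible gap."""
    n = frames_per_packet
    packets = []
    for start in range(0, len(frames), n * n):
        block = frames[start:start + n * n]
        for i in range(n):
            packets.append(block[i::n])
    return packets

print(interleave([1, 2, 3, 4]))  # [[1, 3], [2, 4]] as in the text
```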

Different methods are used at the receiving end of the connection. Unlike a circuit switched network, a packet switched network breaks communications signals into small samples, or packets of information. Each packet has a unique header that identifies packet destination and provides information on reconstruction when the packet arrives. Packets travel independently across the LAN/WAN and can travel by different routes during a single call. Packets can be lost for two primary reasons: dead-end routes and network congestion. Network congestion can lead to packet drops and variable packet delays. Voice packet drops from network congestion are usually caused by full transmit buffers on the egress interfaces somewhere in the network. The packet is purposely dropped to manage congested links. As links or connections approach 100 percent use, the queues servicing those connections become full. When a queue is full, new packets attempting to enter the queue are discarded. This can occur on an Ethernet switch or IP network router. Network congestion is typically sporadic, and delays from congestion tend to be variable in nature. Egress interface queue wait times or large serialization delays cause variable delays of this type.

DSP elements in most current voice codecs can correct for up to 30 milliseconds of lost voice. If the voice payload sample is no greater than this loss time, the correction algorithm is effective, provided only a single packet is lost at any given time. There are several methods to compensate for lost or long-delayed packets. It is not practical to search for a lost packet to try to retrieve it. A preferred option is to conceal packet loss by replacing lost packets with something similar. One approach is to replay the last good packet in place of the lost one. This is a simple solution that is acceptable for rare packet loss, but a more complex solution is required for situations of frequent packet loss.
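
The replay option can be sketched in a few lines; here None stands for a lost packet, and the function name is illustrative.

```python
def conceal_losses(packets):
    """Replace each lost packet (None) by replaying the previous good
    one: adequate for rare, isolated loss, per the discussion above."""
    last_good = None
    output = []
    for pkt in packets:
        if pkt is None and last_good is not None:
            pkt = last_good  # replay the previous audio
        output.append(pkt)
        last_good = pkt
    return output

print(conceal_losses(["a", "b", None, "c"]))  # ['a', 'b', 'b', 'c']
```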

Several techniques are available for replacing a lost packet. One technique is to estimate the information that would have been in the packet. This concealment method generates synthetic speech to cover the missing data, and the synthetic speech should have spectral characteristics similar to those of the speaker. This is relatively easy for a CELP-type codec such as G.729A because the speaker’s voice signals are modeled during the encoding process. It is more difficult with a waveform codec such as G.711, because the amplitude of the waveform is coded rather than a model of how the sound was produced. G.711 packet loss concealment therefore requires more complex processing algorithms and more memory and adds to system delay. A waveform codec such as G.711 has a compensating advantage: it can rapidly recover from packet loss because the first speech sample in the first good packet restores the speech to the original, whereas CELP-based codecs require a few frames to catch up.

The concealment process requires the receiving codec to store a copy of the decoded speech in a circular history buffer from which the current pitch and waveform characteristics are calculated. With the first bad packet, the contents of the buffer are used to generate a synthetic replacement signal for the duration of the concealment. When two consecutive frames are lost, repeating a single pitch can result in harmonic artifacts, or beeps, that are noticeable when the erasure lands on unvoiced speech sounds, such as s or f, or on rapid transitions, such as the stops p, k, and d. Concealment algorithms often increase the number of pitch periods used to create replacement signals when multiple packets are lost. This varies the signal and creates more realistic synthetic speech.

There must be a smooth transition between synthesized and real speech signals. The first good packet after an erasure needs to be merged smoothly into the synthetic signal. This is done by mixing synthesized speech from the buffer with the real signal for a short time after the erasure period.

Packet loss becomes noticeable when a few percent of the packets are dropped or delayed, and it begins to seriously affect QoS when the percentage of lost packets exceeds a certain threshold (roughly 5 percent). Major problems also occur when packet losses are grouped in large bursts. The methods for dealing with packet loss must be balanced against the delay they add to packet transport between the connected parties.

Latency

Delay, commonly referred to as latency, is the time delay incurred in speech by the IP telephony system. It is usually measured in milliseconds from the time a station user begins to speak until the listener actually hears speech. One-way latency is known as mouth-to-ear latency. Round-trip latency is the sum of the two one-way latencies comprising a voice call. Round-trip latency in a circuit switched PBX system takes less than a few milliseconds; PSTN round-trip latency is usually tens of milliseconds but almost always less than 150 milliseconds. Based on formal Mean Opinion Score (MOS) tests, latency at or under 150 milliseconds is not noticeable to most people. Latency up to 150 milliseconds receives good to excellent MOS scores ranging between 4 and 5 (1–5 scale) and provides for a satisfactory IP telephony QoS experience. One hundred fifty milliseconds is specified in the ITU-T G.114 recommendation as the maximum desired one-way latency to achieve high-quality voice. Switched network latency above 250 milliseconds, more common for international calls, becomes noticeable and receives fair MOS scores. Latency above 500 milliseconds is annoying and deemed unsatisfactory for conducting an acceptable conversation.

Latency in an IP telephony network is incurred at several nodal points across the voice call path, including the IP telephony gateways at the transmitting and receiving ends of a conversation. Latency is cumulative, and any latency introduced by any component in an IP telephony system will directly affect the total latency experienced by the station users.

The gateway network interface for an IP peripheral voice terminal may be an integrated component of the telephone instrument, an external device such as a desktop IP adapter module, or embedded on a port circuit interface card housed in a PBX port carrier. The network interface in a gateway includes any hardware or software that connects the gateway to the telephone system or network. The typical network interface frames digitized audio PCM data streams and places them on the internal PCM bus for transport to a DSP. Very little latency is induced in this process, with typical maximums well below 1 millisecond. The DSP function is more complex because it involves compression or decompression of speech, tone detection, silence detection, tone generation, echo cancellation, and generation of “comfort” noise. The entire DSP mechanism is known collectively as vocoding.

DSP operations depend on processing entire frames of data at one time. The side effect of processing data in frames is that none of the data can be processed until the frame is completely full. Digitized speech arrives at a fixed rate of 8,000 samples per second, and the size of the frame processing the data will directly affect the amount of latency. A 100-sample frame would take 12.5 milliseconds to fill, and a 1000-sample frame would take 125 milliseconds to fill. Deciding on the frame size is a compromise: the larger the frame, the greater the DSP efficiency, but with that comes greater latency. Each standard voice coding method uses a standard frame size. The maximum latency incurred by the framing process depends directly on the selection of vocoder.
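
The frame-fill latency follows directly from the 8,000-samples-per-second rate, as the short sketch below confirms for the two frame sizes mentioned above.

```python
SAMPLE_RATE = 8000  # telephony speech samples per second

def frame_fill_ms(samples_per_frame: int) -> float:
    """Latency spent waiting for a DSP frame to fill: no sample can be
    processed until the whole frame has arrived."""
    return 1000.0 * samples_per_frame / SAMPLE_RATE

print(frame_fill_ms(100))   # 12.5 ms
print(frame_fill_ms(1000))  # 125.0 ms
```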

A G.711 voice codec can be programmed for frame size specifications, and very small frame duration delays can be used. A typical G.711 programmed frame duration is 0.75 milliseconds. A G.723.1 voice codec results in far greater frame delay than a G.729A voice codec, with only a slight comparative bandwidth savings.

After the collection of an entire frame is completed, the DSP algorithms must be run on the newly created frame. The time required to complete the processing varies considerably but never exceeds the frame collection time; otherwise, the DSP would never complete processing one frame before the next frame arrived. A DSP responsible for multiple gateway channels would continually process signals from one channel to the next. The latency incurred due to the DSP process is usually specified as the frame size in milliseconds, although the total latency from framing and processing is somewhere between the frame size and no more than twice the frame size.

There are three other gateway processes that add to latency: buffering, packetization, and jitter buffering. Buffering can occur when the resulting compressed voice data frames are passed to the network. This buffering is done to reduce the number of times the DSP needs to communicate with the gateway main processor. In other situations, it is done to make the results of different coding algorithms fit into one common frame duration (not length).

A multichannel gateway might be operating with different voice codecs on different channels. For example, a universal IP port interface card in a converged IP-PBX system may be handling G.729A off-premises calls across several gateway channels and G.711 premises-only calls across others. In such cases frames may be buffered so that, irrespective of the coding algorithm, one buffer is transferred per fixed period of 10 milliseconds, with multiple G.711 frames collected for each G.729A frame.

As coded voice (compressed or uncompressed) is prepared for transport across a LAN or WAN, it must be assembled into packets. This typically is done by the TCP/IP protocol stack using UDP and RTP; these protocols improve timely delivery of the voice data and eliminate the overhead of transmission acknowledgments and retries. Each packet has a 40-byte header (combined IP/UDP/RTP headers) containing the source and destination IP addresses, the port numbers, the packet sequence number, and other protocol information needed to properly transport the data. After the header, one or more frames of coded voice data follow.

An important consideration in voice coder selection is whether to pack more than one frame of data into a single packet. A G.723.1 voice coder (which produces 24-byte frames every 30 milliseconds) would have 40 bytes of header for 24 bytes of data, making the header 167 percent of the voice data payload, a very inefficient use of bandwidth resources. The most common way to reduce the inefficiency of the IP packet overhead is to put more than one coded voice frame into each IP packet. If two frames are carried per packet, the overhead figure drops to 83 percent, but another frame period is added to the latency total. This is a trade-off dilemma of an IP telephony system. To reduce overhead without increasing latency, voice frames from multiple gateway channels can be transported in the same packet: when voice from another channel in the originating gateway is going to the same destination gateway, the data can be combined into a single packet. The standard H.323 protocol does not support this latency-saving process, but proprietary solutions can implement it.
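
These overhead percentages can be reproduced with a few lines, assuming the 40-byte combined IP/UDP/RTP header noted earlier.

```python
IP_UDP_RTP_HEADER_BYTES = 40  # combined header size noted earlier

def header_overhead_pct(frame_bytes: int, frames_per_packet: int) -> float:
    """Header size as a percentage of the voice payload it carries."""
    return 100.0 * IP_UDP_RTP_HEADER_BYTES / (frame_bytes * frames_per_packet)

# G.723.1 produces 24-byte frames every 30 ms:
print(round(header_overhead_pct(24, 1)))  # 167 (one frame per packet)
print(round(header_overhead_pct(24, 2)))  # 83, at the cost of +30 ms latency
```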

Jitter buffer latency is based on the variability in the arrival rate of data across the network because exact transport times cannot be guaranteed. Network latency affects how much time a voice packet spends in the network, but jitter controls the regularity at which voice packets arrive. Typical voice sources generate voice packets at a constant rate. The matching voice decompression algorithm also expects incoming voice packets to arrive at a constant rate. However, the packet-by-packet delay inflicted by the network may be different for each packet, resulting in irregular packet arrival at the gateway. During the voice decoding process, the system must compensate for jitter and does this by buffering one packet of data from the network before passing it to the destination DSP. Having these “jitter buffers” significantly reduces the occurrence of data starvation and ensures that timing is correct when sending data to the DSP. Without jitter buffers, there is a very good chance that gaps in the data would be heard in the resulting speech. Jitter buffering improves the speech quality heard by the receiving station user but incurs more latency. The larger the jitter buffers, the more tolerant the system is of jitter in the data from the network, but the additional buffering causes more latency. Most systems use a jitter buffer time of no longer than 30 milliseconds, although 20 milliseconds is the most commonly used time. Jitter buffer time is usually programmable by the system administrator. Jitter buffering can be programmed at the desktop gateway or at any network gateway node.
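
A minimal playout-buffer sketch follows, assuming one packet of buffering as described above; a real implementation would add timestamp handling and, often, adaptive buffer depth.

```python
import collections

class JitterBuffer:
    """Minimal playout-buffer sketch: hold a configured depth of
    packets before releasing audio to the DSP, trading added latency
    for tolerance of irregular packet arrival times."""
    def __init__(self, depth_packets: int = 1):
        self.depth = depth_packets
        self.queue = collections.deque()

    def arrive(self, packet) -> None:
        self.queue.append(packet)  # network delivery, irregular timing

    def playout(self):
        # Called at the codec's constant frame rate. Audio is withheld
        # until the configured depth has accumulated.
        if len(self.queue) > self.depth:
            return self.queue.popleft()
        return None  # starvation: loss concealment would cover the gap

buf = JitterBuffer()
buf.arrive("pkt1")
buf.arrive("pkt2")
print(buf.playout())  # 'pkt1' is released once one packet stays buffered
```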

Beyond gateway latency, there is network latency. Network latency can occur at network interface points, router nodes, and firewall/proxy server points. Network interfaces are points at which data is passed between the different physical media used to interconnect gateways, routers, and other networking equipment. Examples are RS-232C modem and T1-interface connections to the PSTN or LAN/WAN links. A connection to a relatively slow analog transmission circuit via an RS-232C modem can incur a delay of more than 25 milliseconds; a T1-interface connection might incur a 1-millisecond delay; and a 100-Mbps Ethernet connection might incur a delay of less than 0.01 millisecond, based on a 100-byte data packet.
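
The interface figures quoted above are dominated by serialization delay, the time needed to clock a packet onto the wire, which the sketch below computes for the same examples.

```python
def serialization_delay_ms(packet_bytes: int, link_bps: float) -> float:
    """Time to clock one packet onto a link of the given bit rate."""
    return 1000.0 * packet_bytes * 8 / link_bps

print(serialization_delay_ms(100, 1.544e6))  # T1: ~0.5 ms
print(serialization_delay_ms(100, 100e6))    # 100-Mbps Ethernet: 0.008 ms
```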

Routing latency is incurred because each packet must be examined for its destination address and overhead headers before being directed to the proper route. The queuing logic used by many currently installed routers was designed without considering the needs of IP telephony, and problems result from the real-time requirements of voice communications. Many existing routers use best-effort routing, which is far from ideal for latency-sensitive voice traffic. Current IP routers support priority programming; in its absence, a router delays all data during congestion, irrespective of the application. For example, routers supporting the IETF’s RSVP allow a gateway-to-gateway connection to establish a guaranteed bandwidth commitment on the intermediate network equipment, which can dramatically reduce the variability in packet delivery and improve QoS. Multiprotocol label switching (MPLS) is another recent router programming tool that can reduce routing latency.

Network firewalls or proxy servers that provide security between the corporate intranet and the Internet must examine every incoming and outgoing IP packet. This process can incur a sizable amount of latency, so their use is almost always avoided in IP telephony applications. Routers with packet filter features can support some network security without significant added latency. Stand-alone firewalls or proxy servers must receive, decode, examine, validate, encode, and send every packet. A proxy server provides a very high level of network security but can incur more than 500 milliseconds of latency. This is not a problem for the Web-browsing applications for which proxy servers were designed, but it is clearly unacceptable for real-time voice communications. This is one reason using the relatively insecure Internet as a voice network is not yet practical.

When all latency elements are added up, one-way latency can seriously affect IP telephony QoS.

Monday

Fundamental LAN Planning Guidelines

The Cisco guide recommends a detailed analysis of the following LAN elements:

  • LAN/campus topology

  • IP addressing plan

  • Location of TFTP servers, DNS servers, DHCP servers, firewalls, network address translation (NAT) gateways, and port address translation (PAT) gateways

  • Potential location of gateways and call telephony servers

  • Protocol implementation including IP routing, Spanning Tree, VTP, IPX, and IBM protocols

  • Device analysis including software versions, modules, ports, speeds, and interfaces

  • Phone connection methodology (direct or daisy chain)

According to the Cisco guide, the significant LAN topology issues are:

  • Available average bandwidth

  • Available peak or burst bandwidth

  • Resource issues that may affect performance, including buffers, memory, CPU, and queue depth

  • Network availability

  • IP phone port availability

  • Desktop/phone QoS between user and switch

  • Network scalability with increased traffic, IP subnets, and features

  • Back-up power capability

  • LAN QoS functionality

  • Convergence at Layers 2 and 3

IP addressing issues that should be reviewed are:

  • Phone IP addressing plan

  • Average user IP subnet size in use for the campus

  • Number of core routes

  • IP route summary plan

  • DHCP server plan (fixed and variable addressing)

  • DNS naming conventions

Potential considerations with IP addressing include:

  • Route scalability with IP phones

  • IP subnet space allocation for phones

  • DHCP functionality with secondary addressing

  • IP subnet overlap

  • Duplicate IP addressing

The locations (or potential locations) of servers and gateways are important to ensure that service availability is consistent across the LAN infrastructure and for multiple sites. Gateways and servers in the review should include:

  • TFTP servers

  • DNS servers

  • DHCP servers

  • Firewalls

  • NAT or PAT gateways

  • Call telephony server

  • Gateway location

After determining the location of these network elements, the following issues should be analyzed:

  • Network service availability

  • Gateway support (in conjunction with the IP telephony solution)

  • Available bandwidth and scalability

  • Service diversity

IP telephony scalability and availability will be affected by protocols in the network. The following are the areas for protocol implementation analysis:

  • IP routing including protocols, summarization methods, non-broadcast media access (NBMA) configurations, and routing protocol safeguards

  • Spanning Tree configuration including domain sizes, root designation, uplink fast, backbone fast, and priorities in relation to default gateways

  • HSRP configuration

  • VTP and VLAN configuration

  • IPX, DLSW, or other required protocol services, including configuration and resource usage

With regard to protocol implementation, the following issues should be reviewed:

  • Protocol scalability

  • Network availability

  • Potential impact on IP telephony performance or availability

All network devices should be reviewed and analyzed to determine whether the network has the desired control plane resources, interface bandwidth, QoS functionality, and power management capabilities. The checklist for this process includes:

  • Device (type and product ID)

  • Software version(s)

  • Quantity deployed

  • Modules and redundancy

  • Services configured

  • User media and bandwidth

  • Uplink media and bandwidth

  • Switched versus shared media

  • Users per uplink and uplink load sharing/redundancy

  • Number of VLANs supported

  • Subnet size and devices per subnet

For establishing a network baseline, it is important that the following measurements be made to determine voice quality levels and potential problem areas:

  • Device average and peak CPU

  • Device average and peak memory

  • Peak backplane use

  • Average link use (prefer peak hour average for capacity planning)

  • Peak link use (prefer 5 minute average or smaller interval)

  • Peak queue depth

  • Buffer failures

  • Average and peak voice call response times (before IP telephony implementation)

Cabling questions that may help determine the readiness of the infrastructure for IP telephony include:

  • Does the building wiring conform to EIA/TIA-568A?

  • Does your organization comply with the National Electrical Code for powering and grounding sensitive equipment?

  • Does your organization comply with the more rigorous IEEE 1100-1992 standard for recommended practices of grounding and powering sensitive equipment?

  • Does the organization have standards for data center and wiring closet power that include circuit distribution, available power validation, redundant power supply circuit diversity, and circuit identification?

  • Does the organization use UPS and/or generator power in the data center, wiring closet, phone systems, and internetworking devices?

  • Does the organization have processes to manage back-up power via SNMP or to periodically validate and test it?

  • Does your business experience frequent lightning strikes? Are there other potential natural disasters?

  • Is the wiring to your building above ground?

  • Is the wiring in your building above ground?

Each VoIP stream consumes network bandwidth, and every conversation requires two such streams: one in each direction. The required bandwidth per conversation depends on several factors, but of primary importance is the codec used to digitize, compress, and convert an analog voice signal into IP format. The two codecs of most interest are G.711 and G.729A. G.711 is the TIA-recommended codec for optimizing IP telephony QoS because it reduces impairment of the voice signal across the network, but the signal is uncompressed and requires a large amount of bandwidth. To save on network transmission costs, G.729A is often used for off-premises traffic because its compression algorithm reduces bandwidth requirements.
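
A rough per-stream calculation is sketched below, assuming a 20-millisecond packetization interval (a common but not universal choice) and a 40-byte IP/UDP/RTP header per packet; Layer 2 framing overhead is ignored.

```python
HEADER_BYTES = 40  # combined IP/UDP/RTP headers

def voip_stream_kbps(codec_kbps: float, packet_ms: float) -> float:
    """Approximate IP-level bandwidth of one VoIP stream: codec payload
    rate plus per-packet header overhead. The 20-ms packetization
    interval used below is an assumption, not a requirement."""
    packets_per_second = 1000.0 / packet_ms
    header_kbps = packets_per_second * HEADER_BYTES * 8 / 1000.0
    return codec_kbps + header_kbps

print(voip_stream_kbps(64, 20))  # G.711: ~80 kbps in each direction
print(voip_stream_kbps(8, 20))   # G.729A: ~24 kbps in each direction
```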

Network performance and capacity planning help to ensure that the network will consistently have available bandwidth for data and VoIP traffic and that the VoIP packets will consistently meet delay and jitter requirements. Cisco recommends the following six-step process for network capacity and performance planning:

  1. Determine baseline existing network use and peak load requirements

  2. Determine VoIP traffic overhead in required sections of the network based on busy hour estimates, gateway capacities, and/or CallManager capacities

  3. Determine minimum bandwidth requirements

  4. Determine the required design changes and QoS requirements based on IP telephony design recommendations and bandwidth requirements (overprovide where possible)

  5. Validate baseline performance

  6. Determine trunking capacity

Friday

LAN/WAN Design Guidelines for VoIP

There is no standard definition of QoS as it applies to real-time voice communications carried over an Ethernet LAN or IP WAN. As applied to a circuit switched PBX, QoS means consistent, reliable delivery of control and communications signals in support of customer needs. This definition also can be used for LAN QoS in support of IP telephony. Enabling LAN QoS requires all network elements, at all network layers, to work together to support a required level of traffic and service.

An IP-PBX, by definition, is not a circuit switched communications system like a traditional PBX, but a system that uses an IP network infrastructure. An IP network makes more efficient use of available bandwidth resources than a circuit switched PBX and is designed to support the “bursty” nature of data communications traffic rather than the continuous traffic flow of real-time voice communications. IP networks can adapt to changing traffic conditions, but the level of service can be unpredictable. When used to support an IP-PBX system, the IP network must be properly designed and engineered to support the unique real-time traffic requirements of voice, which are more stringent than those of data communications.

QoS techniques manage bandwidth according to different application demands and network management settings, but they cannot guarantee a service level if resources are not available and allocated. Reserving resources for voice communications can seriously affect other network traffic. A priority for QoS network designers has therefore been to ensure that capacity remains for best-effort traffic after resource allocations have been made: QoS-enabled high-priority voice applications must not harm lower-priority data applications.

The Internet was based on a dumb network concept, with intelligent endpoints transmitting and receiving datagram packets that flow through a series of network routers. IP does not deliver reliable service over the Internet: packets can be dropped by routers, and higher-layer protocols retransmit them as necessary. This mechanism can assure data delivery, but not timely delivery. Such “best-effort” service may be adequate for data networking, but it is not good enough for voice communications.

Two-way audio and video traffic demands sufficient bandwidth and low latency. A major challenge for network planners is to design a LAN infrastructure that satisfies the QoS level PBX system users have grown accustomed to for their voice communications applications. A newly installed IP-PBX system in a green field location provides an ideal situation, but if a network is already installed and operating, introducing IP telephony-grade QoS should not disrupt existing services and applications.

LAN QoS levels fluctuate over time due to unanticipated changes in customer usage patterns and traffic flow. Even if QoS is degraded only for short periods, IP telephony services may be affected in ways noticeable to all system users, even while data communications services appear satisfactory. There are several reasons QoS can change:

  • Temporary excessive network usage

  • Insufficient link capacity

  • Insufficient switch/router resources

  • Traffic flow peaks

  • Traffic flow interference

  • Improper use of resources

Several basic control methods can be employed to manage QoS levels to ensure the higher grade of service level required by real-time voice communications:

  • Reserving fixed bandwidth for mission-critical voice communications applications

  • Restricting network access and usage for defined users or user groups

  • Assigning traffic priorities

  • Designating which kinds of traffic can be dropped when congestion occurs

There are several high-level decisions facing network planners and managers regarding the type of QoS-based network to be designed and operated. The network planner must decide whether network users are involved in the QoS functions or whether the network is in total control of them. If a network user has knowledge of QoS functions and a limited degree of QoS control, the network QoS is said to be explicit. If network QoS functions are predetermined and only the network administrator can program changes when needed, the network QoS is said to be implicit.

Another planning issue is whether QoS is soft or hard. Network QoS is said to be soft when there is no formal guarantee that target service levels will be met, even if QoS functions are implemented. Hard network QoS guarantees service at a predefined level; it is usually available only with connection-mode transport, such as ATM constant bit rate (CBR) service.

Network QoS is also manageable through network design: installing the necessary physical resources to support target service levels. IP-PBX system voice quality and availability can be determined by the physical LAN infrastructure and available cable bandwidth. Cisco Systems, a leading IP-PBX system supplier and the dominant supplier of data communications systems, has developed and published an IP telephony network planning guide. The Cisco guide is a planning tool for its CallManager IP-PBX system customers, but it is also useful as a network design guide for customers who plan to install and operate any converged or client/server IP-PBX system.
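
The arithmetic underlying such a planning guide can be sketched briefly. The calculation below assumes G.711 with 20-ms packetization and standard RTP/UDP/IPv4 and Ethernet header sizes (preamble and interframe gap ignored); the result, roughly 87 kbps per direction per call, is the kind of figure used to size LAN links:

```python
CODEC_KBPS = 64   # G.711 payload rate
PACKET_MS = 20    # packetization interval

payload_bytes = CODEC_KBPS * 1000 // 8 * PACKET_MS // 1000   # 160 bytes
overhead_bytes = 12 + 8 + 20 + 18  # RTP + UDP + IPv4 + Ethernet (incl. FCS)
packets_per_sec = 1000 // PACKET_MS                          # 50 packets/s

kbps_per_call = (payload_bytes + overhead_bytes) * 8 * packets_per_sec / 1000
print(f"{kbps_per_call:.1f} kbps per direction per call")    # 87.2
```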

Tuesday

Summarizing Client/Server IP-PBX Design Issues

It is apparent that client/server IP-PBX designs differ across models; there are major design differences even within a single supplier’s product family. For example, the Cisco Systems 7750 ICS all-in-one design is radically different from the supplier’s larger MCS 7825/7835 multiple server cluster design, and Siemens offers client/server models based on either a closed, embedded Windows NT server or a customer-provided server option. Client/server IP-PBX designs may be far simpler than traditional circuit switched PBXs and converged IP-PBX platforms, but there are enough design variables to create major differences between system models.

Figure 1 shows how the three layers of a client/server IP-PBX (call telephony server, gateways, and applications) can be designed and integrated into the system architecture. The choices available to a designer include a proprietary or nonproprietary call telephony server, integrated or external advanced applications support, and integrated or external gateways. Gateways and application servers may be a mix of proprietary and third-party solutions. Designers may select proprietary components for quality control and for developing feature/function capabilities not supported by third-party solutions; third-party components may reduce system costs and give customers more design flexibility in their purchase decisions.

Figure 1: Client/server IP-PBX design.

Several IP-PBXs originally designed around third-party call telephony servers were redesigned with customized servers because distributors and customers often failed to use a third-party server meeting the supplier’s recommended technical specifications. Several manufacturers that currently offer a proprietary call telephony server model plan to migrate their systems to a less expensive third-party server solution, but not until the design bugs and problems have been worked out over one or two system generations.

If issues related to proprietary or third-party components are not part of the customer-buying equation, there are other important design and performance criteria to be evaluated before a purchase decision is made. System reliability and survivability are as important when evaluating a client/server IP-PBX as they are in a circuit switched PBX design, as are port capacity, traffic handling, and call processing power. The following is a summary checklist of design and performance issues for evaluating a client/server IP-PBX (many of them unique to an IP telephony communications system):

  • System redundancy—Fully duplicated or shared back-up components for call processing, memory, and power functions

  • System port capacity—Stations (IP, non-IP) and trunk interfaces (analog, T1/E1, IP)

  • System traffic handling capacity—Distribution of gateway channels and conference circuits

  • System call processing—BHCC (busy hour call completions; see the traffic sizing sketch after this list)

  • Supported call control protocols and interfaces—H.323, SIP, MGCP, SGCP, MEGACO, etc.

  • Voice codec support—G.711, G.723.1, G.729(A,B), GSM, etc.

  • QoS support—DiffServ, 802.1p/Q, COS, TOS, IP precedence, RSVP, dynamic jitter buffer, packet loss replacement

  • Standard messaging system interfaces—AMIS-A, VPIM, LDAP, IMAP
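
To make the traffic handling and BHCC items concrete, the sizing sketch below converts assumed user counts, call attempts, and hold times into the two numbers a planner checks against a vendor’s specifications. All inputs are illustrative:

```python
USERS = 500
ATTEMPTS_PER_USER = 6    # busy hour call attempts per user (assumed)
AVG_HOLD_SECONDS = 180   # average call duration (assumed)

bhca = USERS * ATTEMPTS_PER_USER          # busy hour call attempts
erlangs = bhca * AVG_HOLD_SECONDS / 3600  # offered traffic load

print(f"call server rating needed: at least {bhca} BHCC")         # 3000
print(f"gateway/conference load: about {erlangs:.0f} Erlangs")    # 150
```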

The first client/server IP-PBX systems shipped less than 5 years ago, and in that time their design and performance capabilities have changed significantly. It will take several more years until the product design stabilizes and its reliability is comparable to that of current circuit switched PBXs, which have been shipping for more than 25 years. Although a new generation of data communications systems seems to appear every year or two, new voice communications system platforms take many years to evolve.

Saturday

Telephony Gateways

IP telephones, including PC client softphones, communicate directly with the call telephony server over a customer LAN/WAN infrastructure. Unlike converged IP-PBX designs, proprietary port circuit cards housed in proprietary port carriers are not required for signaling between the IP desktop and the common control complex. Non-IP stations and trunk circuits, however, require telephony gateway interfaces to support server control signaling and voice communications transmission. Telephony gateways for analog telephones and other 2500-type compatible communications devices, such as facsimile terminals, may be provided through a variety of design methods:

  • Integrated call telephony server gateway interfaces

  • Desktop gateway modules: proprietary, third party

  • Gateway servers/interfaces: proprietary, third party

Several proprietary, closed call telephony servers have integrated gateway interfaces for PSTN digital T1/E1 trunk circuits. The gateway interfaces usually support ISDN BRI or PRI services over the T1/E1 trunk circuits. The Mitel MN 3100, 3Com NBX, and Siemens HiPath 5300 systems have integrated PSTN digital trunk gateway interfaces. For example, the 3Com NBX’s integrated analog line card connects up to four conventional (loop start) PSTN telephone lines, and the T1/PRI trunk card connects to a standard T1 circuit. The HiPath 5300 BRI gateway interface card supports four BRI ports (8 × 64-Kbps channels); the PRI gateway interface card includes a T1 carrier interface.

Mitel Networks uses a different approach to support non-IP peripherals on its MN 3300 ICP. The 3300 ICP includes an analog services unit (ASU) and a network services unit (NSU), and it also supports traditional Mitel SuperSet digital telephones through a link to a peripheral equipment (PE) cabinet. An ASU supports four analog trunks and 16 stations (including music on hold, paging, and power failure transfer); an NSU supports four T1 digital trunk interfaces. Up to four ASU and four NSU carriers are supported per controller carrier. What is unique about the system is that the call server also provides control signaling to an SX-2000 Light PE cabinet. Supporting the traditional PE cabinet protects a customer’s substantial investment in the installed base of proprietary Mitel SuperSet voice terminals.

Mitel intends the MN 3300 system to be a migration vehicle for its large installed base of SX-2000 system customers and allows customers to link existing PE cabinets to the new call telephony server through one of two options: a direct optical fiber cable connection or a T1 digital trunk interface (DTI). Customers who want a centrally located call telephony server and PE cabinet can use the optical fiber link; the DTI option can support remote PE cabinets. The 3300 ICP was the first client/server IP-PBX design to support common equipment originally designed for a circuit switched PBX system. All communications traffic between digital telephones is handled internally by the PE cabinet’s integrated circuit switched TDM backplane; calls between PE endpoints and other endpoints (IP telephones and ASU and NSU ports) are handled across the integrated controller gateway channels.

Desktop gateway modules may be proprietary or industry-standard H.323 equipment. The most common desktop gateways support 2500-type communications devices, such as analog DTMF telephones and facsimile terminals. The desktop communications device links directly to the gateway module, which converts analog signals to IP format for control and communications signaling. For example, 3Com NBX analog device support is available as a single-port stand-alone unit and a four-port chassis-based card. The single-port analog terminal adapter (ATA) unit also includes an additional Ethernet port that allows an analog device and an Ethernet device to share the same Ethernet LAN cabling. The multiple-port NBX analog terminal card features four analog (FXS) ports. The units connect to a wide variety of industry-standard analog devices and fax machines and provide support for door phones, paging systems, and other applications that may require analog connectivity.

The gateways may be proprietary to an IP-PBX system, like the Siemens HiPath AP 1100 (available in one- and four-port interface models), or third-party products available from a large list of suppliers. For example, Ericsson markets a downsized version of its Webswitch IP-PBX for use as an H.323 gateway module. 3Com, a major enterprise data communications equipment supplier, is another IP-PBX supplier marketing desktop gateway modules, including those that support H.323 and SIP standards.

Another type of desktop gateway module is an add-on adapter that converts a proprietary digital PBX telephone into an IP-compatible voice terminal. A few client/server IP-PBX manufacturers, including Siemens and Nortel Networks, offer this as an option to upgrade installed digital telephones originally designed for use behind their circuit switched PBXs. The same adapters can support IP desktops behind the manufacturer’s converged IP-PBX system solutions.

Gateway servers and interface modules/boards that are not fully integrated into the call telephony server, or used as desktop devices, are either proprietary to a manufacturer’s IP-PBX or conform to industry standards, such as H.323 or MGCP, for use as OEM solutions. One example of a proprietary solution is the Mitel Networks MN 3300 ICP gateway carrier, which supports traditional analog trunks (loop start) and digital trunks (DASS II, DPNSS, QSig, Euro-ISDN, and BRI) for connecting to the PSTN and for connecting multiple sites or systems together. This allows multiple 3300 ICPs to be clustered or networked across multiple sites over IP or traditional TDM infrastructures to support up to 40,000 users. The MN 3300’s call telephony server carrier supports the trunk gateway interface carrier.

The Cisco Systems IP Telephony system, when originally designed as the Selsius system, used desktop modules to support non-IP communications devices and trunk circuits. The redesigned product supports analog station, analog trunk, and digital trunk interfaces with proprietary circuit boards housed in Cisco Catalyst 6000 Ethernet switch carriers. Three different modules are used for analog connections: a 24-port analog station FXS module (H.323 or MGCP), an analog trunk circuit FXO module (H.323 or MGCP), and an analog E&M tie trunk module (H.323 only). The FXS module supports fax relay, which enables compressed fax transmission over the IP WAN. An alternative to the FXS module is the stand-alone Cisco VG248 analog gateway, which supports 48 fully featured analog phone lines as extensions to the Cisco CallManager system; it is housed in a compact 19-inch rack-mount chassis, and this high-density gateway can serve analog phones, fax machines, modems, and speakerphones. Digital PSTN trunk interfaces are supported by a limited-capacity stand-alone T1 adapter module or by a Catalyst 6000 T1 and services module that provides eight T1 ports (192 DS0 voice channels). The module supports voice trunk protocols such as ISDN Primary Rate Interface (PRI) and channel-associated signaling (CAS). The module’s DSP resources can also be programmed for call conference bridge services and voice codec transcoding applications instead of digital trunk gateway interfaces.

The Nortel Succession CSE 1000 media gateway (MG) module supports a variety of non-IP interfaces, such as analog station, analog trunk, and digital trunk. Each MG module has three IPE card slots and can support an expansion module with four additional slots. The first Succession CSE 1000 release is limited to four MGs (28 card slots maximum).
