Determining Trunk Circuit Requirements

There are several trunk traffic engineering steps to calculate the number of trunk circuits required to satisfy inbound/outbound traffic loads at an acceptable GoS level:

  1. Collect and analyze existing trunk traffic data

  2. Categorize trunk traffic by groups

  3. Determine the number of trunk circuits required to meet traffic loads

  4. Determine the proper mix of trunk circuit types

Trunk traffic data can be obtained from trunk traffic reports based on CDR data collected by the PBX system. The CDR data is input into a call accounting software program, available from the system manufacturer or third-party software vendor, that generates a variety of billing, internal switch network traffic, and trunk traffic reports. The CDR data does not provide information on calls that were blocked because all trunks were busy. This information is usually available from facility management reports based on optional PBX software programs. Blocked call data is used for determining GoS levels.

Historical trunk traffic data is used to forecast future trunk traffic loads to determine incremental trunk circuit requirements for the following scenarios:

  1. Station user growth or contraction

  2. Anticipated changing traffic patterns

  3. New applications, e.g., centralized VMS

Trunk traffic should be segmented across different types of trunk groups because it is more cost-effective to traffic engineer smaller groups of trunk circuits with a common purpose. The first step is to segment trunk traffic into inbound and outbound directions. There are a variety of trunk group types for each traffic flow direction. For example, inbound traffic may be segmented across local telephone carrier CO and DID trunk circuits, dedicated “800” trunk circuits, FX circuits, ISDN PRI trunk circuits, and so on. Outbound trunk circuits are easily segmented into local telephone carrier CO trunk circuits, multiple interexchange carrier trunk circuits used primarily for long distance voice calls, data service trunk circuits, video service trunk circuits, and so on. There is also a variety of private line trunk circuits for PBX networking applications, OPX and other trunk circuits used to support remote station users, and trunk circuits connecting to IVRs and other peripheral systems. Each trunk circuit category can also include several subtrunk groups.

To determine the number of trunk circuits per trunk group type, the traffic load must be calculated. If CDR data reports provide trunk traffic measurements in terms of seconds or minutes, the results must be expressed in terms of hours to determine how many Erlangs of traffic are carried over the trunk circuits before the trunk traffic tables can be used.

When using the CDR reports to calculate Erlang traffic ratings, it is important to account for call time not tracked by the CDR feature. In addition to the length of a conversation over a trunk circuit, trunk circuit holding time exclusive of talk time includes call set-up (dialing and ringing) time, call termination time, and the time trunk circuits are not available to other callers during busy signal calls and other noncompleted calls (abandons, misdials) that are not recorded and stored by the PBX system’s CDR feature. The missing CDR data time is usually calculated by adding 10 to 15 percent to the length of an average call. For example, if the total number of trunk calls is 100 and the total trunk talk time is 300 minutes, the average call length is 3 minutes. With a 10 percent missing holding time factor, an additional 18 seconds per call (3 minutes, or 180 seconds × 0.1) should be added to the 3-minute average talk time per trunk call. The 10 to 15 percent fudge factor is important and necessary to correctly determine trunk circuit requirements to maintain acceptable GoS levels.
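The adjustment described above can be sketched in a few lines; a minimal illustration of the holding-time correction, using the example figures from the text (the function name is mine):

```python
def adjusted_erlangs(total_calls, total_talk_minutes, overhead_factor=0.10):
    """Return the Erlang load after adding unrecorded holding time.

    overhead_factor covers call set-up, termination, and noncompleted-call
    time that the CDR feature does not record (typically 0.10 to 0.15).
    """
    avg_call_minutes = total_talk_minutes / total_calls
    adjusted_avg = avg_call_minutes * (1 + overhead_factor)
    total_holding_minutes = adjusted_avg * total_calls
    return total_holding_minutes / 60.0  # Erlangs = hours of traffic per hour

# The text's example: 100 calls, 300 talk minutes, 10 percent factor.
# 3-minute average becomes 3.3 minutes; 330 holding minutes = 5.5 Erlangs.
load = adjusted_erlangs(100, 300, 0.10)
print(round(load, 2))  # 5.5
```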


Trunk Traffic Engineering

The number of PBX trunk circuits required to support expected inbound and outbound traffic loads is typically calculated using trunk traffic tables. The most popular trunk traffic table used for telephone system traffic engineering is based on the Erlang B queuing model. The Erlang B model assumes the following:

  1. The number of traffic sources is large

  2. The probability of blocking is small

  3. Call attempts are random

  4. Call holding times are exponential

  5. Blocked calls are cleared from the system

The last assumption is very important because it says that there is no second call attempt if the first attempt receives a busy signal. The Poisson queuing model used for PBX station traffic engineering assumes that blocked call attempts are held in the system; that is, subsequent call attempts are made. Another popular telephone system queuing model is Erlang C, based on the assumption that blocked call attempts are held in a delay queue until a trunk is available. The Erlang C model is commonly used in ACD systems to calculate required agent positions, with the agent position rather than a trunk circuit as the served resource: inbound calls not immediately connected to an agent position are held in queue until an agent is available.

Another use of the Erlang C model is to calculate the required number of attendant positions to handle incoming trunk calls. Calls not presented to the attendant position are queued by the PBX system. Based on incoming traffic conditions, the average 250-station PBX system may require one, two, or three attendant positions to adequately answer and forward calls with acceptable queue delay times. As the PBX system size increases, the number of attendant positions is likely to increase, but the number of incremental attendant positions does not double when station size doubles because larger attendant position groups are more efficient than smaller groups based on traffic queuing theory conditions.
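As a hedged sketch (not any vendor's actual staffing tool), the Erlang C delay probability behind agent and attendant sizing can be computed directly; the 5 Erlang load and position counts below are illustrative assumptions:

```python
from math import factorial

def erlang_c_wait_probability(erlangs, positions):
    """Probability an arriving call is queued (blocked calls held, not cleared)."""
    if erlangs >= positions:
        return 1.0  # queue grows without bound when load meets or exceeds servers
    a_n = erlangs ** positions / factorial(positions)
    held = a_n * positions / (positions - erlangs)
    served = sum(erlangs ** k / factorial(k) for k in range(positions))
    return held / (served + held)

# Illustrative only: a 5 Erlang attendant load handled by 6, 7, or 8 positions.
# Adding positions sharply reduces the fraction of callers who must wait.
for n in (6, 7, 8):
    print(n, round(erlang_c_wait_probability(5.0, n), 3))
```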

Erlang B is also a very useful queuing model for analyzing alternate routing on trunk groups within a PBX, where there are usually multiple available trunk circuits across multiple trunk groups. A call that is blocked at one trunk circuit can potentially overflow to another circuit or another trunk group. Erlang B is also used for analyzing traffic conditions across multiswitch networks, where there are many potential call routes per connection.

The Erlang B trunk traffic table consists of three data parameters: probability of blocking, number of trunk circuits, and Erlangs. An Erlang is a unit of measurement for trunk traffic. The maximum traffic load a trunk circuit can handle in 1 hour is equal to 1 Erlang. An Erlang is a dimensionless unit of measure. Knowing any two of the three data parameters allows table look-up of the third data parameter. For customers with existing PBX systems, it is easy to determine the current trunk traffic handling capacities per trunk group because the GoS is a given, as is the number of trunk circuits per trunk group.
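The table look-up described above can be reproduced with the standard Erlang B recursion; a sketch where the function names and the 10 Erlang example load are mine, not figures from a published table:

```python
def erlang_b(erlangs, trunks):
    """Blocking probability with blocked calls cleared (Erlang B recursion)."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = erlangs * b / (n + erlangs * b)
    return b

def trunks_required(erlangs, gos=0.01):
    """Smallest trunk group that meets the target GoS, e.g. P(0.01)."""
    n = 1
    while erlang_b(erlangs, n) > gos:
        n += 1
    return n

# Knowing two parameters (load and GoS) yields the third (trunk count).
print(trunks_required(10.0, 0.01))
```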


Defining PBX Traffic: CCS Rating

PBX traffic load is generally measured in 100 call-second units known as Centum Call Seconds (CCS); centum, from Latin, signifies 100. The maximum traffic load per station user during the Busy Hour is equal to 36 CCS, which is a shorthand method of stating 3,600 seconds. Thirty-six CCS is equivalent to 60 minutes, or 1 hour of traffic load. A station port (telephone, facsimile terminal, modem, etc.) that “talks,” or connects, to the switch network for 10 minutes during 1 hour has a traffic rating of 6 CCS (10 minutes = 600 seconds = 6 CCS). Combining the station user traffic load with an acceptable GoS level results in the following station user traffic requirement: 6 CCS, P(0.01). This notation signifies that a station user with an expected 6 CCS traffic load is willing to accept a 1 percent probability of call blocking when attempting to use the switch network. A 2 percent blocking probability would be expressed as 6 CCS P(0.02); a 0.1 percent blocking probability would be expressed as 6 CCS P(0.001).
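The CCS arithmetic above reduces to two conversions; a minimal sketch (helper names are mine):

```python
def seconds_to_ccs(call_seconds):
    """One CCS is 100 call-seconds of connect time."""
    return call_seconds / 100.0

def ccs_to_erlangs(ccs):
    """36 CCS is 3,600 call-seconds: one fully occupied hour, i.e. 1 Erlang."""
    return ccs / 36.0

# The text's example: 10 minutes of connect time during one hour.
ccs = seconds_to_ccs(10 * 60)
print(ccs)                           # 6.0 CCS
print(ccs_to_erlangs(36))            # 1.0 Erlang
```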

A traffic rating of 36 CCS P(0.01) is used for station users who require virtually nonblocking switch network access. A 36 CCS traffic load is a worst-case situation because it is the maximum station user traffic load during the Busy Hour. The usual station user traffic rating requirement is about 6 to 9 CCS, P(0.01). Although a station user might be on a call that lasts for 1 hour or more—a 36 CCS traffic load—there is a very small probability that all station users are simultaneously engaged in calls of at least 1 hour during the same Busy Hour. It is far more likely that an individual station user will have a 0 rather than a 36 CCS traffic load during Busy Hour because that person may be in a meeting, traveling, on vacation, or too busy with paperwork to take or place telephone calls. Even if a station user makes several calls per hour, it is possible that each will be of short duration because many calls today are answered by a VMS with limited available time to leave a message. Most business-to-business calls today are connections between a station user and a VMS; each of these calls typically lasts for less than 2 minutes, and many last for less than 1 minute. An increasing number of callers no longer leave messages; they disconnect and send an e-mail.

The total PBX station traffic load during Busy Hour is simply the sum of the individual station user traffic requirements. If ten station users are connected to the network for 10 minutes during the same hour, the total traffic load on the switch network would be 60 CCS (10 station users × 10 minutes/station user, or 10 × 6 CCS). If the probability of blocking level was 1 percent, the traffic requirement would be noted as 60 CCS P(0.01). The total PBX station traffic load is rarely calculated, however, unless the switch network design is based on a single TDM bus or switch matrix. PBX traffic loads are better calculated for groups of station users sharing access to the same switch network element, assuming station users with similar traffic requirements are grouped together.

For switch network traffic engineering calculations, most customers use an average traffic load estimate to represent all station users instead of segmenting the station user population into like traffic load requirements. It is recommended that a different approach be used to traffic engineer a PBX system. Station users should be segmented into different traffic rating groups to ensure that switch network resources are optimized for each category of station user. In every PBX system there are some station users with very high traffic rating requirements, such as attendant console operators. Other station port types with very high traffic rating requirements include ACD call center agents, group answering positions, voice mail ports, and IVR ports. Each station port typically will have a 24 CCS traffic load, although customers usually prefer these ports to have nonblocking [36 CCS P(0.01)] switch network access and state so in their system requirements. Averaging the high traffic, moderate traffic, and low traffic station ports will result in a traffic engineered system that blocks an unacceptable percentage of calls for attendant positions because a rarely used telephone in the basement is using switch network resources instead of more important user stations.

As an example, a Nortel Networks Meridian 1 Option 81C, based on a 120 talk slot Superloop local TDM bus design and a port carrier shelf that can typically support 384 ports, should be configured as follows to satisfactorily support the following station user traffic groups:

  1. A maximum of 120 very high traffic station users, 36 CCS, P(0.01): stations configured on a single port carrier shelf supported by a dedicated Superloop bus

  2. About 250 moderate traffic station users, 9 CCS, P(0.01): stations configured on a single port carrier shelf supported by a dedicated Superloop bus

  3. About 500 low traffic station users, 4 CCS, P(0.01): configured across two port carrier shelves supported by a dedicated Superloop bus

A single Superloop bus can adequately support each traffic group in this example, although the number of station users differs across the group categories. If the maximum number of potential traffic sources, or station users, is no larger than 120, then the Superloop bus is rated at 3,600 CCS, P(0.01). This is the maximum traffic handling capacity of a Superloop bus. The Superloop bus is rated at slightly less than 3,000 CCS, P(0.01), if the port carrier shelf is configured for about 256 station users, according to the original Meridian 1 documentation guide. If the number of potential traffic sources increases, then the traffic handling capacity decreases for a given probability of blocking level. The exact traffic rating for a specific number of station users is available with the use of a computer-based Meridian 1 configurator. Figure 1 illustrates CCS traffic handling capabilities of a Meridian 1 Superloop with 120 available talk slots. Customers with very high traffic requirements can configure a single Meridian 1 IPE shelf with up to four Superloops. Each Superloop is dedicated to four port card slots. Figure 2 illustrates how a port carrier shelf can be segmented.

Figure 1: Meridian 1 Superloop traffic handling capability.

Figure 2: Meridian 1 IPE module Superloop segmentation.

Traffic handling capacities for any PBX system local TDM bus are comparable in concept to Meridian 1 Superloop bus ratings:

  1. If the number of potential traffic sources is smaller than or equal to the number of available talk slots, then station traffic can be rated at 36 CCS (nonblocking switch network access).

  2. If the number of potential traffic sources is larger than the number of available talk slots, then the station traffic rating is less than 36 CCS. The traffic rating will decrease as the number of potential traffic sources increases.

The traffic handling capacity of the local TDM bus declines according to an exponential equation used to calculate probability of blocking levels. Most PBX designers assume a Poisson arrival pattern of calls and an exponential distribution of call holding times. The exponential distribution is based on the assumption that a few calls are very short in duration, many calls are a few minutes (1 or 2 minutes) in duration, and calls decrease exponentially in number as call duration increases, with a very small number of calls longer than 10 minutes. The actual traffic engineering equations (based on queuing models), call distribution arrival characteristics, and station user call attempt characteristics determining the local TDM bus traffic rating at maximum port capacity (if switch network access is not nonblocking) are known only to the PBX manufacturer.
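The exponential holding-time assumption can be illustrated with the survival function exp(-t/m): with mean call duration m, the fraction of calls longer than t falls off exponentially. A sketch assuming a 2-minute mean duration (an assumption for illustration, not a figure from the text):

```python
import math

def fraction_longer_than(t_minutes, mean_minutes=2.0):
    """Fraction of calls lasting longer than t under an exponential distribution."""
    return math.exp(-t_minutes / mean_minutes)

# With a 2-minute mean: most calls are short, very few exceed 10 minutes,
# consistent with the distribution shape described in the text.
for t in (1, 2, 5, 10):
    print(t, round(fraction_longer_than(t), 3))
```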

Regardless of the actual traffic engineering equation used by the manufacturer, the calculated traffic rating will be based on three inputs:

  1. Potential traffic sources

  2. Available talk slots

  3. Probability of blocking

A basic assumption used for most traffic analysis studies is a random (even) distribution of call arrivals during the Busy Hour. Traffic analysis studies also must make an assumption about call attempts that are blocked:

  1. Station users who encounter an internal busy signal on their first call attempt continue making call attempts until they are successful.

  2. Station users who encounter an internal busy signal on their first call attempt will not make other call attempts during a certain period.

In reality, station users who receive a busy signal will immediately redial. The assumption that a station user will not make another call attempt, if the first attempt is unsuccessful, is not realistic. The redial scenario is the assumption used by Poisson queuing model studies. The Poisson queuing model assumes that blocked calls are held in the system and that additional call attempts will be made until the caller is successful. For this reason, Poisson queuing model equations are commonly used by PBX traffic engineers to calculate internal switch network traffic handling capacities.
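A sketch of the blocked-calls-held idea: the Poisson model rates the chance that offered load produces at least N simultaneous calls, so a call attempt finds all talk slots busy. This is an illustrative formulation, not a manufacturer's proprietary equation (which, as noted above, is not published):

```python
from math import exp, factorial

def poisson_blocking(erlangs, talk_slots):
    """P(talk_slots or more simultaneous calls) for offered load in Erlangs."""
    # 1 - P(X <= talk_slots - 1) for a Poisson-distributed call count X.
    served = sum(erlangs ** k / factorial(k) for k in range(talk_slots))
    return 1.0 - exp(-erlangs) * served

# Illustrative: 10 Erlangs offered to slot groups of increasing size.
# Blocking probability drops as talk slots are added.
for slots in (12, 15, 18, 21):
    print(slots, round(poisson_blocking(10.0, slots), 4))
```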

PBX systems with complex switch network designs (multiple local TDM buses, multitier Highway buses, center stage switch complexes) are far more difficult to analyze and traffic engineer than small PBX systems with a single local TDM bus design. Large, complex PBX switch network designs can present a traffic engineer with many different switch connection scenarios that must be analyzed. Switch connections across local TDM buses require analysis of at least two switch network elements per traffic analysis calculation. To simplify traffic engineering studies, it is common system configuration design practice to minimize switch connections between different TDM buses by analyzing call traffic patterns among station users and providing station users access to trunk circuits on their local TDM buses. Centralizing trunk circuit connections may facilitate hardware maintenance and service, but it degrades system traffic handling capacity if more talk slots are used per trunk call.


Legacy PBX Switch Network Design

The most fundamental function of a PBX system is to support switched connections between peripheral endpoints. Station users are accustomed to picking up their handset, hearing the dial tone, dialing a telephone number, and being connected to the called party. The possibility always exists that the station user receives a busy signal when the dialing process is completed. The most probable reason for a busy signal is that the called party is off-hook and engaged in another call. Infrequently, all telephone company trunk circuits are busy, and the station user hears an announcement to call again at a later time. A busy signal also may be received when all PBX trunk circuits are in use or internal switch network resources are not available. The PBX station user cannot control the availability of the called party or the availability of PSTN trunking facilities but can minimize the probability of busy signals due to blocked access to the internal switch network or local trunk circuits, if the PBX system is properly configured and engineered to meet the expected traffic demands of the customer. PBX traffic analysis and engineering tools are used to achieve acceptable customer service standards for internal switched connections and off-premises trunk calls.

Nonblocking/Blocking PBX Systems

PBX systems can be classified into two switch network design categories based on traffic engineering requirements: nonblocking and blocking. A PBX system is said to be nonblocking when no switch network traffic engineering is required because there will always be sufficient switch network resources (local TDM bus talk slots, Highway bus communications channels, switch network interfaces, and center stage switch connections) to satisfy worst-case customer traffic demands at maximum system port capacity. Worst-case traffic demand occurs when all equipped ports are simultaneously active; that is, transmitting and receiving across the internal switch network. Although this would be a very unlikely customer situation, because PBXs are never configured at maximum port capacity and the probability is almost infinitesimal that all ports require simultaneous access to the internal switch network for communications applications, the assumption is used to define a nonblocking system.

Station users may have nonblocking access to the internal switch network but receive a busy signal when attempting to place an off-premises trunk call. The PBX system is still classified as a nonblocking PBX system because the term does not apply to access to trunk circuits or other external peripherals that may have limited port capacity, e.g., a VMS. Trunk traffic engineering is an independent discipline that will be discussed later in this chapter.

A PBX system is said to be blocking if traffic engineering is required, at maximum port capacity, to satisfy worst-case traffic demand situations. For example, a small/intermediate line size PBX system based on a switch network design consisting of one 16-Mbps (256 talk slots) TDM bus can appear to be nonblocking to customers with requirements of 100 station users and 30 trunk circuits because the total number of potentially active ports is smaller than the total number of available talk slots. If the total port requirements of the customer were to grow beyond 256 ports, e.g., 240 stations and 60 trunk circuits, some ports might be denied access (blocked) to the switch network because the number of active ports may be larger than the number of available talk slots. A PBX system with a blocking switch network design can operationally function as a nonblocking system if two conditions are satisfied:

  1. The system is traffic engineered

  2. There are sufficient switch network resources to satisfy actual customer traffic requirements

The typical PBX system is usually installed and configured with a number of equipped ports significantly less than the maximum port capacity. The switch network resources of a blocking PBX system are usually sufficient to provide nonblocking access to the equipped system ports, but as customer port requirements approach maximum port capacity, the probability of blocking increases. The probability of blocked access to the switch network is based on the potential number of active ports (communications sources) and the switch network resources required to connect a call.

The most important switch network resource determining the probability of a blocked call placed by a station port is the number of available talk slots on the local TDM bus. The local TDM bus is the most likely switch network element to have insufficient resources (talk slots) because most (if not all) PBX systems are based on switch network designs with sufficient Highway bus traffic capacity to support access to the center stage switch or connect local TDM buses. Most PBXs also are designed with a nonblocking center stage switching system complex, even if local TDM bus traffic capacity is limited. The Definity G3r is a good example of a PBX system that requires traffic engineered local TDM buses, although the Highway bus/center stage switch complex used to link the local TDM buses supports nonblocked access across the internal switch network. If customer traffic requirements are light to moderate, a G3r local TDM bus (483 talk slots) can adequately support about 800 user stations. A Nortel Networks Meridian 1 Option 81C is typically configured with about 200 stations supported by a single Superloop (120 talk slots). Although the number of equipped ports is larger than the number of available talk slots, the two blocking PBX system designs are usually sufficient to support typical station user traffic demand. Customers with heavy traffic requirements would need to traffic engineer the Definity G3r/Meridian 1 Option 81C because the number of local TDM bus talk slots is smaller than the maximum number of ports, and the number of blocked calls could increase to an unacceptable level.

PBX Grade of Service (GoS)

PBX systems with blocking switch network designs are traffic engineered by the vendor when they are configured and installed based on customer traffic requirements. Customer traffic requirements are based on two parameters: required GoS level and expected traffic load. PBX GoS may be simply defined as the acceptable percentage of calls during a peak calling period that must be completed (connected) by the PBX switch network. Calls that are not completed, because the PBX switch network cannot provide the connection between the originating and destination endpoints, are known as blocked calls. The traditional method of stating a customer GoS level for a PBX system is to use the acceptable level of blocked calls instead of completed calls. The PBX’s GoS level is stated with the symbol P, representing a Poisson distribution, although it is more commonly referred to as probability. For example, P(0.01) represents the probability that one call in 100 will be blocked. This is the same as saying that 99 percent of calls will be completed. P(0.01) is the most common GoS level used for PBX traffic analysis and engineering, although customers with more stringent traffic requirements may require a GoS level of one blocked call in 1,000, or P(0.001).

The GoS level is applied during the peak call period, which is typically 1 hour. In traffic engineering analysis, this peak call period is known as the Busy Hour. The Busy Hour for most PBX customers usually occurs during the mid-morning or mid-afternoon hours, although the exact time of day will differ from customer to customer. The GoS at Busy Hour, a worst-case traffic situation, is a unit of measurement indicating the probability that a call will be blocked during peak traffic demand. There are numerous methods used to find the Busy Hour. A common method is to take the 10 busiest traffic days of the year, sum the traffic on an hourly time basis, and then derive the average traffic per hour.

If a customer does not have access to traffic data over a long period, there is a simple method to estimate Busy Hour traffic load based on daily traffic load. Busy Hour traffic for a typical 8-hour business operation is usually 15 to 17 percent of the total daily traffic. Traffic usually builds up from the early morning to mid-morning, declines as lunch hour approaches, builds up again after lunch hour to mid-afternoon, and then declines toward the end of the business day. Traffic during off-business hours is usually very light, but a 24/7 business is likely to have very different traffic patterns from a business keeping traditional 9 to 5 hours.
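The rule of thumb above translates directly; a sketch where the 15 to 17 percent range comes from the text and the daily load figure is my own illustrative assumption:

```python
def busy_hour_estimate(daily_ccs, fraction=0.16):
    """Estimate Busy Hour traffic from daily traffic for an 8-hour operation.

    fraction defaults to an assumed mid-range value between the text's
    0.15 and 0.17 bounds.
    """
    return daily_ccs * fraction

# Illustrative: 5,000 CCS of total daily traffic.
low = busy_hour_estimate(5000, 0.15)
high = busy_hour_estimate(5000, 0.17)
print(low, high)  # 750.0 850.0000000000001 -> roughly 750 to 850 CCS
```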

The Busy Hour analysis must take into account seasonal variations in customer PBX traffic demand, such as the pre-Christmas holiday period. Although the average hourly PBX traffic load may be significantly less than the Busy Hour, and early Monday morning traffic is usually less than Wednesday mid-afternoon traffic, the worst-case situation is used for traffic engineering purposes. PBX switch network resources cannot be increased and decreased for fluctuations in traffic during the day, week, month, or year.


Time Slot Availability: Blocking or Nonblocking | PBX Switch Network Issues

The terms blocking and nonblocking have been used previously. In PBX terminology, blocking is defined as being denied access to any segment of the internal switch network because there is no available talk slot or communications channel to complete a call connection. Blocked calls are characterized by busy signals. Nonblocking switch network access means that an attempt to access the internal switch network will always be successful because there is a sufficient number of talk slots or communications channels to support simultaneous call attempts by every configured station user in the system.

Blocked calls due to unavailable trunk carrier circuits are not included as part of this discussion because the issue being addressed is blocking and nonblocking access to, and connections across, the internal PBX switch network.

There are several connection points in the overall PBX system and switch network design that can cause a call attempt to be blocked (Figure 1):

Figure 1: Traffic handling red flags: blocking points.

  1. Port circuit card

  2. Local TDM bus

  3. Highway bus

  4. Switch network interfaces

  5. Center stage switch

Although it may seem strange that a call can be blocked at the port circuit card level, the number of physical communications devices supported by a port card can be greater than the number of communications channels supported by the desktop. For example, an ISDN BRI port circuit card that conforms to passive bus standards can support up to eight BRI telephones, but only two can be active simultaneously, because the BRI desktop communications link is limited to two bearer channels.

Scenarios also exist where a digital station card can support more desktop communications devices than available desktop communications channels. For example, a Siemens optiSet digital telephone equipped with two adapter modules can be connected concurrently to a second desktop optiSet digital telephone and a desktop analog telephone, with all three communications devices being supported by a single communications link to the Hicom 300H PBX and sharing the wall interface jack, inside telephony wiring, and port circuit card interface. Like the BRI port interface circuit, the optiSet interface can support only two active bearer communications channels, which means that one of the three desktop devices can be blocked from accessing the system.

Several recently introduced IP station cards supporting LAN-connected desktop telephones may also block call attempts because the number of physical telephones supported by the card can be greater than the number of local TDM bus connections supported by the card. For example, the Avaya Definity Media Processing Board for IP Telephony can support 96 IP telephones but can support between 32 and 64 connections to the local TDM bus based on the audio coder standard used for IP:TDM/PCM protocol conversion.

The local TDM bus is usually the most likely switch network element to be the cause of a blocked call. Although a greater number of current PBX systems are designed with a nonblocking switch network architecture, a good percentage of installed and new systems must be traffic engineered because the number of port circuit interfaces is greater than the maximum number of time slots on the local TDM bus for connecting the call. For example, the local 32-Mbps TDM bus supporting an Avaya Definity port network cabinet can support a maximum of 483 active ports, although the cabinet can physically support several times this number of ports. Avaya typically recommends installing 800 stations per port network cabinet for customers with moderate traffic requirements. It is unlikely that all 800 station users will attempt to place a call at the same time, but if they do, only 483 time slots are available, and quite a few station users will hear a busy signal when they make their call attempt. Similarly, a Nortel Meridian 1 Superloop can support 120 active ports at full TDM bus utilization but is typically configured to support at least 200 station users. If properly traffic engineered, based on station user traffic requirements, call blocking should be minimal, but it can occur.
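The talk-slot arithmetic in these examples can be expressed as a concentration ratio (configured ports to available talk slots), which shows how heavily a bus is oversubscribed; a sketch using the Definity and Meridian 1 figures from the text:

```python
def concentration(ports, talk_slots):
    """Ports per talk slot; a ratio of 1.0 or less means nonblocking access."""
    return ports / talk_slots

# Definity port network: 800 stations on 483 talk slots.
print(round(concentration(800, 483), 2))  # 1.66
# Meridian 1 Superloop: 200 stations on 120 talk slots.
print(round(concentration(200, 120), 2))  # 1.67
```

Both designs concentrate roughly five ports onto every three talk slots, which is why they must be traffic engineered rather than treated as nonblocking.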

Most Highway buses provide nonblocking switch connections between local TDM buses and have sufficient communications channels for nonblocking access to the center stage switch complex. However, the bandwidth of the Highway bus may be less than the total bandwidth of the local TDM buses it supports, and a call may be blocked under heavy traffic conditions. Almost all current switch network interfaces and center stage switch complexes are also designed for nonblocking access and transmission, but exceptions do exist. For example, until Nortel recently upgraded the Meridian 1 Option 81C Sub Group Assembly module (a center stage switch complex) with a fiber optic ring design, it was possible, if not highly probable, that calls between switch network groups within the center stage could be blocked.


Time Slot Access and Segmentation | PBX Switch Network Issues

Some switch network designs are based on universal port access to the local switching network; that is, all ports in a carrier shelf or cabinet can use the full bandwidth capacity of the local TDM bus. For example, any port interface circuit housed in a Definity PPN cabinet can be assigned any talk slot on the local 32-Mbps TDM bus regardless of its port circuit card slot location. A five-carrier shelf PPN cabinet has 100 port card slots, and the local TDM bus supports every port interface circuit card in the cabinet. The Definity TDM bus is said to be universally accessible to all PPN port circuit terminations. The 512 time slot (483 talk slots) TDM bus supports the traffic needs of hundreds of system ports in the cabinet.

A switch network design is said to be segmented if the local switching network is based on segmented TDM buses supporting a single port carrier shelf or cabinet. For example, the Siemens Hicom 300H LTU carrier connects to the center stage switch complex via a 32-Mbps Highway bus. The local switching network consists of two 16-Mbps segmented local TDM buses, with each local TDM bus supporting different port card slots. Although the total TDM bandwidth at the LTU carrier shelf level is 32 Mbps, the 512 time slots are divided equally between the two halves of the shelf (eight port card slots per half). If the segmented TDM bus supporting port card slots 1 to 8 fails, available time slots on the second, operational TDM bus are not accessible to port circuit interfaces on the cards housed in slots 1 to 8. The LTU carrier shelf is said to be based on a segmented TDM bus design. This is the downside of a segmented TDM bus design when compared with a universally accessible design. The upside is that the Siemens system can be traffic engineered to a greater degree: the local TDM bus supports only eight port card slots, a fraction of the number the Definity local TDM bus is required to support (Figure 1).

Figure 1: Segmented bus design.

The segmented TDM bus design of the F9600 was described earlier. Each port carrier shelf is supported by a 16-Mbps Highway bus that segments into eight 2-Mbps local TDM buses, with each bus supporting two port card slots. Minimizing the number of port card slots supported by a TDM bus is not always a good design objective because there may be less flexibility when configuring the port circuit cards. The F9600 backplane used to access the local TDM bus can support only 32 connections, which is also the maximum number of total port circuit terminations allowed for the two adjacent port cards. If two 16-port circuit cards are installed, there is no problem, but problems may occur when a higher-density digital trunk card is installed. A 24-port T1-carrier interface card will limit the flexibility in configuring the adjacent port card slot that shares the 32 connections to the 2-Mbps local TDM bus (32 time/talk slots). Only an eight-port card can be housed in the second port card slot if the configuration rules are followed. If a 16-port card were installed with only eight telephones connected, thereby limiting the number of active ports to 32 (24 + 8), the system configuration guidelines would still prohibit the installation because card port capacity, not connected telephones, is counted. The Fujitsu system is programmed for nonblocking switch network access only, and more ports than fixed TDM bus connections/time slots are not allowed. A TDM bus segmentation design can limit port configuration flexibility if the number of backplane connections per TDM bus is relatively small.
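The F9600 slot-pairing rule reduces to a one-line capacity check. The function below is a hypothetical sketch of that rule, not Fujitsu's actual configuration software; the point it captures is that card port capacity, not the number of telephones connected, counts against the 32 shared backplane connections.

```python
def pair_allowed(card_a_ports: int, card_b_ports: int,
                 tdm_connections: int = 32) -> bool:
    """A two-slot pair is valid only if the combined port *capacity*
    of the two adjacent cards fits the shared TDM bus connections."""
    return card_a_ports + card_b_ports <= tdm_connections

print(pair_allowed(16, 16))  # True: two 16-port cards exactly fill the bus
print(pair_allowed(24, 8))   # True: 24-port T1 card plus an 8-port card
print(pair_allowed(24, 16))  # False, even if only 8 phones are connected
```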


Switch Network Redundancy | PBX Switch Network Issues

Switch network redundancy is a design criterion that minimizes switch network downtime for one station user, a few station users, or all station users in the system. The term redundancy is often confused with the term duplication. A switch network design that incorporates duplicated elements is said to be redundant, but a redundant switch network does not necessarily have duplicated elements; other mechanisms might prevent downtime for some or all station users. Duplication is the highest form of redundancy, but it is not the only type of redundancy, particularly in PBX switch network architectures, as we will shortly see.

If a customer wants a redundant switch network design that is based on duplication of critical design elements, the checklist of duplicated elements may include any or all of the following:

  1. Center stage switch complex

  2. Local TDM buses

  3. Highway buses

  4. Switch network interface (including embedded TSI)

  5. Intercabinet cabling

Duplication of the center stage switch complex is a vital redundant switch network requirement in a centralized design topology because all calls are connected through this switch network element. Center stage switch complex errors or failures affect every call in the PBX system. Center stage switch problems may be slightly less important in a dispersed design topology, but it would still be highly desirable to duplicate this switch network element because it is needed to connect all calls between local switch networks. A few PBX systems have a fully duplicated center stage switch complex as a standard design feature, such as the Nortel Networks SL-100, a modified version of the supplier's DMS-100 central office switching system. More commonly, the duplicated center stage switch complex is available as an option, although some intermediate/large PBX systems do not offer it as a standard or optional design element. The Siemens Hicom 300H is available in two models: the large line size Model 80 has an optional duplicated center stage switch complex, and the smaller Model 30 does not offer it as standard or optional.

Loss of the local TDM bus will negatively affect the communications capabilities of all ports to which it connects. Redundancy of the local TDM bus can vary between different PBX systems based on the definition of redundancy. The Fujitsu F9600 XL has a fully duplicated local switching network design, including duplication of the local TDM buses. The Avaya Definity PBXs have a redundant local TDM bus design: the local 32-Mbps TDM bus (512 time slots) supporting all of the communications needs within a Port Network cabinet is based on two independent TDM buses, each with a 16-Mbps bandwidth (256 time slots), but operating as a single TDM bus from the viewpoint of the port interface circuit cards. If one of the two 16-Mbps TDM buses fails, all system ports can still connect to the remaining TDM bus. The Siemens Hicom 300H offers a similar redundant design concept for its TDM bus architecture. Two 8-Mbps TDM buses support eight port interface circuit card slots (one half of an LTU carrier shelf), each operating independently and accessible by any of the eight port interface cards. In these Avaya and Siemens models, loss of one TDM bus will place a heavier traffic load on the remaining TDM bus and may increase the number of blocked call attempts due to the reduced number of available time slots. The major difference between the two designs, however, is that a Definity 32-Mbps TDM bus can support a five-carrier shelf cabinet with several hundred stations and associated trunk circuits, whereas the Siemens 16-Mbps TDM bus design supports only eight port card slots (nominally 192 ports). Failure of a Definity TDM bus segment will have greater traffic handling consequences than failure of a Siemens TDM bus segment. Siemens offers switch network redundancy at a more local level than does Avaya.
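The traffic consequence of losing one bus segment can be sketched with the Erlang B formula. The offered load below is an illustrative assumption, not a vendor figure; the point is that halving the time slots turns negligible blocking into measurable blocking at the same load.

```python
def erlang_b(servers: int, offered_erlangs: float) -> float:
    """Blocking probability via the standard Erlang B recurrence."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_erlangs * b / (n + offered_erlangs * b)
    return b

offered = 240.0  # illustrative cabinet load in Erlangs
full = erlang_b(512, offered)      # both 256-slot bus halves operating
degraded = erlang_b(256, offered)  # one 16-Mbps bus segment has failed
print(full, degraded)  # blocking rises sharply on the surviving bus
```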

Nortel Networks has claimed that the multiple Superloop design in its intermediate/large Meridian 1 models is a form of redundancy because loss of a single Superloop affects only a limited number of the ports in a cabinet stack. The term limited, however, can be misleading because a Superloop can support up to 32 port card slots, and each port card slot can support 24 digital telephones. If strategic system ports, such as attendant consoles or trunk circuits, are affected by loss of a Superloop, the redundancy level of the design might not be acceptable.

Highway buses may be fully duplicated, or they may be designed so that loss of one TDM bus segment comprising the Highway bus does not affect the remaining bus segments (although traffic handling capacity will be reduced). Highway buses are used for connections between local TDM buses and to provide communications paths to the center stage switch complex. Loss of a Highway bus can be significant in a centralized switch network design.

Switch network interfaces are printed circuit boards connecting local switch networks to each other or to the center stage switch complex. They are electronic switch network design elements that can fail and affect communications traffic between port cabinets. A duplicated switch network interface typically links the local switching network element, such as a TDM bus, to a high-speed fiber optic cable communications link. Duplicated center stage switch complex designs usually have duplicated switch network interfaces and duplicated cabling links for intercabinet communications connections.


Distributed and Dispersed Switch Network Designs

A distributed topology is defined simply as a switch network design comprised of multiple, independent local switching networks that are connected with direct communications links instead of a center stage switch complex. Each local switching network operates independently of the others and supports all of the communications needs of the local port interface circuits it connects to. Communications between user ports housed in different cabinets require a direct communications path between each cabinet’s local switch network. There is no center stage switch complex (standard in centralized switch network designs with multiple local switch networks) in a PBX based on a distributed switch network design, which is a potential cost benefit to the customer. Another benefit of a distributed switch network design as opposed to a centralized design is its flexibility in supporting multiple location customer requirements. Without a center stage switch complex, the communications links between remote locations and the main customer site are minimized because most station user traffic is local to the cabinet’s switch network. Only intercabinet traffic requires communications link resources.

Figure 1: Distributed switching network topology.

A distributed switch network design is usually limited to PBX systems with a minimal number of local switching networks supporting two or three port cabinets. Once the number of local switching networks exceeds three, it usually becomes a cumbersome and expensive process to upgrade the system because of the necessity of having direct communications links between each cabinet, unless a cabinet can be used as a tandem switching node within the distributed cabinet configuration. The two most popular PBXs based on a distributed switch network design are the Avaya Definity G3si and the Alcatel OmniPCX 4400. A Definity G3si can be installed with up to three port network cabinets (a PPN control cabinet and two EPN expansion port cabinets). Each cabinet has a local switching network based on a 32-Mbps TDM bus and can be equipped with expansion interface circuit boards to connect to an EAL (see above) for intercabinet communications. There is no center stage switch complex, and each port network cabinet TDM bus functions independently.

The Alcatel OmniPCX 4400 is an example of a PBX with a distributed switch network design that can support more than three cabinets, making it the exception that proves the rule. As part of its Alcatel Crystal Technology (ACT) system architecture, a single OmniPCX 4400 system can support up to 19 discrete cabinet clusters (control cabinet and expansion cabinets); each cabinet cluster has a local TDM bus (420 two-way channels) and can be linked to other cabinet clusters over a variety of communications paths based on PCM, ATM, or IP communications standards. A single interface board in the cluster’s control cabinet can support up to 28 communications links. The bandwidth of each PCM link is 8 Mbps; the ATM link can operate at transmission rates of up to 622 Mbps. Direct links between any two cabinets can be established, or a control cabinet can function as a tandem switching node to link two or more distributed control cabinets. The availability of very high-speed communications links between cabinet clusters can minimize the number of physical transmission circuits supporting intercabinet cluster communications requirements, and the use of hop-through connections through a tandem switch node allows Alcatel to design large and very large system configurations without a center stage switch complex. Alcatel markets a multiple system version of the OmniPCX 4400, capable of supporting a maximum of 50,000 stations, and can design the network to handle communications traffic between all cabinet clusters across all systems without a center stage switch complex.
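The scaling argument behind the three-cabinet rule and the Alcatel tandem workaround is simple combinatorics: a fully meshed distributed design needs a direct link for every cabinet pair, while routing through one tandem cabinet needs only one link per remaining cabinet. A quick sketch:

```python
def full_mesh_links(cabinets: int) -> int:
    """Direct links needed so every cabinet pair has its own path."""
    return cabinets * (cabinets - 1) // 2

def tandem_links(cabinets: int) -> int:
    """Links needed if one cabinet acts as a tandem switching node."""
    return cabinets - 1

print(full_mesh_links(3))   # 3 - manageable for a small distributed PBX
print(full_mesh_links(19))  # 171 - why large meshes become cumbersome
print(tandem_links(19))     # 18 - tandem routing keeps link counts practical
```

The 19-cluster figure echoes the OmniPCX 4400 maximum cited above; hop-through connections are what make that configuration economical without a center stage switch complex.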

The third type of switch network design is dispersed topology. A dispersed switch network combines the design attributes of a distributed design (functionally independent local switch networks) and a centralized design (center stage switch complex connecting local switch networks). A dispersed switch network design is comprised of local switch networks that support all of the local communications requirements of their connected port interface circuits and a center stage switch complex that is used only to provide switched connections between ports connected to different local switch networks. For example, a call between two ports in the same cabinet sharing a common switch network would be connected by using only the resources of the cabinet's local switch network, such as a local TDM bus. If a call were placed between ports in different cabinets, the call would be connected through the center stage switch complex, and access to the center stage switch complex would be via the local switching networks.

Figure 2: Dispersed switching network topology.

For example, the Avaya Definity G3r, a larger version of the Definity G3si, can support up to 40 port network cabinets. The G3r EPN expansion port cabinets are identical to the G3si cabinets; each is designed with a local switching network capable of handling all local communications requirements—calls exclusively between ports (stations and/or trunks) in the same cabinet. Calls between ports in different EPN cabinets are handled through a center stage switching complex in the PPN cabinet (common control cabinet). The Ericsson MD-110 is another example of a dispersed switch network design; communications between LIM cabinets are handled through a centralized group switch network complex, but local communications traffic remains within the LIM. The NEC NEAX2400 IPX also can be considered a dispersed switch network design because communications between ports in the same PIM cabinet (a single carrier shelf cabinet) are supported exclusively on the local TDM bus; intercabinet and intermodule group communications are supported over a hierarchy of Highway buses.


Centralized | PBX Switch Network Topologies

A centralized topology is defined simply as a switch network design that requires all calls, regardless of the origination and destination endpoints, to be connected through the same TDM bus, switch matrix, or center stage switch complex.

Figure 1: Centralized switching network topology.

Any PBX system with a switch network system comprised of a single TDM bus or switch matrix is classified into the centralized switch network design category because all calls are handled through the same “centralized” switch network element. Most small PBX systems have a switch network design based on a single TDM bus because the traffic requirements for equipped systems with fewer than 100 ports (stations and trunks) can easily be supported without multiple TDM bus requirements and/or a center stage switch complex. A single TDM bus design may also be used by PBXs with larger port capacity limits, if the TDM bus bandwidth is sufficient to support the port traffic requirements. For example, the Avaya Definity’s switch network design is based on a 32-Mbps TDM bus that can easily support the very small port (20 to 40 stations) traffic requirements of the Definity One model and the larger port (40 to 400 stations) traffic requirements of the Definity ProLogix model.

Many intermediate/large PBX system models have centralized switch network designs because a center stage switch complex handles all call connections regardless of the originating and destination call endpoints. It is easy to see the necessity of using a center stage switch complex to support switch connections between port interface circuit cards housed in different multiple carrier port cabinets because a local TDM bus cannot span multiple cabinets, and the system installation may include many port cabinets. The system switch network architecture is easier to design and program if all calls are connected with a centralized switch network element because the same call processing steps are followed for each and every type of call. It is more difficult to see the necessity of using a center stage switch complex to support switch connections between port interface circuit cards housed on the same port carrier shelf or even between ports on the same port interface circuit card, but a centralized switch network design dictates the same switch network connection protocol (center stage switch complex connection) regardless of originating and destination port interface circuit proximity.

The Nortel Networks Meridian 1 Option 81C, Siemens Hicom 300H, and Fujitsu F9600 XL models are examples of centralized switch network designs. Each of these systems can be installed with multiple port cabinet stacks, with several port carriers per cabinet stack. Each system uses a center stage switch complex to support connections between each port carrier's local switching network (single or multiple local TDM buses), even if the two connected telephones are supported by the same port circuit interface card and connect to the same local TDM bus and the same Highway bus that connects to the center stage switch complex. It may appear a waste of switch network resources (talk slots, switch connections) to use the center stage switch complex for a call of this type, but that is the way the system is designed and programmed. Figure 1 illustrates the call communications path for a Meridian 1 Option 81C between port interface circuits in different cabinet stacks and between port interface circuits on the same port circuit interface card. The call connection protocol is similar, if not identical, for the Siemens and Fujitsu systems.

A centralized switch network design offers no inherent customer benefits and can be problematic because a large number of switch network elements (local TDM buses, Highway buses, switch network interface/buffer, TSI, center stage switch elements) are required to complete any and all calls. This can affect switch network reliability levels because the probability of a switch network element failure or error affecting a call connection is increased. For example, center stage switch complex failures or errors affect all system port connections in the PBX system.

A major disadvantage of the centralized switch network design arises when a customer needs to install a remote port cabinet option to support multiple location communications with a single PBX system. A remote port cabinet option requires a digital communications path between it and the main PBX system location. Most remote port cabinet installations are supported with digital T1-carrier trunk services. If the PBX switch network topology is centralized, all calls made or received by station users housed in the remote cabinet must be connected through the center stage switch complex at the main location, even if calls (intercom or trunk) are local to the remote port cabinet. A T1-carrier circuit, with a limited number of communications channels, must be used for every remote cabinet call to access the center stage switch complex. Most remote PBX cabinet options require two T1-carrier channels per call connection, thus limiting the number of active simultaneous conversations at the remote location. This may force the customer to install additional T1-carrier circuits to support the port traffic requirements at the remote location, but there are limits on how many T1-carrier circuit interfaces can be supported by the remote port cabinet. The limitations of the centralized switch network design may force a customer to install multiple remote port cabinets at the remote location or a standalone PBX system.
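The remote-cabinet arithmetic is worth making explicit. Assuming the standard 24 channels per T1 circuit and the two-channels-per-call figure cited above, the ceiling on simultaneous remote conversations is:

```python
def max_remote_calls(t1_circuits: int,
                     channels_per_t1: int = 24,
                     channels_per_call: int = 2) -> int:
    """Simultaneous calls a remote cabinet can carry through the
    center stage, given each call consumes two T1 channels."""
    return (t1_circuits * channels_per_t1) // channels_per_call

print(max_remote_calls(1))  # 12 conversations on a single T1 circuit
print(max_remote_calls(4))  # 48 - more T1s help, up to the cabinet's limit
```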


Local Switching Network Design: TDM Buses and Highway Buses

The local switching network in a PBX system can support several basic functions:

  1. Port interface circuit card access and egress into the circuit switched network

  2. Direct switched connections between port interface circuit cards

  3. Switch connections into the center stage switching complex

The primary function of the local switching network is to provide the local communication path for calls between system ports. Small PBX systems without a center stage switching complex depend on the local switching network for all communication paths between station and trunk ports. Much of the communications traffic in many intermediate or large PBX systems is carried exclusively over the local switching network without connections across the center stage switching complex, if the design topology is dispersed or distributed (see next section). When switched connections between endpoints must be made across the center stage switching complex, it is the local switching network that handles most of the call’s transmission requirements.

A PBX system’s local switching network design may be comprised of the following elements:

  1. Local TDM buses

  2. Highway TDM buses

  3. Switch network interfaces/buffers

  4. Time slot interchangers

A traditional PBX switch network local TDM bus is an unbalanced, low characteristic impedance transmission line that directly supports the traffic requirements of port circuit interface cards without intermediary TDM buses. The ends of the TDM bus are usually terminated to ground, with a separate resistor for each bit. Port interface circuit cards typically connect to the TDM bus through a customized bus driver device. A bus driver is a switchable constant current source so that, in the high “output” state during transmission, there is no bus loading to cause reflections.

A Highway TDM bus consolidates traffic from multiple lower-bandwidth local TDM buses. It facilitates switch network connections between local TDM buses and, when the originating and destination call endpoints are on different local TDM buses and Highway buses, provides a communications path to the center stage switching complex.

Although all circuit switched PBXs depend on local TDM buses for transporting communications signals to and from port interface circuit cards, the local switching network design usually differs from one system to another. The local TDM bus in a PBX system may support a few port interface circuit cards, a full port carrier shelf, or an entire port cabinet. The number of port interfaces a TDM bus can adequately support is based on its bandwidth. A limited bandwidth TDM bus that supports 32 time slots may be used to support only a few low-density port circuit cards, whereas a high bandwidth TDM bus that supports 512 time slots can easily support the traffic requirements of a high-density port carrier shelf or a moderate density port cabinet. A few examples illustrate the differences in local TDM bus design:

  1. A Fujitsu F9600 16-port card slot Line Trunk Unit (LTU) carrier is supported by eight 2.048 Mbps TDM buses (32 time/talk slots per bus); each local TDM bus supports a maximum of two port interface circuit cards (the number of ports across the two cards must be equal to or less than 32).

  2. The switch network architecture of the Avaya Definity PBX family is based on a 32-Mbps TDM bus (512 time slots, 483 talk slots) that can be configured to support a single port carrier shelf or a five-carrier shelf cabinet. Each Definity G3si/r port carrier shelf supports 20 port interface card slots. The 512 time slot TDM bus can support very high traffic requirements for a single port carrier or moderate traffic requirements across a multiple carrier cabinet if traffic engineering guidelines are used.

  3. A Nortel Meridian 1 Option 81C Intelligent Peripheral Equipment Module (IPEM), single port carrier cabinet with 16 port interface card slots, can be configured with one, two, or four Superloops (128 time slots, 120 talk slots per Superloop). A Superloop is the Nortel Networks name for its Meridian 1 local TDM bus. The IPEM port carrier shelf can have access and egress to 120, 240, or 480 talk slots; the number of configured Superloops depends on the traffic capacity requirements of the local ports. Basic traffic requirements can usually be supported by a single Superloop, but nonblocking switch network access requirements may dictate four Superloops per IPEM. A single Superloop can also be configured to support two IPEM stackable cabinets, with a total of 32 port card slots (a maximum of 768 voice ports), if there are very low traffic requirements.

The Fujitsu example illustrates a PBX system with multiple local TDM buses per port carrier shelf, with each local TDM bus supporting only two port cards. The Avaya example illustrates a PBX system with a local TDM bus designed for and capable of supporting a multiple port carrier cabinet capable of housing dozens of port cards and hundreds of ports. The Nortel example illustrates a flexible local TDM bus design that can support low, medium, or high traffic requirements per port carrier shelf by provisioning the appropriate number of local TDM buses.
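The three designs can be compared by their concentration ratio: the number of ports a local TDM bus may be asked to serve per available talk slot. The figures below come from the examples above; a ratio above 1.0 means the bus must be traffic engineered because not every port can be active at once.

```python
def concentration_ratio(ports: int, talk_slots: int) -> float:
    """Ports contending for the bus per available talk slot."""
    return ports / talk_slots

# Fujitsu F9600: at most 32 ports on a 32-talk-slot local bus.
print(concentration_ratio(32, 32))    # 1.0 - engineered as nonblocking
# Avaya Definity: 800 recommended stations on 483 talk slots.
print(concentration_ratio(800, 483))  # above 1.0 - blocking is possible
# Nortel Superloop, low-traffic case: 768 voice ports on 120 talk slots.
print(concentration_ratio(768, 120))  # 6.4 - acceptable only at low traffic
```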

Even though the Fujitsu, Avaya, and Nortel PBXs use local TDM buses to provide a communications path for port interface circuit cards, the bandwidth of the TDM buses and the number of TDM buses per carrier shelf or cabinet varies among the three systems. There is no standard for local TDM bus bandwidth and provisioning in a PBX system. The concept is the same, but the implementations differ.

In the Fujitsu example, the backplane of the port circuit cards connects directly to the local TDM bus. The Definity port carrier backplane also provides a direct connection to the local TDM bus. In the Nortel example, a Superloop bus supports the communication transmission needs of the port interface circuits in an IPEM cabinet, but there is no direct link between the cards and the Superloop. An interface card is used as a buffer to link the carrier shelf backplane to the electrical transmission wire operating as the Superloop TDM bus. The switch network buffer function is embedded on the IPEM Controller Card, which also provides local processing functions to the port carrier shelf.

Switch network interfaces/buffers are used to consolidate communications signals from multiple port interface circuit cards for access to and egress from the local TDM bus. These specialized interface cards may be dual function interfaces because several PBX switch network interface/buffer cards also have an on-board microprocessor controller used for localized processing functions.

The Siemens Hicom 300H has an interface card similar to the Nortel Meridian 1 to support both switching and processing functions at the local port carrier level. The Siemens Line Trunk Unit Controller (LTUC) card provides a link between the main system processor and the port interface circuit card microcontrollers and also serves as a buffer interface between the high-speed 32-Mbps Highway transmission bus (512 time slots) and two segmented TDM buses (256 time slots per TDM bus, 128 time slots per segment) that connect directly to the port circuit interface cards. The Hicom 300H LTUC functions like a TSI because it multiplexes several moderate bandwidth TDM buses onto a higher bandwidth TDM bus.

From high-level diagrams, it appears that the Nortel and Siemens switch network designs are very similar, but major differences exist. The Meridian 1 Superloop functions as a local TDM bus but requires a buffer interface to link to the port interface carrier; the Hicom 300H has four TDM bus segments directly connected to the LTU port circuit interface cards and requires the LTUC, functioning as a TSI, to link to the Highway bus. The Meridian 1 Superloop and Hicom 300H Highway bus each provide a communications path to the center stage switching complex of their respective PBX systems, but the design structures are not identical.

The NEC NEAX2400 IPX also uses a TSI to multiplex local TDM buses onto a higher bandwidth Highway bus. A 384 time slot local TDM bus supports each NEAX2400 PIM single carrier shelf cabinet. Up to four PIMs can be stacked together, and the individual local TDM buses communicate over a common 1,536 time slot Highway bus. A TSI links each PIM's local TDM bus to the Highway bus. Multiple Highway buses across cabinet stacks communicate over a higher bandwidth Highway bus. In the largest NEAX2400 configuration, a Super Highway bus links Highway buses across the entire switch network complex. The broadband Super Highway bus functions as a center stage switching complex but is used only for switched connections between local Highway buses. Most system traffic is localized at the PIM cabinet level and uses the Highway and Super Highway buses infrequently if the system is properly engineered. The Highway bus design of the NEAX2400 is not nonblocking in the worst-case traffic situation but is essentially nonblocking for most customer requirements.

Highway buses typically operate at very high transmission rates because they are required to provide the communications path across many local TDM buses or between many local TDM buses and the center stage switch complex. The terminology used to describe a PBX switch network system Highway bus varies from system to system, but the function is essentially the same. Avaya calls the optical fiber cable link used to connect Definity Port Network cabinets an Expansion Archangel Link (EAL), and Ericsson calls its MD-110 LIM cabinet communications links FeatureLinks (formerly PCM links), but the two perform the same primary function: linking port cabinets together directly or through a center stage switch complex. The Definity EAL is always a fixed high-bandwidth optical fiber link and provides nonblocking access to the center stage switch complex for each port network cabinet TDM bus. The Ericsson FeatureLink operates at only 2 Mbps and can support only 32 time slots (30 talk slots). Based on customer traffic requirements, up to four FeatureLinks can be equipped per LIM, for a total bandwidth of 8 Mbps, to support a maximum of 120 talk slots. The limited number of talk slots supported by the MD-110's Highway bus would seem to cause switch network access problems, but analysis of customer traffic patterns (inbound trunk calls, outbound trunk calls, intercom calls) indicates that the four FeatureLink capacity is more than sufficient for most customer configurations.


Broadband TDM Bus

Most local TDM buses have limited bandwidths capable of supporting between 32 and 512 time slots. A TDM bus functioning as a center stage switching complex capable of supporting switch connections between many local buses must have a transmission bandwidth equal to or greater than the total bandwidth of the local TDM buses it supports for nonblocking access. For example, a single TDM bus with a bandwidth of 128 Mbps (2,048 time slots) can support switch connections for sixteen 8-Mbps TDM buses or four 32-Mbps TDM buses.
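The nonblocking condition stated above reduces to a bandwidth comparison between the center stage bus and the local buses it serves. A minimal sketch:

```python
def center_stage_is_nonblocking(center_bus_mbps: int,
                                local_bus_mbps: list[int]) -> bool:
    """Nonblocking access requires center stage bandwidth at least
    equal to the total bandwidth of the local TDM buses it serves."""
    return center_bus_mbps >= sum(local_bus_mbps)

print(center_stage_is_nonblocking(128, [8] * 16))  # True: 16 x 8 = 128 Mbps
print(center_stage_is_nonblocking(128, [32] * 4))  # True: 4 x 32 = 128 Mbps
print(center_stage_is_nonblocking(128, [32] * 5))  # False: oversubscribed
```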

The center stage TDM bus must also provide a sufficient number of physical link connections for all of the local TDM buses. If the bandwidth of the center stage TDM bus is not sufficient to support switched connections for every local TDM bus time slot, there is a probability of blocking between the local TDM bus and the center stage TDM bus. The number of local TDM bus connections is always limited to ensure nonblocking access to the center stage TDM bus.

Local TDM buses typically interface to the center stage TDM bus through a switch network element known as a Time Slot Interchanger (TSI). The TSI is a switching device embedded on the physical interface circuit card that supports the physical local/center stage bus connection. The primary function of a TSI is to provide time slot connections between two TDM buses with different bandwidths. The simplest definition of a TSI is a portal between the local TDM bus and the center stage bus.
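The TSI's portal function can be sketched abstractly: every frame, it copies the sample occupying a local-bus time slot into the center stage time slot assigned to that call. The toy model below is my own illustration (names and structure are not any vendor's design), not an actual TSI implementation:

```python
class TimeSlotInterchanger:
    """Toy model of a TSI: a per-frame mapping from local TDM bus
    time slots to center stage TDM bus time slots."""

    def __init__(self, local_slots: int, center_slots: int):
        self.local_slots = local_slots
        self.center_slots = center_slots
        self.mapping: dict[int, int] = {}  # local slot -> center slot

    def connect(self, local_slot: int, center_slot: int) -> None:
        """Assign a center stage time slot to a local slot for a call."""
        self.mapping[local_slot] = center_slot

    def interchange(self, local_frame: list[int]) -> dict[int, int]:
        """Copy each mapped local sample into its center stage slot."""
        return {cs: local_frame[ls] for ls, cs in self.mapping.items()}

# A 32-slot local bus feeding a 512-slot center stage bus.
tsi = TimeSlotInterchanger(32, 512)
tsi.connect(local_slot=5, center_slot=300)
frame = list(range(32))        # one frame of PCM samples (dummy values)
print(tsi.interchange(frame))  # the slot-5 sample lands in center slot 300
```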

If a single broadband TDM bus cannot support nonblocking connections for all of the installed and configured local TDM buses, it may be necessary to install additional center stage TDM buses. A center stage switching complex based on multiple high-bandwidth TDM buses requires connections between each center stage bus, in addition to switched connections to the local buses. A switched connection between any two local TDM buses in the PBX system may require transmission across two linked center stage buses, because each center stage TDM bus has dedicated connections to a select number of local TDM buses. The bandwidth of the connections between the high-speed center stage TDM buses must be sufficient to support the port-to-port traffic needs of the local TDM buses. For this reason, system designers use very high-speed optical fiber connections to meet the switched network traffic requirements.

Single-Stage Circuit Switch Matrix

The most popular center stage switching design is a single-stage circuit switch matrix. A single-stage circuit switch matrix is based on a physical crosspoint switched network matrix design, which supports connections between the originating and destination local TDM buses. A single-stage circuit switch matrix may consist of one or more discrete switch network matrix chips. Most small/intermediate PBX systems use this type of design because of the limited number of local TDM buses needed to support port circuit interface requirements.

The core element of a crosspoint switching matrix is a microelectronic switch matrix chip set. The switch matrix chip sets currently used in PBXs typically support between 512 and 2,048 nonblocking I/O channels. A 1K switch matrix supports 1,024 channels; a 2K switch matrix supports 2,048 channels. Each channel supports a single TDM bus time slot. Larger switch network matrices can be designed with multiple switch matrix chips networked together in an array.

Based on the size of the switch network matrix and the channel capacity of a single chip set, a center stage switching complex may require one or more printed circuit boards with embedded switch matrix chip sets. The number of chips grows quadratically: doubling the channel (time slot) requirement quadruples the chip count. For example, if a single 1K switch chip supports 1,024 I/O channels, four interconnected 1K switch chips are required to support 2,048 I/O channels, and doubling again to 4,096 channels requires 16 interconnected 1K switch chip sets. Large single-stage switching networks use a square switching matrix array, for example, a 2 × 2 array (four discrete switch matrix chip sets) or a 4 × 4 array (16 discrete switch matrix chip sets).
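The square-array arithmetic can be captured in a short helper. This is a sketch of the scaling rule described above (the function name and default are illustrative):

```python
import math

def chips_required(channels, chip_channels=1024):
    """Chip sets in the square array needed for a single-stage
    matrix serving `channels` I/O channels, assuming each chip
    supports `chip_channels` channels (1K chips by default)."""
    side = math.ceil(channels / chip_channels)  # chips per array side
    return side * side

print(chips_required(1024))  # 1  (a single 1K chip)
print(chips_required(2048))  # 4  (2 x 2 array)
print(chips_required(4096))  # 16 (4 x 4 array)
```

Doubling the channel count doubles both dimensions of the array, which is why the chip count quadruples at each step.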

A 1K switch matrix can support any number of TDM buses with a total channel (time slot) capacity of 1,024, for example, eight 128-time-slot TDM buses or four 256-time-slot TDM buses. The total bandwidth (time slots) of the networked TDM buses cannot be greater than the switch network capacity of the center stage switch matrix. The physical connection interfaces for the TDM buses are usually embedded on the switching network board, but this is not always the case. The intermediate/large Nortel Networks Meridian 1 models require an intermediary circuit board, known as a Superloop Card, to provide the switch connection between the local TDM buses (Superloops) and the center stage 1K group switch matrix.

Multistage Circuit Switch Matrix

A single-stage circuit switch matrix design is not feasible for the center stage switching complex of a large or very large PBX system, which may have traffic requirements for as many as 20,000 time slots. A very large array of switch matrix chip sets would lead to design complications and require several switch network array printed circuit boards. A better switch matrix design for a large or very large PBX system is a multistage design. The most common multistage switch network design is a three-stage network known as a Time-Space-Time (T-S-T) switch network. A T-S-T switch network connects three layers of switches in a matrix array that is not square (Figure 3-5).

Figure 3-5: TDM bus connections: center stage space switch matrix.

In a T-S-T switching network design, each switch network layer consists of the same number of switch matrix chips. The first switch network layer connects the originating local TDM buses to the second switch network layer; the third switch network layer connects to the second switch network layer and the destination local TDM buses. In this design, the second network switch layer is used to connect the first and third layers only, with no direct connection to the local TDM buses. The term Time-Space-Time was derived from the fact that the first and third switch network layers connect to TDM buses, and the second switch network layer functions solely as a crosspoint space connection switch for the two outer layers.

In a T-S-T switch network configuration, each TDM bus channel entering the first switch network layer has access to every outbound switch connection to the second switch network layer. In turn, each outbound switch connection in the second switch network layer has access to every switch connection in the third switch network layer. Each switch matrix in the first and third layers is connected to the second layer according to the same pattern.
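Under that connection pattern, setting up a call means finding a middle-stage (space) switch with a free link toward both the originating first-layer switch and the destination third-layer switch. The toy model below illustrates that search; it is a simplified sketch, not any vendor's design, and all names are made up:

```python
class TSTNetwork:
    """Toy three-stage (Time-Space-Time) model: outer (first/third
    layer) switches each have one link to every middle-stage space
    switch; a connection occupies one inlet and one outlet link."""

    def __init__(self, n_outer, n_middle):
        # busy[m] holds the (direction, outer switch) links in use
        # on middle-stage switch m
        self.busy = [set() for _ in range(n_middle)]
        self.n_outer = n_outer

    def connect(self, first, third):
        """Find a middle switch with free links to both outer switches."""
        for m, links in enumerate(self.busy):
            if ("in", first) not in links and ("out", third) not in links:
                links.add(("in", first))
                links.add(("out", third))
                return m
        return None  # no free middle-stage path: the call is blocked

net = TSTNetwork(n_outer=4, n_middle=2)
print(net.connect(0, 1))  # 0: first middle switch is free
print(net.connect(0, 2))  # 1: middle 0's inlet from switch 0 is busy
print(net.connect(0, 3))  # None: both middle switches busy toward switch 0
```

With only two middle-stage switches the third call from the same inlet blocks, which is why real designs size the middle stage against the outer-layer traffic load.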

The T-S-T switch network is contained on a combination of printed circuit boards. Multiple first and third layer switch matrix chip sets may be packaged on a single board, although the usual design is a single switch matrix per board to simplify connections between the local TDM buses and the second switch network layer. Multiple second layer switch matrices are usually packaged on a single board. The total number of boards required for the center stage switching complex will depend on the number of I/O TDM channels configured in the installed system. An 8K switch network will require fewer boards than a 16K switch network.

ATM Center Stage

During the early 1990s, it was believed that traditional circuit switched voice networks would someday be replaced by ATM switch networks. Several PBX manufacturers worked to develop a PBX switch network based on ATM switching and transmission standards. An ATM switching network can provide the same high quality of service for real-time voice communications as traditional circuit switched networks; it also offers the additional advantage of very high switching and transmission rates. Lucent Technologies’ enterprise communications system division (now Avaya) and Alcatel each developed, announced, marketed, and shipped ATM center stage switching options for their largest PBX models. Implementing the ATM center stage switching option requires a stand-alone ATM switching system equipped with customized interface cards to connect to the PBX processing and switch network subsystems. A gateway interface card is used to link the local TDM buses to the ATM switching complex for intercabinet communications. The gateway interface card converts communications signals from time-based PCM format to ATM packet format.

Shipments of the option have been negligible since its introduction for two important reasons: few customers have installed ATM-based LANs, opting instead to upgrade their IP-based Ethernet LAN infrastructure, and the cost to install the PBX option is greater than the cost of a traditional TDM/PCM center stage switching complex. In addition to the cost of the ATM switching system, there is the cost of high-priced interface cards used to convert TDM/PCM communications signals to ATM format for connecting the local TDM buses to the center stage switch complex. Nortel Networks tested an ATM-based version of its Meridian 1, but canceled development in the late 1990s after determining that the cost to upgrade a customer’s installed system was too high.

The Avaya Definity ECS and Alcatel OmniPCX 4400 ATM-based offerings are still being marketed, but too few customers have shown enough interest to make it a viable center stage switching option for the future. Growing market demand for IP-based PBX systems appears to have stunted development of the ATM center stage switching option.


Fundamentals of PBX Circuit Switching

PBX circuit switched network designs differ among manufacturers’ product portfolios and even among models within a portfolio. Although there are differences in the individual PBX system switch network designs, the main functional elements are the same. All port circuit interface cards transmit and receive communications signals via a directly connected TDM bus, but the time and talk slot capacities are likely to differ between systems. A very small or small PBX system switching network design may consist of a single TDM bus backplane connected to every port interface circuit card, but a larger PBX system with more than one TDM bus must be designed to provide connections between the TDM bus segments. The TDM bus connections may be direct connections or center stage switch connections. The center stage switching system complex may be based on a space switch matrix design using circuit switched connections, or on a broadband TDM bus interconnecting lower bandwidth TDM buses. Two of the leading suppliers of PBX systems, Avaya and Alcatel, also offer customers of their very large PBX system models a center stage ATM switching option that can support switched LAN data communications applications as well.

Center Stage Switch Complex

The primary function of the center stage switching complex is to provide connections between the local TDM buses, which carry port interface transmission across the internal switching network. A center stage switching complex may be used in PBX systems designed for as few as 100 user stations, although smaller systems typically have a single TDM bus design or multiple TDM buses with direct link connections between each bus. A center stage switching complex may consist of a single large switching network or several interconnected switching networks.

A very small PBX system usually does not require a center stage switching complex because the entire switching network might consist of a single TDM bus. Individual TDM bus switch network designs require a TDM bus with sufficient bandwidth (talk slots) to support the typical communications needs of a fully configured system at maximum port capacity. Most small PBX systems based on a single TDM bus design can provide nonblocking access to the switch network at maximum port capacity levels. If the TDM bus has fewer talk slots than station and trunk ports, the switch network can still support the communications traffic requirements, if properly engineered.

There are a few small and intermediate PBX systems that have multiple TDM buses but no center stage switching complex. For example, an Avaya Definity G3si can support up to 2,400 stations and 400 trunks using three port equipment cabinets, each with a dedicated TDM bus, but does not use a center stage switching complex to connect the TDM buses. PBX system designs like the Definity G3si use direct cabling connections between each TDM bus for intercabinet connections between ports. This type of design can support a limited number of TDM buses without a center stage switching complex, but each added TDM bus requires direct connections to every other bus. When the system design includes more than three TDM buses, the switch network connection requirements can become unwieldy and very costly. During the 1980s, the Rolm CBX II 9000 supported up to 15 port equipment nodes that required dedicated fiber optic cabling connections to link each cabinet’s TDM bus switching network because it lacked a center stage switching complex. A fully configured system required 105 direct link connections (fiber cabling, fiber interface cards), a very costly alternative to a center stage switching complex, and every new nodal addition required new fiber optic connections to every existing cabinet node. The advantage of a center stage switching complex in an intermediate/large PBX system design is that it simplifies switch network connections between endpoints.
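The Rolm example illustrates why full-mesh interconnection does not scale: n nodes need n(n-1)/2 direct links, versus a single link per node to a center stage complex. A quick check (function name is illustrative):

```python
def mesh_links(nodes):
    """Direct links needed to fully interconnect `nodes` TDM buses
    without a center stage switch (one link per pair of nodes)."""
    return nodes * (nodes - 1) // 2

print(mesh_links(3))   # 3   -- manageable (e.g., three-cabinet systems)
print(mesh_links(15))  # 105 -- the Rolm CBX II 9000 worst case
# Adding a 16th node requires 15 new links to the existing nodes.
print(mesh_links(16) - mesh_links(15))  # 15
```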

There are several center stage switch designs typically used in digital circuit switched PBXs:

  1. Broadband (very large bandwidth) TDM bus

  2. Single-stage switch matrix

  3. Multistage switch matrix
