
Wednesday, August 29, 2007

Differences Between GSM & CDMA




Differences between the GSM and CDMA Wireless Networks
 

  GSM and CDMA are the two leading commercial wireless technologies in use all over the world.  This paper presents to the reader the key differences between the two technologies (1). The topics in which the differences are presented are:

  • Radio spectrum usage
  • Network architecture differences
  • Radio channel differences
  • Call processing
  • Evolution to 3G
  • Network capacity differences
  • Deployment

Introduction

This section presents the basic wireless network architecture and lays the foundation for the readers to understand the later sections of this paper.

Though this paper concentrates on the differences between these networks, the basic network architecture of both is the same.
The diagram below presents the general architecture of a wireless network. 
 
Figure 1: General Architecture of Wireless Networks
1: This paper concentrates mostly on the differences in the BSS.

The Mobile Station

The Mobile Station (MS) is the user equipment in wireless networks. Mobile Stations are produced by many different manufacturers, and there will almost always be a wide range of different Mobile Stations in a mobile network. The specifications therefore describe the workings of the MS in great detail.

The Base Transceiver Station

The Base Transceiver Station (BTS) is the entity corresponding to one site communicating with the Mobile Stations. Usually, the BTS will have an antenna with several TRXs (radio transceivers), each communicating on a separate radio frequency. The link-level signaling on the radio channels is interpreted in the BTS, whereas most of the higher-level signaling is forwarded to the BSC and MSC.

The Base Station Controller

Each Base Station Controller (BSC) controls on the order of several hundred BTSs. The BSC takes care of a number of different procedures regarding call setup, location update and handover for each MS. The handover control procedures are of particular interest here: it is the BSC that decides when a handover is necessary. This is accomplished by analyzing the measurement results sent from the MS during a call and ordering the MS to perform a handover if necessary. The continuous analysis of measurements from many MSs requires considerable computational power, which puts strong constraints on the design of the BSC.

The Mobile Switching Center

The Mobile Switching Center (MSC) is a normal ISDN switch with extended functionality to handle mobile subscribers. The basic function of the MSC is to switch speech and data connections between BSCs, other MSCs, other wireless networks and external non-mobile networks. The MSC also handles a number of functions associated with mobile subscribers, including registration, location updating and handover. There will normally be only a few BSCs per MSC, due to the large number of BTSs connected to each BSC. The MSC and BSCs are connected via the highly standardized A-interface. However, due to the lack of standardization of Operation and Management protocols, network providers usually choose BSCs, MSCs and Location Registers from one manufacturer.

The Location Registers

With each MSC there is an associated Visitor Location Register (VLR). The VLR can be associated with one or several MSCs. The VLR stores data about all customers who are roaming within the location area of that MSC. This data is updated with the location update procedure initiated from the MS through the MSC, or directly from the subscriber's Home Location Register (HLR). The HLR is the home register of the subscriber. Subscription information, allowed services, authentication information and the localization of the subscriber are at all times stored in the HLR. This information may be obtained by the VLR/MSC when necessary. When the subscriber roams into the location area of another VLR/MSC, the HLR is updated. For mobile terminated calls, the HLR is interrogated to find out which MSC the MS is registered with. Because the HLR is a centralized database that needs to be accessed during every call setup and data transmission in the GSM network, it must have a very large transaction capacity; schemes have therefore been suggested for distributing the data in the HLR in order to reduce the load.
The communication between the MSC, VLR and HLR is done using the MAP (Mobile Application Part) of Signalling System 7.
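As an illustration of the kind of distribution scheme mentioned above, here is a minimal sketch (purely hypothetical, not from any standard) in which subscriber records are partitioned across several HLR nodes by hashing the IMSI. The node names, node count and hash choice are illustrative assumptions.

# Hypothetical sketch: partitioning HLR subscriber records across several
# database nodes by hashing the IMSI, so that no single node carries the
# full signalling load. Node names and counts are illustrative only.
import hashlib

HLR_NODES = ["hlr-node-0", "hlr-node-1", "hlr-node-2", "hlr-node-3"]

def hlr_node_for(imsi):
    """Map an IMSI to one of the HLR partitions deterministically."""
    digest = hashlib.sha1(imsi.encode()).hexdigest()
    return HLR_NODES[int(digest, 16) % len(HLR_NODES)]

if __name__ == "__main__":
    for imsi in ("262011234567890", "310150123456789", "404451234567890"):
        print(imsi, "->", hlr_node_for(imsi))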

Historical View of GSM and CDMA

GSM
 
The first step towards GSM was the allocation of a common frequency band in 1978: twice 25 MHz at around 900 MHz for mobile communication in Europe.  In 1990 the GSM specifications for 900 MHz were frozen, and in the same year it was decided to begin standardisation of GSM 1800.
GSM radio interface: 8 channels per carrier; 200-kHz carrier bandwidth; slow frequency hopping.
GSM Phase 2+: adaptive multirate coder; 14.4 Kbps data service; General Packet Radio Service (GPRS); Enhanced Data rates using optimised modulation (EDGE).
Table 1 shows the time schedule of GSM.
Table 1 – GSM Development Time Schedule
 
1982    Groupe Special Mobile established within CEPT
1984    Several proposals for GSM multiple access: wideband TDMA, narrowband TDMA, DS-CDMA, hybrid CDMA/FDMA, narrowband FDMA
1986    Eight prototype systems tested in CNET laboratories in France; permanent nucleus set up
1987    Basic transmission principles selected: 8-slot TDMA, 200-kHz carrier spacing, frequency hopping
1987    MoU signed
1988    GSM becomes an ETSI technical committee
1990    GSM phase 1 specifications frozen (drafted 1987-1990); GSM1800 standardisation begins
1991    GSM1800 specifications frozen
1992    GSM900 commercial operation starts
1992    GSM phase 2+ development starts
1995    GSM submitted as a PCS technology candidate to the United States
1995    PCS1900 standard adopted in the United States
1996    Enhanced full rate (EFR) speech codec standard ready
1996    14.4-Kbps data standard ready; GSM1900 commercial operation starts
1997    HSCSD standard ready; GSM cordless system (home base station) standardisation started; EDGE standardisation started
1998    GPRS standard ready; WCDMA selected as the third generation air interface

Classification of CDMA

CDMA systems can be classified based on the modulation method:
  • Direct sequence (DS) CDMA
  • Frequency hopping (FH) CDMA
  • Time hopping (TH) CDMA
                        
Figure: Time-frequency occupancy of direct sequence, frequency hopping and time hopping spread spectrum.
[1] In DS-CDMA, the spectrum is spread by multiplying the information signal with a pseudo-noise sequence, resulting in a wideband signal.
[2] In FH-CDMA, the frequency hopping spread spectrum, a pseudo-noise sequence defines the instantaneous transmission frequency.  The bandwidth at each moment is small, but the total bandwidth over, for example, a symbol period is large.  Frequency hopping can be either fast (several hops over one symbol) or slow (several symbols transmitted during one hop).
[3] In TH-CDMA, the time hopping spread spectrum, a pseudo-noise sequence defines the transmission moment.
The development of CDMA has gone through several eras, as shown in Table 2.
 
Table 2 – CDMA Eras

Pioneer Era
1949    John Pierce: time hopping spread spectrum
1949    Claude Shannon and Robert Pierce: basic ideas of CDMA
1950    De Rosa-Rogoff: direct sequence spread spectrum
1956    Price and Green: antimultipath "RAKE" patent
1961    Magnuski: near-far problem
1970s   Several developments for military field and navigation systems

Narrowband CDMA Era
1978    Cooper and Nettleton: cellular application of spread spectrum
1980s   Investigation of narrowband CDMA techniques for cellular applications
1986    Formulation of optimum multiuser detection by Verdu
1993    IS-95 standard

Wideband CDMA Era
1995-   Wideband CDMA proposals (Europe: FRAMES FMA2; Japan: Core-A; USA: cdma2000; Korea: TTA I, TTA II)
2000s   Commercialization of wideband CDMA systems

 
Table 3 shows the technical parameters of the second generation systems.  All of these are frequency division duplex (FDD) systems: they transmit and receive in different frequency bands (in time division duplex (TDD), by contrast, transmission and reception share one band in different time slots).  The actual data rate available in commercial systems is usually much smaller than the maximum.  In 1998 GSM supported 14.4 Kbps, IS-95 9.6 Kbps, IS-136 9.6 Kbps and PDC 9.6 Kbps.
 
 
Table 3 – Second Generation Digital Systems

Multiple access
  GSM: TDMA;  IS-136: TDMA;  IS-95: CDMA;  PDC: TDMA
Modulation
  GSM: GMSK (a);  IS-136: π/4-DQPSK (b), coherent π/4-DQPSK, coherent 8-PSK;  IS-95: QPSK / O-QPSK (c);  PDC: π/4-DQPSK
Carrier spacing
  GSM: 200 kHz;  IS-136: 30 kHz;  IS-95: 1.25 MHz;  PDC: 25 kHz
Carrier bit rate
  GSM: 270.833 Kbps;  IS-136: 48.6 Kbps (π/4-DQPSK), 72.9 Kbps (8-PSK);  IS-95: 1.2288 Mchip/s (d);  PDC: 42 Kbps
Frame length
  GSM: 4.615 ms;  IS-136: 40 ms;  IS-95: 20 ms;  PDC: 20 ms
Slots per frame
  GSM: 8/16;  IS-136: 6;  IS-95: 1;  PDC: 3/6
Frequency bands, uplink/downlink (MHz)
  GSM: 880-915 / 935-960, 1720-1785 / 1805-1880, 1850-1910 / 1930-1990
  IS-136: 824-849 / 869-894, 1850-1910 / 1930-1990
  IS-95: 824-849 / 869-894, 1850-1910 / 1930-1990
  PDC: 810-826 / 940-956, 1429-1453 / 1477-1501
Speech codec
  GSM: RPE-LTP (e) 13 Kbps, half rate 6.5 Kbps, enhanced full rate (EFR) 12.2 Kbps
  IS-136: VSELP (f) 8 Kbps, IS-641-A 7.4 Kbps (ACELP (g)), US1 12.2 Kbps (ACELP)
  IS-95: QCELP 8 Kbps, CELP 8 Kbps, CELP 13 Kbps
  PDC: VSELP 6.7 Kbps
Maximum possible data rate
  GSM: HSCSD 115.2 Kbps, GPRS 115.2-182.4 Kbps (depending on the coding)
  IS-136: IS-136+ 43.2 Kbps
  IS-95: IS-95A 14.4 Kbps, IS-95B 115.2 Kbps
  PDC: 28.8 Kbps
Frequency hopping
  GSM: Yes;  IS-136: No;  IS-95: N/A;  PDC: No
Handover
  GSM: Hard;  IS-136: Hard;  IS-95: Soft;  PDC: Hard

  
a  Gaussian minimum shift keying
b  Differential quadrature phase shift keying
c  Offset QPSK
d  A "chip" is used to denote a spread symbol in DS-CDMA systems
e  Regular pulse excited long term prediction
f  Vector sum excited linear predictive
g  Algebraic code excited linear predictive 
 
 
 

Comparison of Technologies

Frequency Division Multiple Access (FDMA):

 
The frequency spectrum is divided into a number of narrowband channels, which are assigned to users. Each user transmits in its assigned frequency range. Channels are assigned dynamically, and a frequency range can be reassigned once the call is completed. The assigned frequency serves as the channel identifier.

Time Division Multiple Access (TDMA):

 
As in FDMA, TDMA divides the spectrum into narrowband channels. However, in TDMA the same channel is assigned to multiple users: the available time is divided into time slots, which are assigned to the users sharing the channel. TDMA therefore provides more spectral efficiency than FDMA; capacity is increased N times, where N is the number of time slots within a channel, so N users can be accommodated per channel. The frequency assignment, along with the assigned time slot, serves as the channel identifier. This technology is used in GSM.

Code Division Multiple Access (CDMA):

 
In CDMA, all users share the same wideband spectrum. Each user's signal is spread with a pseudo-random binary sequence. The wideband frequency assignment (common to all users), along with the pseudo-random sequence, serves as the channel identifier.
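To make the contrast concrete, the following toy sketch (assuming an ideal, chip-synchronous channel with no noise, and random ±1 spreading sequences rather than the codes of any real standard, using NumPy) shows two users transmitting on top of each other; each receiver recovers its own bits by correlating the composite signal with its own code.

# Toy DS-CDMA illustration: two users share the same band at the same time,
# each spread by its own pseudo-random +/-1 sequence. Correlating the summed
# signal with a user's own code recovers that user's bits (ideal channel,
# chip-synchronous, no noise; an idealised sketch, not a real modem).
import numpy as np

rng = np.random.default_rng(0)
SPREAD = 64  # chips per data bit

def spread(bits, code):
    symbols = 2 * np.array(bits) - 1            # 0/1 -> -1/+1
    return np.repeat(symbols, SPREAD) * np.tile(code, len(bits))

def despread(signal, code, nbits):
    chips = signal.reshape(nbits, SPREAD) * code
    return (chips.sum(axis=1) > 0).astype(int)  # sign of the correlation

code_a = rng.choice([-1, 1], SPREAD)
code_b = rng.choice([-1, 1], SPREAD)
bits_a = [1, 0, 1, 1]
bits_b = [0, 0, 1, 0]

on_air = spread(bits_a, code_a) + spread(bits_b, code_b)  # both transmit at once

print(despread(on_air, code_a, len(bits_a)))  # recovers user A's bits
print(despread(on_air, code_b, len(bits_b)))  # recovers user B's bits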

Network Architecture

 
This section presents the differences between the GSM and CDMA network architectures. 
The diagram below shows the GSM network architecture: 
 
 
 
The diagram below shows the IS-95 based CDMA network architecture: 
 

Mobile Station:

 
GSM :
The mobile station (MS) consists of the mobile equipment (the terminal) and a smart card called the Subscriber Identity Module (SIM). The SIM provides personal mobility, so that the user can have access to subscribed services irrespective of a specific terminal. By inserting the SIM card into another GSM terminal, the user is able to receive calls at that terminal, make calls from that terminal, and receive other subscribed services.
The mobile equipment is uniquely identified by the International Mobile Equipment Identity (IMEI). The SIM card contains the International Mobile Subscriber Identity (IMSI) used to identify the subscriber to the system, a secret key for authentication, and other information. The IMEI and the IMSI are independent, thereby allowing personal mobility. The SIM card may be protected against unauthorized use by a password or personal identity number.  
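For illustration, an IMSI is made up of a 3-digit mobile country code (MCC), a 2- or 3-digit mobile network code (MNC) and the subscriber number (MSIN). The sketch below assumes a 2-digit MNC for simplicity; in reality the MNC length depends on the country and operator.

# Rough structure of an IMSI: MCC (3 digits) + MNC (2 or 3 digits) + MSIN.
# The 2-digit MNC split below is a simplifying assumption for illustration.
def split_imsi(imsi, mnc_digits=2):
    if not (imsi.isdigit() and len(imsi) <= 15):
        raise ValueError("an IMSI is at most 15 decimal digits")
    mcc = imsi[:3]
    mnc = imsi[3:3 + mnc_digits]
    msin = imsi[3 + mnc_digits:]
    return mcc, mnc, msin

print(split_imsi("262011234567890"))  # ('262', '01', '1234567890')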
 
 
CDMA :
One of the biggest drawbacks of CDMA mobile stations is the absence of a SIM card. As a result, a user's identity is tied to the handset.
The Electronic Serial Number (ESN) uniquely identifies the mobile equipment; it is a 32-bit number assigned by the mobile station manufacturer.
An IMSI and an ESN are linked in the operator's database to uniquely identify a subscriber.

Cell Design


 
In CDMA, the same 1.25 MHz wideband channel may be reused in all the cells. Adjacent cells may therefore use the same frequency, giving a frequency reuse factor of 1. This greatly simplifies frequency planning.
In GSM, on the other hand, the frequencies used in one cell cannot be reused in adjacent cells. Hence, frequency assignments in each cell have to be planned carefully to avoid interference from adjacent cells.
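A rough back-of-the-envelope comparison, assuming an illustrative 10 MHz allocation and a reuse factor of 4 for GSM (real networks differ), shows how reuse factor 1 affects the number of carriers available per cell:

# Back-of-the-envelope effect of the frequency reuse factor on carriers per
# cell. The 10 MHz allocation and the GSM reuse factor of 4 are illustrative
# assumptions, not values from any specific operator.
ALLOCATION_MHZ = 10.0

gsm_carriers_total = ALLOCATION_MHZ * 1000 / 200        # 200 kHz carriers
gsm_carriers_per_cell = gsm_carriers_total / 4          # reuse factor 4
cdma_carriers_per_cell = ALLOCATION_MHZ * 1000 / 1250   # 1.25 MHz, reuse 1

print(f"GSM : {gsm_carriers_per_cell:.1f} carriers per cell")
print(f"CDMA: {cdma_carriers_per_cell:.1f} carriers per cell")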
 

Base Station Sub-System (BSS):

 
An important component of the BSS, considered part of the BTS in the canonical GSM architecture, is the TRAU, or Transcoder/Rate Adapter Unit. The TRAU is the equipment in which the GSM-specific speech encoding and decoding is carried out, as well as rate adaptation in the case of data. Although the GSM specifications consider the TRAU part of the BTS, it can be sited away from the BTS, and in many cases it is actually placed between the BSC and the MSC. Placing the TRAU close to the MSC saves a great deal of capacity on the 64 kbps links between the BSC and the MSC, since compressed speech rather than full-rate PCM is carried over them.
In CDMA, the equivalent equipment is the vocoder, which is considered part of the BSC.
Another key difference in the BSS is that the CDMA BSS obtains time synchronization between the various network elements from GPS, whereas in GSM it is controlled by the MSC over the MSC-BSS interface.

Radio Interface Differences

 
The radio interface in the wireless systems provides the link between the fixed infrastructure of different operators and the mobile station of various manufacturers. 
The radio interface serves two main functions:
  • To transport user information, both speech and data – bi-directional.
  • To exchange signaling information between the mobile station and the network.

Uplink and Downlink differences:

 
The radio link directed from the mobile station to the network is called the uplink. This is also referred to as the reverse link in CDMA networks.
The radio link directed from network to the mobile station is called the downlink. This is referred to as the forward link in the CDMA networks. 
 
 
 
Channels are used in pairs for full-duplex communication. Thus, GSM uses both the uplink and downlink bands of a given spectrum allocation.
In other words, a physical channel refers to a pair of frequencies used for a cellular radio talk path: one is used for the cell-site-to-mobile transmission while the other is used for the mobile-to-cell-site transmission.
A GSM signal requires a channel spacing of 200 kHz.
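As a worked example of this pairing, the standard primary GSM900 channel numbering maps each ARFCN to an uplink carrier and a downlink carrier 45 MHz above it; a quick sketch:

# Primary GSM900 channel numbering: each ARFCN n (1..124) maps to a paired
# uplink/downlink carrier separated by the 45 MHz duplex spacing, with
# 200 kHz between adjacent carriers.
def gsm900_pair(arfcn):
    if not 1 <= arfcn <= 124:
        raise ValueError("primary GSM900 ARFCN must be in 1..124")
    uplink = 890.0 + 0.2 * arfcn      # MHz, mobile -> base station
    downlink = uplink + 45.0          # MHz, base station -> mobile
    return round(uplink, 1), round(downlink, 1)

print(gsm900_pair(1))    # (890.2, 935.2)
print(gsm900_pair(124))  # (914.8, 959.8)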
In CDMA two types of PN codes are used for differentiating the forward and the reverse links. 
Short Codes 
These PN codes are generated with a register of length 15. The length of the code is 2^15 (32,768) chips. Generated at a rate of 1.2288 Mchip/s, these codes repeat every 26.67 msec. Each base station generates the short code with a different offset, and that offset identifies the base station.
 
 
 Long Code
There is only one long code; it is defined in the standard and is used by all IS-95 and cdma2000 systems. The long PN code is generated with a register of length 42. Generated at a rate of 1.2288 Mchip/s, this code repeats approximately every 41 days. In the reverse direction (mobile to base station), the long code is used for spreading and to uniquely identify each channel.  When the mobile needs to uniquely identify itself or a channel using the long code, it applies a long code mask to the long code, which results in a time-shifted version of the long code. The receiver applies the same mask to recover the data.
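The repetition periods quoted above follow directly from the register lengths and the 1.2288 Mcps chip rate; a quick sketch of the arithmetic:

# How the short- and long-code repetition periods quoted above follow from
# the register lengths and the 1.2288 Mcps chip rate.
CHIP_RATE = 1.2288e6  # chips per second

short_code_chips = 2 ** 15       # 32,768 chips
long_code_chips = 2 ** 42 - 1    # maximal-length sequence from a 42-bit register

print(f"short code: {short_code_chips / CHIP_RATE * 1e3:.2f} ms")     # about 26.67 ms
print(f"long code : {long_code_chips / CHIP_RATE / 86400:.1f} days")  # about 41 days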

Logical Channel differences

Both GSM and the CDMA networks have a lot of similarities in the way the logical channels are defined. 
In brief, both networks have:
  • A channel used by the mobile to acquire the system: the Pilot channel in CDMA and the FCCH in GSM.
  • A channel used by the mobile to synchronize to the network: the Sync channel in CDMA and the SCH in GSM.
  • A channel to broadcast system-wide information and to page the mobile for terminating calls: in GSM this is done by two channels, the BCCH and the PCH, whereas in CDMA a single Paging channel does both.
  • Traffic channels.
 
The diagrams below show the logical channel structures of both the CDMA and GSM networks.
 
 
 
 
 
 
The major difference between the GSM and the CDMA logical channels is how they are identified. In GSM each logical channel is pre-assigned to a particular time slot, while in CDMA each is identified by a pre-assigned Walsh code.
Likewise for the traffic channel: during call setup in GSM the mobile is assigned a time slot, whereas in CDMA it is assigned a particular Walsh code.
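For illustration, Walsh codes can be generated with the standard Hadamard recursion; the orthogonality of distinct codes is what lets a CDMA receiver pull its assigned channel out of the composite signal. A small sketch using NumPy:

# Walsh codes via the Hadamard recursion; distinct rows are mutually
# orthogonal, which is what lets a CDMA receiver separate channels that are
# transmitted on top of each other.
import numpy as np

def walsh_matrix(order):
    """Return a (2^order x 2^order) matrix of +/-1 Walsh codes."""
    h = np.array([[1]])
    for _ in range(order):
        h = np.block([[h, h], [h, -h]])
    return h

w = walsh_matrix(6)        # 64 codes of length 64, as in IS-95
print(int(w[3] @ w[3]))    # 64  (a code correlated with itself)
print(int(w[3] @ w[17]))   # 0   (distinct codes are orthogonal)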

Call Processing

 
Both GSM and CDMA networks have similar call setup flows for call origination, call termination and location management.
The major difference is that CDMA networks support both hard handoff and soft handoff, whereas GSM networks have only hard handoffs.
Another major difference is how both these networks handle the Near-Far effect.
In GSM, a time slot is allocated to the mobile during a call; as the mobile moves farther from the base station, its round-trip delay increases and its bursts tend to drift into another user's time slot. To avoid this, GSM networks use the timing advance feature.
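As a rough illustration (using the nominal GSM bit period of about 3.69 microseconds and the standard 0-63 timing advance range), the timing advance needed at a given distance can be estimated from the round-trip propagation delay:

# Rough GSM timing-advance estimate from the mobile's distance: the round-trip
# delay is expressed in bit periods (about 3.69 us each); the standard range
# is 0..63 steps, which is what limits a normal cell to roughly 35 km.
C = 3.0e8               # m/s
BIT_PERIOD = 48 / 13e6  # s, GSM bit period (about 3.69 us)

def timing_advance(distance_m):
    steps = round(2 * distance_m / C / BIT_PERIOD)
    return min(max(steps, 0), 63)

for d in (500, 5_000, 35_000):
    print(f"{d:>6} m -> TA {timing_advance(d)}")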
Similarly, in CDMA networks the mobiles within a cell are at different radial distances from the base station. If all mobiles transmit at equal power, the level received at the base station differs from one mobile to another: mobiles that are nearer are received at significantly higher power than those farther away. Because the transmission loss is higher for distant mobiles, mobiles near the base station can cause excessive interference to them at the receiver (the near-far effect). CDMA networks solve this problem with power control during the call.
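A toy sketch of the near-far effect, assuming a simple distance-power law with an illustrative path-loss exponent and illustrative power levels, shows how unequal the received levels are without power control, and what transmit powers ideal power control would command:

# Toy near-far illustration: with equal transmit power the near mobile swamps
# the far one at the base station; ideal power control equalises the received
# levels. The path-loss exponent of 3.5 and the power values are illustrative.
import math

PATHLOSS_EXP = 3.5
TARGET_RX_DBM = -100.0

def path_loss_db(distance_m):
    return 10 * PATHLOSS_EXP * math.log10(distance_m)

for d in (100, 2_000):
    rx_fixed = 23.0 - path_loss_db(d)             # everyone transmits 23 dBm
    tx_controlled = TARGET_RX_DBM + path_loss_db(d)
    print(f"{d:>5} m: fixed-power rx {rx_fixed:6.1f} dBm, "
          f"power-controlled tx {tx_controlled:6.1f} dBm")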
 
 

Evolution to 3G

The diagram below shows the 3G evolution paths taken by each network. 
 
Here is a brief summary of the changes involved in the evolution of each network.
 
  
GSM to GPRS:
New additions: Packet core network nodes – SGSN and GGSN.
Modifications: BSC hardware and software
No Changes:  Circuit core network (MSC, HLR, AuC), Air Interface (MS-BTS) and A-Interface (BSC-MSC)
The diagram below shows a 2.5G GSM/GPRS network.
 
 
GSM/GPRS to UMTS:
New additions: WCDMA Air Interface (UE-Node B), RAN Interfaces Iub (Node B-RNC) and Iur (RNC-RNC), CN Interface Iu (MSC-RNC & SGSN-RNC)
Modifications: MSC and SGSN for Iu Interface.
No Changes:  Circuit core network (HLR, AuC), Packet Core Network (GGSN) 
The diagram below shows the UMTS network. 
 
IS-95 to CDMA2000:
New additions: Packet core network (PDSN, AAA, HA/FA), New Interface R-P (PDSN – BSS)
Modifications: Air Interface (MS-BSS), Network Interface (BSC- MSC)
No Changes:  Circuit core network (HLR, AC)  
The diagram below shows the CDMA2000 network. 

Conclusion:

This paper has tried to capture the technical differences between the world's two biggest wireless technologies, GSM and CDMA. From the practical deployment point of view, GSM has captured Europe, Asia and Africa, whereas CDMA has been deployed in the Americas and some parts of Asia such as Japan and Korea.
 
 
References:
  • http://www.arcx.com/sites/index.htm
  • GSM Wireless Networks – Nortel Networks Training Division
  • IS-95 Overview – Award Solutions
  • www.gmsworld.com
  • The GSM System for Mobile Communications – Michel Mouly
  • Introduction to 3G Mobile Communications – Juha Korhonen
 
 
 
 
 
DIFFERENCE BETWEEN CDMA AND TDMA
 
Last Updated: 15-Apr-2004
NOTE: During this discussion I will use the generic term "CDMA" to refer to the IS-95B standard. Technically speaking, CDMA is only a means to transmit bits of information, while IS-95B is a transmission protocol that employs CDMA. You may also hear the term "TDMA" used to refer generically to the IS-136 standard. Once again, TDMA is only a method of transmitting bits, while IS-136 is a protocol that happens to employ TDMA.

I spend quite a bit of time reading the messages that flow through the various PCS newsgroups and forums on the Internet, and if one thing is abundantly clear, it is that people don't seem to know the true differences between CDMA and TDMA. And who could blame them? There is so much hype surrounding these two competing technologies that it is difficult for a regular PCS subscriber to know who is telling the truth.

I personally am NOT an RF engineer, nor do I work for any of the cellular or PCS companies. It is however my hobby to keep up with the latest developments in mobile communication (as this web site amply demonstrates). I would like to clear the air by interjecting my own spin on this debate. I hope that by the time you finish reading this editorial you will have a better understanding of the true strengths and weaknesses of both technologies.

The Basics

Let's begin by learning what these two acronyms stand for. TDMA stands for "Time Division Multiple Access", while CDMA stands for "Code Division Multiple Access". Three of the four words in each acronym are identical, since each technology essentially achieves the same goal, but by using different methods. Each strives to better utilize the radio spectrum by allowing multiple users to share the same physical channel. You heard that right. More than one person can carry on a conversation on the same frequency without causing interference. This is the magic of digital technology.

Where the two competing technologies differ is in the manner in which users share the common resource. TDMA does it by chopping up the channel into sequential time slices. Each user of the channel takes turns transmitting and receiving in a round-robin fashion. In reality, only one person is actually using the channel at any given moment, but he or she only uses it for short bursts. He then gives up the channel momentarily to allow the other users to have their turn. This is very similar to how a computer with just one processor can seem to run multiple applications simultaneously.

CDMA, on the other hand, really does let everyone transmit at the same time. Conventional wisdom would lead you to believe that this is simply not possible. Using conventional modulation techniques, it most certainly is impossible. What makes CDMA work is a special type of digital modulation called "Spread Spectrum". This form of modulation takes the user's stream of bits and splatters them across a very wide channel in a pseudo-random fashion. The "pseudo" part is very important here, since the receiver must be able to undo the randomization in order to collect the bits together in a coherent order.

If you are still having trouble understanding the differences though, perhaps this analogy will help you. This is my own version of an excellent analogy provided by Qualcomm:

Imagine a room full of people, all trying to carry on one-on-one conversations. In TDMA each couple takes turns talking. They keep their turns short by saying only one sentence at a time. As there is never more than one person speaking in the room at any given moment, no one has to worry about being heard over the background din. In CDMA each couple talks at the same time, but they all use a different language. Because none of the listeners understand any language other than that of the individual to whom they are listening, the background din doesn't cause any real problem.

Voice Encoding

At this point many people confuse two distinctly different issues involved in the transmission of digital audio. The first is the WAY in which the stream of bits is delivered from one end to the other. This part of the "air interface" is what makes one technology different from another. The second is the compression algorithm used to squeeze the audio into as small a stream of bits as possible.

This latter component is known as the "Voice Coder", or Vocoder for short. Another term commonly used is CODEC, a word built the same way as "modem": it combines the terms "COder" and "DECoder". Although each technology has chosen its own unique CODECs, there is no rule saying that one transmission method needs to use a specific CODEC. People often lump a technology's transmission method with its CODEC as though they were single entities. We will discuss CODECs in greater detail later on in this article.

Voice encoding schemes differ slightly in their approach to the problem. Because of this, certain types of human voice work better with some CODECs than they do with others. The point to remember is that all PCS CODECs are compromises of some sort. Since human voices have such a fantastic range of pitch and tonal depth, one cannot expect any single compromise to handle each one equally well. This inability to cope with all types of voice at the same level does lead some people to choose one technology over another.

All of the PCS technologies try to minimize battery consumption during calls by keeping the transmission of unnecessary data to a minimum. The phone decides whether or not you are presently speaking, or if the sound it hears is just background noise. If the phone determines that there is no intelligent data to transmit it blanks the audio and it reduces the transmitter duty cycle (in the case of TDMA) or the number of transmitted bits (in the case of CDMA). When the audio is blanked your caller would suddenly find themselves listening to "dead air", and this may cause them to think the call has dropped.

To avoid this psychological problem many service providers insert what is known as "Comfort Noise" during the blanked periods. Comfort Noise is synthesized white noise that tries to mimic the volume and structure of the real background noise. This fake background noise assures the caller that the connection is alive and well.
However, in newer CODECs such as EVRC (used exclusively on CDMA systems) the background noise is generally suppressed even while the user is talking. This piece of magic makes it sound as though the cell phone user is not in a noisy environment at all. Under these conditions, Comfort Noise is neither necessary nor desirable.

CDMA

Now that we have a rudimentary understanding of the two technologies, let's try and examine what advantages they provide. We'll begin with CDMA, since this newer technology has created the greatest "buzz" in the mobile communications industry.

One of the terms you'll hear in conjunction with CDMA is "Soft Handoff". A handoff occurs in any cellular system when your call switches from one cell site to another as you travel. In all other technologies this handoff occurs when the network informs your phone of the new channel to which it must switch. The phone then stops receiving and transmitting on the old channel, and it commences transmitting and receiving on the new channel. It goes without saying that this is known as a "Hard Handoff".

In CDMA however, every site is on the SAME frequency. In order to begin listening to a new site the phone only needs to change the pseudo-random sequence it uses to decode the desired data from the jumble of bits sent for everyone else. While a call is in progress the network chooses two or more alternate sites that it feels are handoff candidates. It simultaneously broadcasts a copy of your call on each of these sites. Your phone can then pick and choose between the different sources for your call, and move between them whenever it feels like it. It can even combine the data received from two or more different sites to ease the transition from one to the other.

This arrangement therefore puts the phone in almost complete control of the handoff process. Such an arrangement should ensure that there is always a new site primed and ready to take over the call at a moment's notice. In theory, this should put an end to dropped calls and audio interruptions during the handoff process. In practice it works quite well, but dropped calls are still a fact of life in a mobile environment. However, CDMA rarely drops a call due to a failed handoff.
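In IS-95 terms, the phone maintains an "active set" of pilots, adding sites whose pilots rise above an add threshold and dropping those that fade below a drop threshold. The following is a deliberately simplified sketch of that idea; real systems also use drop timers and network confirmation, and the threshold and measurement values here are illustrative only.

# Simplified active-set logic in the spirit of IS-95 soft handoff: pilots
# stronger than T_ADD join the active set, pilots weaker than T_DROP leave it.
# Threshold values and pilot measurements are illustrative only.
T_ADD_DB = -14.0   # pilot Ec/Io needed to add a site
T_DROP_DB = -16.0  # pilot Ec/Io below which a site is removed

def update_active_set(active, pilots):
    updated = set(active)
    for site, ec_io in pilots.items():
        if ec_io >= T_ADD_DB:
            updated.add(site)          # strong enough: carry the call here too
        elif site in updated and ec_io < T_DROP_DB:
            updated.discard(site)      # faded away: stop combining this site
    return updated

active = {"site-A"}
measurements = {"site-A": -15.2, "site-B": -12.5, "site-C": -19.0}
print(sorted(update_active_set(active, measurements)))  # site-A kept, site-B added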

A big problem facing CDMA systems is channel pollution. This occurs when signals from too many base stations are present at the subscriber's phone, but none is dominant. When this situation occurs the audio quality degrades rapidly, even when the signal seems otherwise very strong. Pollution occurs frequently in densely populated urban environments where service providers must build many sites in close proximity. Channel pollution can also result from massive multipath problems caused by many tall buildings. Taming pollution is a tuning and system design issue. It is up to the service provider to reduce this phenomenon as much as possible.

In defense of CDMA however, I should point out that the new EVRC CODEC is far more robust than either of the earlier CODECs. Because of its increased robustness it provides much more consistent audio in the face of high frame error rates. EVRC is an 8 kilobit CODEC that provides audio quality almost as good as the older 13 kilobit CODEC. Since CDMA consumes only as much of the "ether" as a user talks, switching everyone to an 8 kilobit CODEC was an inevitable move.
Don't confuse EVRC with the old (and unlamented) 8 kilobit CODEC implemented in the early days of CDMA deployment. That CODEC was simply awful, and very few good things could be said about it. EVRC is a far more advanced compression algorithm that cleans up many of the stability problems inherent in the two older CODECs. The sound reproduction is slightly muddier than the 13 kilobit CODEC, but the improvement in stability makes up for this.

Supporters often cite capacity as one of CDMA's biggest assets. Virtually no one disagrees that CDMA has a very high "spectral efficiency". It can accommodate more users per MHz of bandwidth than any other technology. What experts do not agree upon is by how much. Unlike other technologies, in which the capacity is fixed and easily computed, CDMA has what is known as "Soft Capacity". You can always add just one more caller to a CDMA channel, but once you get past a certain point you begin to pollute the channel such that it becomes difficult to retrieve an error-free data stream for any of the participants.

The ultimate capacity of a system is therefore dependent upon where you draw the line. How much degradation is a carrier willing to subject their subscribers to before they admit that they have run out of useable capacity? Even if someone does set a standard error rate at which these calculations are made, it does not mean that you personally will find the service particularly acceptable at that error rate.
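One common back-of-the-envelope way to see why the limit is "soft" is the textbook uplink pole-capacity approximation (not a figure from this article): the supportable number of users depends on the Eb/No target, the voice activity factor and how much other-cell interference you tolerate, all of which are tunable assumptions.

# Back-of-the-envelope CDMA uplink "pole capacity" (a textbook approximation,
# not a figure from this article): users per carrier per cell depends on the
# processing gain W/R, the Eb/No target, the voice activity factor and the
# other-cell interference ratio, which is why the limit is "soft".
def pole_capacity(chip_rate, bit_rate, ebno_db, activity=0.4, other_cell=0.6):
    processing_gain = chip_rate / bit_rate
    ebno = 10 ** (ebno_db / 10)
    return processing_gain / (ebno * activity * (1 + other_cell))

# Illustrative IS-95-like numbers: 1.2288 Mcps, 9.6 kbps vocoder, 7 dB Eb/No.
print(f"{pole_capacity(1.2288e6, 9.6e3, 7.0):.0f} users per carrier per cell")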
TDMA

Let's move away from CDMA now and have a look at TDMA. Before we can go any further though, I should note that there are actually three different flavors of TDMA in the PCS market. Each of these technologies implements TDMA in a slightly different way. The most complex implementation is, without a doubt, GSM. It overlays the basic TDMA principles with many innovations that reduce the potential problems inherent in the system.

To reduce the effects of co-channel interference, multipath, and fading, the GSM network can use something known as Frequency Hopping. This means that your call literally jumps from one channel to another at fairly short intervals. By doing this the likelihood of a given RF problem is randomized, and the effects are far less noticeable to the end user. Frequency Hopping is always available, but not mandated. This means that your GSM provider may or may not use it.

iDEN is a proprietary Motorola technology that no other company seems to participate in. Only Motorola makes iDEN phones, and only Motorola makes iDEN infrastructure equipment. Perhaps the company guards its technology on purpose. iDEN was initially deployed as an alternative to standard packet radio systems commonly used by public safety and business users. However, it also provided phone interconnect services that are indistinguishable from the phone services offered by the other PCS systems, as well as packet data services for web browsing and hooking up your laptop to the Internet.
Finally there is the old IS-136 technology, but this is now an officially dead technology. All of the North American providers who used it (Rogers, Cingular, and AT&T) are abandoning it in favor of GSM. The same is happening in other parts of the world where IS-136 was used. I therefore will not spend much time talking about this variation of TDMA.

Each of these TDMA technologies uses a different CODEC. GSM sports a CODEC called EFR (short for Enhanced Full Rate). This CODEC is arguably the best sounding one available in the PCS world. IS-136 used to sound horrible, but in the fall of 1997 they replaced their old CODEC with a new one. This new CODEC sounds much better than the old, but it doesn't quite match the GSM and CDMA entries.

TDMA systems still rely on the switch to determine when to perform a handoff. Unlike the old analog system however, the switch does not do this in a vacuum. The TDMA handset constantly monitors the signals coming from other sites, and it reports this information to the switch without the caller being aware of it. The switch then uses this information to make better handoff choices at more appropriate times.
Perhaps the most annoying aspect of TDMA systems to some people is the obviousness of handoffs. Some people don't tend to hear them, and I can only envy those individuals. Those of us who are sensitive to the slight interruptions caused by handoffs will probably find GSM the most frustrating. Its handoffs are by far the messiest. When handoffs occur infrequently (such as when we are stationary or in areas with few sites), they really don't present a problem at all. However, when they occur very frequently (while travelling in an area with a huge number of sites) they can become annoying.

Spectral Efficiency

Channel capacity in a TDMA system is fixed and indisputable. Each channel carries a finite number of "slots", and you can never accommodate a new caller once each of those slots is filled. Spectral efficiency varies from one technology to another, but computing a precise number is still a contentious issue. For example, GSM provides 8 slots in a channel 200 kHz wide, while iDEN provides 3 slots in a channel only 25 kHz wide. GSM therefore consumes 25 kHz per user, while iDEN consumes only 8.333 kHz per user. When Direct Connect is used on iDEN, 6 users can be stuffed into a single channel, thus only 4.166 kHz is consumed per user. There is also a new 6:1 interconnect CODEC coming for iDEN which will allow 6 phone users per channel.
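The per-user figures above are just the carrier width divided by the number of users per carrier; a quick sketch reproduces them:

# Reproducing the per-user bandwidth figures above: carrier width divided by
# the number of users (slots) per carrier.
systems = {
    "GSM":                   (200.0, 8),   # kHz per carrier, users per carrier
    "iDEN (interconnect)":   (25.0,  3),
    "iDEN (Direct Connect)": (25.0,  6),
}

for name, (carrier_khz, users) in systems.items():
    print(f"{name:<22} {carrier_khz / users:6.3f} kHz per user")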

One would be sorely tempted to proclaim that iDEN has 3 to 6 times the capacity of GSM. In a one-cell system this is certainly true, but once we start deploying multiple cells and channel reuse the situation becomes more complex. Due to GSM's better error management and frequency hopping the interference of a co-channel site is greatly reduced. This allows frequencies to be reused more frequently without a degradation in the overall quality of the service.

Capacity is measured in "calls per cell per MHz". For a GSM system using N=4 reuse (this means you have 4 different sets of frequencies to spread out around town) the figure is 5.0. We get an efficiency value of 6.6 fo