2. UNIT I SAMPLING & QUANTIZATION
Low pass sampling – Aliasing- Signal Reconstruction-Quantization -
Uniform & non-uniform quantization - quantization noise - Logarithmic
Companding of speech signal- PCM - TDM
UNIT II WAVEFORM CODING
Prediction filtering and DPCM - Delta Modulation - ADPCM & ADM
principles-Linear Predictive Coding
UNIT III BASEBAND TRANSMISSION
Properties of Line codes- Power Spectral Density of Unipolar / Polar RZ
& NRZ – Bipolar NRZ - Manchester- ISI – Nyquist criterion for
distortionless transmission – Pulse shaping – Correlative coding – M-ary
schemes – Eye pattern – Equalization
UNIT IV DIGITAL MODULATION SCHEME
Geometric Representation of signals - Generation, detection, PSD &
BER of Coherent BPSK, BFSK & QPSK - QAM - Carrier Synchronization
- structure of Non-coherent Receivers - Principle of DPSK.
UNIT V ERROR CONTROL CODING
Channel coding theorem - Linear Block codes - Hamming codes - Cyclic
codes - Convolutional codes - Viterbi Decoder
EC6501 DIGITAL COMMUNICATION
8/24/2018
3. Course Outcomes (with Highest Cognitive Level)
C301.1 Describe the concepts of sampling and quantization – K2
C301.2 Compare the various source coding techniques – K2
C301.3 Illustrate the different modulation schemes and equalization techniques – K2
C301.4 Describe the baseband transmission schemes – K2
C301.5 Examine the PSD and BER of various modulation schemes – K3
C301.6 Generate different error control codes – K3
4.
What are K-levels?
A K-level, or cognitive level, is used to classify learning
objectives according to the revised Bloom's taxonomy.
The Foundation and Advanced exams cover four
different K-levels (K1 to K4):
K1 (Remember) = The candidate should remember or
recognize a term or a concept.
K2 (Understand) = The candidate should select an
explanation for a statement related to the question topic.
K3 (Apply) = The candidate should select the correct
application of a concept or technique and apply it to a given
context.
K4 (Analyze) = The candidate can separate information
related to a procedure or technique into its constituent parts
for better understanding, and can distinguish between facts
and inferences.
5.
The Expert level exams include five different K-levels
(K2 to K6), with the two additional higher K-levels:
K5 (Evaluate) = The candidate makes judgments based on
criteria and standards, detects inconsistencies or fallacies
within a process or product, determines whether a process
or product has internal consistency, and judges the
effectiveness of a procedure as it is being implemented.
K6 (Create) = The candidate puts elements together to
form a coherent or functional whole. Typical applications are to
reorganize elements into a new pattern or structure,
devise a procedure for accomplishing some task, or invent
a product.
6.
7. Introduction to Waveform Coding
In continuous-wave (CW) modulation, which was studied in CT,
some parameter of a sinusoidal carrier wave is varied
continuously in accordance with the message signal. This is in
direct contrast to pulse modulation, which we study in this unit.
In pulse modulation, some parameter of a pulse train is
varied in accordance with the message signal. On this basis,
we may distinguish two families of pulse modulation:
1. Analog pulse modulation, in which a periodic pulse train is
used as the carrier wave and some characteristic feature of
each pulse (e.g., amplitude, duration, or position) is varied in
a continuous manner in accordance with the corresponding
sample value of the message signal. Thus, in analog pulse
modulation, information is transmitted basically in analog form
but the transmission takes place at discrete times.
8.
9. Introduction to Waveform Coding
2. Digital pulse modulation, in which the message
signal is represented in a form that is discrete in
both time and amplitude, thereby permitting
transmission of the message in digital form as a
sequence of coded pulses; this form of signal
transmission has no CW counterpart.
10.
11.
12.
13. There is no free lunch
For every gain we make, there is a price to pay.
DPCM exploits lossy data compression to remove the
redundancy inherent in a message signal, such as voice or
video, so as to reduce the bit rate of the transmitted data
without serious degradation in overall system response.
In effect, increased system complexity is traded off for
reduced bit rate, thereby reducing the bandwidth
requirement of PCM.
Differential Pulse Code Modulation
(DPCM)
14.
15.
16. Delta modulation (DM)
Delta modulation (DM) addresses another
practical limitation of PCM: the need for simplicity of
implementation.
DM satisfies this requirement by intentionally
“oversampling” the message signal.
In effect, increased transmission bandwidth is traded
off for reduced system complexity. DM may
therefore be viewed as the dual of DPCM.
17. PCM, DPCM and DM
1. PCM is robust but demanding in both
transmission bandwidth and computational
requirements.
2. DPCM, which provides a method for the
reduction in transmission bandwidth but at the
expense of increased computational complexity.
3. DM, which is relatively simple to implement but
requires a significant increase in transmission
bandwidth.
18. In the use of PCM for the digitization of voice or video, the
signal is sampled at a rate slightly higher than the Nyquist
rate.
The resulting sampled signal is then found to exhibit a high
correlation between adjacent samples.
The meaning of this high correlation is that the signal does
not change rapidly from one sample to the next with the result
that the difference between samples has a variance that is
smaller than the variance of the signal itself.
When these highly correlated samples are encoded, as in PCM,
the resulting encoded signal contains redundant information.
By removing this redundancy before encoding, we obtain a
more efficient coded signal.
Differential Pulse Code Modulation
(DPCM)
19. If we know the past behaviour of the signal up to a
certain point in time, it is possible to make some
inference about its future values.
Assume that the baseband signal x(t) is sampled at a
rate fs = 1/Ts, to produce a sequence of correlated
samples Ts seconds apart.
Let the sequence be denoted by {x(nTs)}, where n
takes on integer values.
The fact that it is possible to predict future values of
the signal x(t) provides motivation for the differential
quantization.
Differential Pulse Code Modulation
(DPCM)
20.
21. Notation for the DPCM transmitter:
x(nTs) is the sampled input
x̂(nTs) is the predicted sample
e(nTs) is the difference between the sampled input and
the predicted sample, often called the prediction error
v(nTs) is the quantizer output
u(nTs) is the predictor input, which is the sum of the
predicted sample and the quantizer output
The predictor produces the predicted samples from the
previous outputs of the transmitter circuit.
The quantizer output is represented as
v(nTs) = Q[e(nTs)]
= e(nTs) + q(nTs)
where q(nTs) is the quantization error.
22. Differential Pulse Code Modulation
(DPCM)
u(nTs) = x̂(nTs) + v(nTs) = x(nTs) + q(nTs) is the quantized
version of the input signal x(nTs).
The quantized signal u(nTs) at the predictor input
differs from the original input signal x(nTs) only by the
quantization error q(nTs).
If the prediction is good, the variance of the prediction
error e(nTs) will be smaller than the variance of the
input signal x(nTs).
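The DPCM feedback loop above can be sketched in a few lines of Python. This is a minimal illustration, assuming a first-order (previous-sample) predictor and an unbounded uniform quantizer; the function and variable names are our own, not from a standard.

```python
def dpcm_encode_decode(x, step=0.1):
    """DPCM loop with a previous-sample predictor: the predictor input
    u(nTs) = x^(nTs) + v(nTs) is fed back, so encoder and decoder track
    the same reconstruction."""
    u_prev = 0.0                       # predictor memory, u(nTs - Ts)
    codes, recon = [], []
    for sample in x:
        pred = u_prev                  # x^(nTs): prediction of the input
        e = sample - pred              # e(nTs): prediction error
        v = step * round(e / step)     # v(nTs) = Q[e(nTs)] = e(nTs) + q(nTs)
        u_prev = pred + v              # u(nTs) = x(nTs) + q(nTs)
        codes.append(v)
        recon.append(u_prev)
    return codes, recon

def variance(seq):
    m = sum(seq) / len(seq)
    return sum((s - m) ** 2 for s in seq) / len(seq)

# A slowly varying signal: adjacent samples are highly correlated, so the
# prediction error has a much smaller variance than the signal itself.
x = [(n / 100.0) ** 2 for n in range(200)]
codes, recon = dpcm_encode_decode(x)
pred_error = [xi - ui for xi, ui in zip(x, [0.0] + recon[:-1])]
```

Because the quantized error is added back to the prediction, the reconstruction never drifts more than half a quantizer step from the input, while the prediction error carries far less variance than the signal.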
23. Differential Pulse Code Modulation
(DPCM)
The prediction gain is Gp = σx²/σe², the ratio of the variance
of the input signal to the variance of the prediction error.
Gp > 1 represents the gain in SNR due to DPCM. For a given
baseband signal, the variance of the signal is fixed, so
Gp is maximized by minimizing the variance of the
prediction error. Accordingly, our objective should
be to design the predictor so as to minimize the
prediction-error variance.
24. Delta Modulation (DM)
The exploitation of signal correlations in DPCM
suggests the possibility of oversampling a
baseband signal (i.e., sampling at a rate much higher
than the Nyquist rate) purposely to increase the
correlation between adjacent samples of the
signal, so as to permit the use of a simple
quantizing strategy for constructing the encoded
signal.
Delta modulation (DM), which is the one-bit (or
two-level) version of DPCM, is precisely such a
scheme.
In its basic form, DM provides a staircase
approximation to the oversampled version of the
message signal.
26. Delta Modulation (DM)
The difference between the input and the approximation is quantized
into only two levels, namely ±δ, corresponding to positive and
negative differences, respectively.
If the approximation falls below the signal at any
sampling time, it is increased by δ; otherwise it is
diminished by δ.
δ denotes the absolute value of the two representation
levels of the one-bit quantizer used in the DM.
The step size ∆ of the quantizer is related to δ by
∆ = 2δ
27.
28. Delta Modulation (DM)
e(nTs) is a prediction error representing the
difference between the present sample value of
the input signal and the latest approximation of
it: e(nTs) = x(nTs) − u(nTs − Ts).
The binary quantity b(nTs) is the algebraic sign
of the error, scaled by δ: b(nTs) = δ sgn[e(nTs)].
Indeed, b(nTs) is the one-bit word transmitted by
the DM system.
The DM modulator consists of a summer, a two-level
quantizer, and an accumulator. The accumulator is
initially set to zero; at each step it adds the quantizer
output to its previous value: u(nTs) = u(nTs − Ts) + b(nTs).
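The three DM relations above can be turned into a short simulation. This is a minimal sketch, assuming a sinusoid oversampled enough that δ/Ts exceeds the maximum signal slope (so no slope overload occurs); the names and parameter values are illustrative.

```python
import math

def dm_encode(x, delta):
    """Delta modulator: e(n) = x(n) - u(n-1); transmit the sign bit;
    accumulate u(n) = u(n-1) +/- delta."""
    u, bits = 0.0, []
    for sample in x:
        b = 1 if sample - u >= 0 else 0   # sign of the prediction error
        u += delta if b else -delta       # staircase approximation u(nTs)
        bits.append(b)
    return bits

def dm_decode(bits, delta):
    """Receiver accumulator (a low-pass filter would follow in practice)."""
    u, out = 0.0, []
    for b in bits:
        u += delta if b else -delta
        out.append(u)
    return out

# fs = 8 kHz, f0 = 50 Hz, A = 1: the slope per sample is at most
# 2*pi*f0*A/fs ~ 0.039 < delta = 0.05, so the staircase can keep up.
fs, f0, A, delta = 8000, 50, 1.0, 0.05
x = [A * math.sin(2 * math.pi * f0 * n / fs) for n in range(400)]
staircase = dm_decode(dm_encode(x, delta), delta)
```

With no slope overload, the staircase stays within a couple of step sizes of the input; only granular noise remains.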
29. Delta Modulation (DM)
At each sampling instant, the accumulator increments the
approximation to the input signal by ±δ, depending upon the
binary output of the modulator.
In the receiver, the staircase approximation u(t) is
reconstructed by passing the incoming sequence of positive
and negative pulses through an accumulator.
DM Features
A one-bit codeword for the output, which eliminates the need
for word framing
Simplicity of design for both transmitter and receiver
30. Delta Modulation (DM)
QUANTIZATION
NOISE
Delta modulation systems are subject to two types of
quantization error (1) slope- overload distortion and (2)
granular noise. Let q(nTs) denote the quantization error.
Except for quantization error, the quantizer input is a first
backward difference of the input signal, which may be viewed
as a digital approximation to the derivative of the input signal.
In order for the sequence of samples {u(nTs)} to increase as
fast as the input sequence of samples {x(nTs)} in a region of
maximum slope of x(t), we require that the condition
δ/Ts ≥ max |dx(t)/dt| be satisfied.
SLOPE- OVERLOAD DISTORTION
31. Delta Modulation (DM)
Quantization
Noise
Otherwise, the step size ∆ = 2δ is too small for the staircase
approximation u(t) to follow a steep segment of the input
waveform x(t), with the result that u(t) falls behind x(t). This
condition is called slope overload, and the resulting
quantization error is called slope-overload distortion (noise).
Since the maximum slope of the staircase approximation u(t) is fixed
by the step size ∆, increases or decreases in u(t) tend to occur along
straight lines. For this reason, delta modulator using a fixed step size
is referred to as a linear delta modulator (LDM).
32.
33. Delta Modulation (DM)
It occurs when the step size ∆ is too large relative to the local slope
characteristics of the input waveform x(t), thereby causing the
staircase approximation u(t) to hunt around a relatively flat
segment of the input waveform.
So there is a need to have a large step size to accommodate a wide
dynamic range, whereas a small step size is required for the accurate
representation of relatively low level signals.
The optimum step size that minimizes the mean square value of the
quantizing error in a linear delta modulator will be the result of a
compromise between slope overload distortion and granular noise.
GRANULAR NOISE
34.
35. Delta Modulation (DM)
We consider the effect of quantization noise under the simplifying
assumption of no slope overload, and assume sinusoidal
modulation x(t) = A sin(2πf0t).
Maximum Output SNR for Sinusoidal Modulation
The maximum slope of the signal x(t) is given by
max |dx(t)/dt| = 2πf0A.
Slope overload is avoided provided 2πf0A ≤ δ/Ts, so the maximum
permissible value of the output signal power equals
Pmax = A²/2 = δ²/(8π²f0²Ts²).
36. Delta Modulation (DM)
When there is no slope overload, the maximum quantization error is
±δ. We assume that the quantization error is uniformly distributed,
so its variance is δ²/3; after the reconstruction low-pass filter of
bandwidth W, the average output noise power is N = δ²WTs/3.
The maximum output SNR is therefore
(SNR)max = Pmax/N = (3/(8π²)) · fs³/(Wf0²)
The maximum SNR of a DM is proportional to the sampling rate
cubed, indicating a 9 dB improvement with each doubling of the
sampling rate.
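The cubic dependence on the sampling rate can be checked numerically. A small sketch of the SNR formula above; the particular fs, f0, and W values are arbitrary examples, not from the slides.

```python
import math

def dm_max_snr_db(fs, f0, W):
    """(SNR)max = 3*fs^3 / (8*pi^2 * W * f0^2) for sinusoidal modulation
    with no slope overload, expressed in dB."""
    return 10 * math.log10(3 * fs ** 3 / (8 * math.pi ** 2 * W * f0 ** 2))

# Doubling fs multiplies the SNR by 2^3 = 8, i.e. adds 10*log10(8) ~ 9 dB,
# independent of the chosen f0 and W.
gain_db = dm_max_snr_db(128000, 1000, 3500) - dm_max_snr_db(64000, 1000, 3500)
```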
37. By comparison, in the case of standard PCM, if we double the bit
rate by doubling the number of bits per sample, we achieve a 6
dB increase in SNR for each added bit.
For example, by doubling the bit rate from 40 to 80 kbits/sec, the
SNR is increased by 9 dB using DM.
On the other hand, if PCM is employed and the bit rate is doubled
by increasing the number of bits per sample from 5 to 10, the
SNR is increased by 30 dB. The increase of SNR with bit rate is
much more dramatic for PCM than for DM.
Delta Modulation (DM)
38.
39. Adaptive Delta Modulation (ADM)
NEED FOR ADAPTIVE DELTA MODULATION
To overcome the quantization errors due to slope
overload and granular noise, the step size (∆) is made
adaptive to variations in the input signal x(t).
In the steep segment of the signal, the step size is
increased.
On the other hand, if the input is varying slowly, the
step size is reduced. This method is known as
Adaptive Delta Modulation (ADM).
Adaptive delta modulators can employ either
continuous or discrete changes in step size.
41. Adaptive Delta Modulation (ADM)
ADM TRANSMITTER
The logic for step size control is added in the diagram.
The step size increases or decreases according to a
specified rule depending on one bit quantizer output.
As an example, if one bit quantizer output is high
(i.e., 1), then step size may be doubled for next
sample.
If one bit quantizer output is low, then step size may
be reduced by one step.
Figure in next slide shows, the staircase waveforms of
ADM and the sequence of bits to be transmitted.
43. Adaptive Delta Modulation (ADM)
ADM RECEIVER
In the receiver of adaptive delta modulator, there are
two portions. The first portion produces the step size
from each incoming bit.
The previous input and the present input decide the step
size. It is then applied to an accumulator which builds
up the staircase waveform.
The low pass filter then smoothens out the staircase
waveform to reconstruct the original signal.
45. Adaptive Delta Modulation (ADM)
COMMON ALGORITHM FOLLOWED
When three consecutive 1’s or 0’s occur, the step size
is increased or decreased by the factor of 1.5.
When an alternating sequence of 1’s and 0’s occurs,
the step size is reduced to its minimum.
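The rule above can be prototyped directly. A sketch, assuming the 1.5x growth factor from the slide for three repeated bits and a symmetric 1.5x shrink on alternating bits (the shrink factor is our assumption); a fixed-step DM is run alongside to show the slope-overload problem ADM solves.

```python
def adm_encode(x, step0=0.02, factor=1.5, step_min=0.02, step_max=1.0):
    """ADM sketch: grow the step after three consecutive identical bits
    (steep segment), shrink it when the bits alternate (flat segment)."""
    u, step = 0.0, step0
    bits, stair = [], []
    for sample in x:
        b = 1 if sample - u >= 0 else 0
        bits.append(b)
        if len(bits) >= 3 and bits[-1] == bits[-2] == bits[-3]:
            step = min(step * factor, step_max)       # steep: grow step
        elif len(bits) >= 2 and bits[-1] != bits[-2]:
            step = max(step / factor, step_min)       # hunting: shrink step
        u += step if b else -step
        stair.append(u)
    return bits, stair

def dm_fixed(x, delta=0.02):
    """Fixed-step DM for comparison: suffers slope overload on this ramp."""
    u, stair = 0.0, []
    for sample in x:
        u += delta if sample - u >= 0 else -delta
        stair.append(u)
    return stair

ramp = [0.05 * n for n in range(100)]   # slope 0.05/sample > fixed step 0.02
_, adm_stair = adm_encode(ramp)
dm_stair = dm_fixed(ramp)
```

On this steep ramp the fixed-step DM falls hopelessly behind, while the adaptive step grows until the staircase tracks the input.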
46. Adaptive Delta Modulation (ADM)
ADVANTAGES OF ADAPTIVE DELTA MODULATION
The signal to noise ratio becomes better than ordinary
delta modulation because of the reduction in slope
overload distortion and idle noise
Because of the variable step size, the dynamic range of
ADM is wider than simple DM.
Utilization of bandwidth is better than delta
modulation
47. Coding Speech at Low Bit Rates
The use of PCM at the standard rate of 64 kbps
demands a high channel bandwidth for its
transmission.
In certain applications, channel bandwidth is at a
premium, in which case there is a definite need for speech
coding at low bit rates, while maintaining acceptable
fidelity or quality of reproduction.
A major motivation for bit rate reduction is for secure
communication over radio channels that are
inherently of low capacity.
The fundamental limits on bit rate suggested by
speech perception and information theory show that
high quality speech coding is possible at rates
considerably less than 64 kbps (as low as 2 kbps).
48. Coding Speech at Low Bit Rates
The price that has to be paid for attaining the
advantage is increased processing complexity (and
increased cost of implementation).
In many coding schemes, increased complexity
translates into increased processing delay time.
Delay is of no concern in applications that involve
voice storage, as in “voice mail”.
For coding speech at low bit rates, a waveform coder
is optimized by exploiting both statistical
characterization of speech waveforms and properties
of hearing.
49. Coding Speech at Low Bit Rates
The design philosophy has two aims in mind:
To remove redundancies from the speech signal as far
as possible
To assign the available bits to code the non-redundant
parts of the speech signal in a perceptually efficient
manner.
To reduce the bit rate from 64 kbps to 32, 16, 8 and 4
kbps, the algorithms for redundancy removal and bit
assignment become increasingly more sophisticated.
As a rule of thumb, in the 64 to 8 kbps range, the
computational complexity required to code speech
increases by an order of magnitude when the bit rate
is halved, for approximately equal speech quality.
50. Adaptive differential PCM
(ADPCM)
Savings of bandwidth is possible by varying the
number of bits used for the difference signal
depending on its amplitude (fewer bits to encode
smaller difference signals)
An international standard for this is defined in ITU-T
Recommendation G.721.
This is based on the same principle as DPCM, except that
an adaptive predictor is used and the number of bits used
to quantize each difference is varied.
This can be either 6 bits – producing 32 kbps – to
obtain a better quality output than with third-order
DPCM, or 5 bits – producing 16 kbps – if lower
bandwidth is more important.
51. Adaptive differential PCM
(ADPCM)
A second ADPCM standard, which is a derivative of G.721, is
defined in ITU-T Recommendation G.722.
This uses sub-band coding, in which the input signal prior to
sampling is passed through two filters: one which passes only
signal frequencies in the range 50 Hz through to 3.5 kHz, and
the other only frequencies in the range 3.5 kHz through to
7 kHz.
By doing this the input signal is divided into two separate
equal-bandwidth signals, the first known as the lower sub-
band signal and the second the upper sub-band signal
Each is then sampled and encoded independently using
ADPCM, the sampling rate of the upper sub-band signal being
16 ksps to allow for the presence of the higher frequency
components in this sub-band
53. Adaptive differential PCM
(ADPCM)
The use of two sub-bands has the advantage that
different bit rates can be used for each
In general the frequency components in the lower
sub-band have a higher perceptual importance than
those in the higher sub-band
For example, with a bit rate of 64 kbps the lower
sub-band is ADPCM encoded at 48 kbps and the
upper sub-band at 16 kbps.
The two bitstreams are then multiplexed together to
produce the transmitted (64 kbps) signal – in such a
way that the decoder in the receiver is able to divide
them back again into two separate streams for
decoding
54. Adaptive Differential PCM(ADPCM)
NEED FOR ADPCM
The aim of all variants of PCM is to reduce the number
of bits used in the encoding process by removing
redundancies.
ADPCM is a scheme which permits the coding of speech
signals at 32 kbps through the combined use of
adaptive quantization and prediction.
The adaptive quantizer works with varying step size
∆(nTs) where Ts is the sampling period.
The step size is varied so as to match the variance
σx² of the input signal x(nTs).
55. Adaptive Differential PCM(ADPCM)
The estimate of σx is computed by one of two methods:
Adaptive Quantization with Forward Estimation (AQF):
unquantized samples of the input signal are used to derive
forward estimates of σx.
Adaptive Quantization with Backward Estimation (AQB):
samples of the quantizer output are used to derive
backward estimates of σx.
56. Adaptive Quantization with Forward Estimation (AQF)
57. Adaptive Differential PCM(ADPCM)
Adaptive Quantization with Forward Estimation (AQF)
The AQF scheme first goes through a learning period by
buffering unquantized samples of the input speech
signal.
The samples are released after the estimate is
obtained. The estimate is obviously independent of
quantizing noise.
The step size obtained from AQF requires the
explicit transmission of level information (typically
about 5 to 6 bits per step-size sample) to a remote
decoder, thereby burdening the system with additional
side information that has to be transmitted to the
receiver.
58. Adaptive Differential PCM(ADPCM)
Adaptive Quantization with Forward Estimation (AQF)
A processing delay (on the order of 16 ms for speech) in
the encoding operation results from the use of AQF and is
unacceptable in some applications.
The following 3 problems of AQF can be avoided in AQB
• Level transmission
• Buffering
• Delay
59. Adaptive Differential PCM(ADPCM)
Adaptive Quantization with Backward Estimation (AQB)
The following problems of AQF can be avoided in AQB
by using the recent history of quantizer output to
extract information for the computation of the step size.
• Level transmission, Buffering & Delay
Adaptive Quantization with Backward Estimation (AQB)
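The key property of AQB, that the decoder can recompute the step size from past codewords alone, can be illustrated with a one-word-memory (Jayant-style) adaptation rule. The rule, its multipliers, and the 2-bit quantizer here are common textbook choices assumed for illustration, not details from the slides.

```python
import math

def _adapt(step, code, expand=1.5, shrink=0.8, lo=1e-3, hi=10.0):
    """Next step size from the current codeword only: outer codewords
    expand the step, inner codewords shrink it."""
    step *= expand if code in (-2, 1) else shrink
    return min(max(step, lo), hi)

def aqb_encode(samples, step0=0.1):
    """2-bit quantizer with backward step adaptation: the step used for
    the next sample is derived from the codeword just emitted."""
    step, codes, levels = step0, [], []
    for s in samples:
        code = max(-2, min(1, int(math.floor(s / step))))
        codes.append(code)
        levels.append((code + 0.5) * step)   # representation level
        step = _adapt(step, code)
    return codes, levels

def aqb_decode(codes, step0=0.1):
    """Receiver replays the identical adaptation from the codewords,
    so no step-size side information is transmitted."""
    step, levels = step0, []
    for code in codes:
        levels.append((code + 0.5) * step)
        step = _adapt(step, code)
    return levels

codes, enc_levels = aqb_encode([0.03, 0.4, -0.9, 0.05, 0.02, 1.7, -0.1])
```

Because both ends run the same deterministic update, the decoder reproduces the encoder's step-size sequence exactly, which is precisely why level transmission, buffering, and delay disappear.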
60. Adaptive Differential PCM(ADPCM)
Adaptive Prediction
Speech signals are inherently non-stationary and the
ACF and PSD of speech signals are time varying
functions of their respective variables.
This implies that the design of predictors for such
inputs must also be time-varying, that is, adaptive.
As with adaptive quantization, there are two schemes
for performing adaptive prediction:
• Adaptive prediction with forward estimation (APF), in which
unquantized samples of the input signals are used to derive
the estimates of the predictor coefficients
• Adaptive prediction with backward estimation (APB), in which
samples of the quantizer output and the prediction error are
used to derive estimates of the predictor coefficients.
61. Adaptive Differential PCM(ADPCM)
Adaptive Prediction
Adaptive prediction with forward estimation (APF)
In the APF scheme, N unquantized samples of the input
speech are first buffered and then released after
computation of M predictor coefficients that are
optimized for the buffered segment of input samples.
The choice of M is a compromise between an adequate
prediction gain and the acceptable amount of side
information.
Likewise, the choice of learning period, or buffer length
N, involves a compromise between the rate at which the
speech signal changes and the rate at which information
on the predictor coefficients must be updated and
transmitted to the receiver.
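The forward computation of predictor coefficients can be sketched for M = 2: estimate autocorrelations from the buffered frame of N unquantized samples and solve the 2x2 normal (Wiener-Hopf) equations. The synthetic AR(2) test signal and the tolerances are our own illustration.

```python
import random

def autocorr(x, lag):
    """Biased autocorrelation estimate at the given lag."""
    return sum(x[i] * x[i - lag] for i in range(lag, len(x))) / len(x)

def apf_coefficients(frame):
    """Solve the M = 2 normal equations [r0 r1; r1 r0][a1 a2]^T = [r1 r2]^T
    by Cramer's rule, using only the buffered unquantized samples (APF)."""
    r0, r1, r2 = (autocorr(frame, k) for k in range(3))
    det = r0 * r0 - r1 * r1
    a1 = (r0 * r1 - r1 * r2) / det
    a2 = (r0 * r2 - r1 * r1) / det
    return a1, a2

# Synthetic "speech-like" frame: x(n) = 0.9 x(n-1) - 0.2 x(n-2) + w(n).
# The estimated coefficients should recover roughly (0.9, -0.2).
random.seed(0)
frame = [0.0, 0.0]
for _ in range(5000):
    frame.append(0.9 * frame[-1] - 0.2 * frame[-2] + random.gauss(0.0, 1.0))
a1, a2 = apf_coefficients(frame)
```

A longer buffer N gives better coefficient estimates but more delay and less frequent updates, which is exactly the compromise described above.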
63. Adaptive Differential PCM(ADPCM)
Adaptive Prediction
Adaptive prediction with forward estimation (APF)
APF suffers from the same disadvantages as AQF as
follows and they are eliminated by using APB.
• Level transmission
• Buffering
• Delay
64. Adaptive Differential PCM(ADPCM)
Adaptive Prediction
Adaptive prediction with backward estimation (APB)
In this scheme, the optimum predictor coefficients are
estimated on the basis of quantized and transmitted
data; they can therefore be updated as frequently as
desired (for example, from sample to sample), which
makes APB the preferred method of prediction.
The box labeled “logic for adaptive prediction” in figure
is intended to represent the mechanism for updating
the predictor coefficients. Let y(nTs) be quantizer
output.
66. Adaptive Differential PCM(ADPCM)
Adaptive Prediction
Adaptive prediction with backward estimation (APB)
The sample value of the predictor input is given by
u(nTs) = x̂(nTs) + y(nTs).
In the above equation, x̂(nTs) is the prediction of the speech
input sample, and y(nTs) is the prediction error as far as the
prediction process is concerned.
68. Adaptive Sub-band Coding
(ASBC)
PCM and ADPCM are both time domain coders in that
speech signal is processed in the time domain as a
single full band signal.
Sub-band coders are frequency domain coders, in which
speech signal is divided into a number of sub-bands and
each one is encoded separately.
The coder is capable of digitizing speech at a rate of 16
kb/s with a quality comparable to that of 64 kb/s PCM.
To accomplish this performance, it exploits the quasi-
periodic nature of voiced speech and a characteristic of
the hearing mechanism known as noise masking.
69. Adaptive Sub-band Coding
(ASBC)
The periodicity of voiced speech permits pitch prediction
and therefore a further reduction in the level of the
prediction error that requires quantization, compared to the
DPCM without pitch prediction.
The number of bits per sample that needs to be transmitted
is greatly reduced without a serious degradation in speech
quality.
The number of bits per sample can be reduced further by
making use of the noise masking phenomenon in perception.
That is, the human ear does not perceive noise in a given
frequency band if the noise is about 15 dB below the signal
level in that band.
This means that a relatively large coding error can be
tolerated near formants and coding rate can be reduced.
70. A formant is a concentration of acoustic energy around a particular
frequency in the speech wave. There are several formants, each at a
different frequency, roughly one in each 1000 Hz band; to put it
differently, formants occur at roughly 1000 Hz intervals. Each formant
corresponds to a resonance in the vocal tract. Formants can be seen
very clearly in a wideband spectrogram, where they are displayed as
dark bands. The arrows at F on the spectrogram point out six
instances of the lowest formant. The next formant occurs just
above these, between 1 and 2 kHz.
71.
72. Adaptive Sub-band Coding
(ASBC)
The adaptive sub-band coding scheme varies the assignment
of available bits to the various sub-bands dynamically in
accordance with the spectral content of the input speech
signal, thereby helping control the shape of the overall
quantizing noise spectrum as a function of frequency.
More representation levels are used for the lower frequency
bands where pitch and formant information have to be
preserved.
If however, high frequency energy is dominant in the input
speech signal, the scheme automatically assigns a larger
number of representation levels to the high frequency
components of the input.
The complexity of a 16 kb/s ASBC is typically 100 times that
of a 64 kb/s PCM coder for the same quality. There is a
processing delay of 25 ms in ASBC but no such delay in PCM.
73. Linear Predictive Coding (LPC)
Linear predictive coding involves the source simply
analyzing the audio waveform to determine a
selection of the perceptual features it contains.
With this type of coding, the perceptual features of an
audio waveform are analysed first.
These are then quantized and sent, and the
destination uses them, together with a sound
synthesizer, to regenerate a sound that is
perceptually comparable with the source audio
signal.
With this compression technique, although the
speech can often sound synthetic, high levels of
compression can be achieved.
74. Linear Predictive Coding (LPC)
In terms of speech, the three features which determine the
perception of a signal by the ear are its:
Pitch: this is closely related to the frequency of the signal.
This is important since the ear is more sensitive to signals
in the range 2–5 kHz.
Period: this is the duration of the signal.
Loudness: this is determined by the amount of energy in
the signal.
The input speech waveform is first sampled and quantized at a
defined rate. A block of digitized samples – known as a segment –
is then analysed to determine the various perceptual
parameters of the speech that it contains.
The output of the encoder is a string of frames, one for each
segment
75.
76. Linear Predictive Coding (LPC)
Each frame contains fields for pitch and loudness – the
period is determined by the sampling rate being used – an
indication of whether the signal is voiced (generated
through vibration of the vocal cords) or unvoiced (vocal
cords open), and a new set of computed model coefficients.
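The per-segment analysis can be sketched as follows. This is an illustrative toy, not a standard vocoder: the loudness measure, the zero-crossing threshold for the voiced/unvoiced decision, the 50–400 Hz pitch search range, and all field names are our own assumptions.

```python
import math

def analyze_segment(segment, fs):
    """Build one illustrative LPC-style frame for a digitized segment:
    loudness as signal energy, a crude voiced/unvoiced decision from
    the zero-crossing rate, and pitch from the autocorrelation peak."""
    energy = sum(s * s for s in segment)
    zero_crossings = sum(1 for a, b in zip(segment, segment[1:]) if a * b < 0)
    voiced = zero_crossings < 0.3 * len(segment)   # voiced speech crosses less
    # pitch: lag of the autocorrelation maximum over the 50-400 Hz range
    lags = range(int(fs / 400), int(fs / 50))
    best = max(lags, key=lambda L: sum(segment[i] * segment[i - L]
                                       for i in range(L, len(segment))))
    return {"loudness": energy, "voiced": voiced, "pitch_hz": fs / best}

fs = 8000
segment = [math.sin(2 * math.pi * 100 * n / fs) for n in range(320)]  # 40 ms
frame = analyze_segment(segment, fs)
```

For a 100 Hz tone the segment is classified as voiced and the detected pitch lands near 100 Hz; a real coder would add the model (predictor) coefficients to each frame.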
77. Frequency and Pitch
The sensation of a frequency is commonly referred to
as the pitch of a sound.
A high-pitched sound corresponds to a high-frequency
sound wave, and a low-pitched sound corresponds to a
low-frequency sound wave.