6-26 Digital Coding of Audio Signals
6.3.3 Digital Coding
Two issues are fundamental in assessing the performance of a digital communication system [5]:
• The reliability of the system in terms of accurately transmitting information from one point to
another
• The rate at which information can be transmitted with an acceptable level of reliability
In an ideal communication system, information would be transmitted at an infinite rate with an
infinite level of reliability. In reality, however, fundamental limitations affect the performance of
any communication system. No physical system is capable of instantaneous response to changes,
and the range of frequencies that a system can reliably handle is limited. These real-world con-
siderations lead to the concept of bandwidth. In addition, random noise affects any signal being
transmitted through any communication medium. Finite bandwidth and additive random noise
are two fundamental limitations that prevent designers from achieving an infinite rate of trans-
mission with infinite reliability. Clearly, a compromise is necessary. What makes the situation
even more challenging is that the reliability and the rate of information transmission usually
work against each other. For a given system, a higher rate of transmission normally means a
lower degree of reliability, and vice versa. To favorably affect this balance, it is necessary to
improve the efficiency and the robustness of the communication system. Source coding and
channel coding are the means for accomplishing this task.
6.3.3a Source Coding
Most information sources generate signals that contain redundancies [5]. For example, consider
a picture that is made up of pixels, each of which represents one of 256 grayness levels. If a fixed
coding scheme is used that assigns 8 binary digits to each pixel, a 100 × 100 picture of random
patterns and a 100 × 100 picture that consists of only white pixels would both be coded into the
same number of binary digits, although the white-pixel version would have significantly less
information than the random-pattern version.
One simple method of source encoding is the Huffman coding technique, which is based on
the idea of assigning a code word to each symbol of the source alphabet such that the length of
each code word is approximately equal to the amount of information conveyed by that symbol.
As a result, symbols with lower probabilities get longer code words. Huffman coding is achieved
through the following process:
• List the source symbols in descending order of probabilities.
• Assign a binary 0 and a binary 1, respectively, to the last two symbols in the list.
• Combine the last two symbols in the list into a new symbol with its probability equal to the
sum of the two symbol probabilities.
• Reorder the list, and continue in this manner until only one symbol is left.
• Trace the binary assignments in reverse order to obtain the code word for each symbol.
A tree diagram for decoding a coded sequence of symbols is shown in Figure 6.3.2. It can easily
be verified that the entropy of the source under consideration is 2.3382 bits/symbol, and the aver-
age code-word length using Huffman coding is 2.37 bits/symbol.
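The steps above can be sketched in code. The sketch below builds Huffman code words by repeatedly combining the two least probable entries; the five-symbol alphabet and its probabilities are illustrative and are not the source used in Figure 6.3.2.

```python
import heapq

def huffman_codes(probs):
    """Build a Huffman code for {symbol: probability} by repeatedly
    combining the two least probable entries, as in the steps above."""
    heap = [(p, i, [s]) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    codes = {s: "" for s in probs}
    counter = len(heap)                      # tie-breaker for equal probabilities
    while len(heap) > 1:
        p1, _, grp1 = heapq.heappop(heap)    # least probable group
        p2, _, grp2 = heapq.heappop(heap)    # next least probable group
        for s in grp1:                       # prepend one bit to every symbol
            codes[s] = "1" + codes[s]        # in the two combined groups
        for s in grp2:
            codes[s] = "0" + codes[s]
        heapq.heappush(heap, (p1 + p2, counter, grp1 + grp2))
        counter += 1
    return codes

# Illustrative five-symbol source (hypothetical probabilities)
probs = {"A": 0.4, "B": 0.2, "C": 0.2, "D": 0.1, "E": 0.1}
codes = huffman_codes(probs)
avg_len = sum(probs[s] * len(codes[s]) for s in probs)   # bits/symbol
```

As the text states, less probable symbols receive longer code words, and the resulting code is prefix-free, so a coded sequence can be decoded by walking a tree like the one in Figure 6.3.2.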
Downloaded from Digital Engineering Library @ McGraw-Hill (www.digitalengineeringlibrary.com)
Copyright © 2004 The McGraw-Hill Companies. All rights reserved.
Any use is subject to the Terms of Use as given at the website.
At this point it is appropriate to define entropy. In a general sense, entropy is a measure of the
disorder or randomness in a closed system. With regard to digital communications, it is defined
as a measure of the number of bits necessary to transmit a message as a function of the probabil-
ity that the message will consist of a specific set of symbols.
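For a discrete source with symbol probabilities p_i, this measure is H = −Σ p_i log2(p_i) bits/symbol, which a short function can compute (the probabilities below are illustrative):

```python
import math

def entropy(probs):
    """Shannon entropy in bits/symbol: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A uniform four-symbol source needs exactly 2 bits/symbol,
# while a skewed source needs fewer on average.
H_uniform = entropy([0.25, 0.25, 0.25, 0.25])
H_skewed = entropy([0.7, 0.1, 0.1, 0.1])
```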
6.3.3b Channel Coding
The previous section identified the need for removing redundancies from the message signal to
increase efficiency in transmission [5]. From an efficiency point of view, the ideal scenario
would be to obtain an average word length that is numerically equal to the entropy of the source.
From a practical perspective, however, this would make it impossible to detect or correct errors
that may occur during transmission. Some redundancy must be added to the signal in a con-
trolled manner to facilitate detection and correction of transmission errors. This process is
referred to as channel coding.
A variety of techniques exist for detection and correction of errors. For the purposes of this
chapter, however, it is sufficient to understand that error-correction coding is important to reli-
able digital transmission and that it adds to the total bit rate of a given information stream. For
closed systems, where retransmission of garbled data is possible, a minimum of error-correction
overhead is practical. The error-checking parity system is a familiar technique. However, for
transmission channels where 2-way communication is not possible, or the channel restrictions do
not permit retransmission of specific packets of data, robust error correction is a requirement.
More information on the basic principles of error correction can be found in [5].
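As a minimal illustration of the parity system mentioned above (not a scheme prescribed by the text), a single appended parity bit detects any odd number of bit errors but cannot correct them or detect an even number of errors:

```python
def add_even_parity(bits):
    """Append a parity bit so the total number of 1s is even."""
    return bits + [sum(bits) % 2]

def parity_ok(word):
    """Check even parity; any single-bit error makes this fail."""
    return sum(word) % 2 == 0

word = add_even_parity([1, 0, 1, 1, 0, 0, 1])   # illustrative data bits
received = word.copy()
received[2] ^= 1                                # flip one bit in transit
```

Here `parity_ok(word)` holds for the transmitted word but fails for the corrupted one, signalling that a retransmission is needed.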
6.3.3c Error-Correction Coding
Digital modulation schemes in their basic form have dependency between signaling elements
over only one signaling division [1]. There are advantages, however, to providing memory over
several signaling elements from the standpoint of error correction. Historically, this has been
accomplished by adding redundant symbols for error correction to the encoded data, and then
using the encoded symbol stream to modulate the carrier. The ratio of information symbols to
total encoded symbols is referred to as the code rate. At the receiver, demodulation is performed
first, followed by decoding.

Figure 6.3.2 The Huffman coding algorithm. (From [5]. Used with permission.)
The drawback to this approach is that redundant symbols are added, requiring a larger trans-
mission bandwidth, assuming the same data throughput. However, the resulting signal is more
immune to channel-induced errors resulting from, among other things, a marginal S/N for the
channel. The end result for the system is a coding gain, defined as the ratio of the signal-to-noise
ratios without and with coding.
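That definition can be sketched numerically; the S/N figures below are hypothetical, not values from the text:

```python
import math

def coding_gain_db(snr_uncoded, snr_coded):
    """Coding gain: the ratio of the S/N required without coding to that
    required with coding for the same error rate, expressed in dB."""
    return 10 * math.log10(snr_uncoded / snr_coded)

# Hypothetical: the uncoded system needs S/N = 13.5 (linear) while the
# coded system reaches the same error rate at S/N = 6.76.
gain = coding_gain_db(13.5, 6.76)   # roughly 3 dB of coding gain
```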
There are two widely used coding methods:
• Block coding, a scheme that encodes the information symbols block-by-block by adding a
fixed number of error-correction symbols to a fixed block length of information symbols.
• Convolutional coding, a scheme that encodes a sliding window of information symbols by
means of a shift register and two or more modulo-2 adders for the bits in the shift register that
are sampled to produce the encoded output.
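The convolutional scheme in the second bullet can be sketched as follows. The rate-1/2, constraint-length-3 encoder with generators (7, 5) octal is a classic textbook example chosen here for illustration, not one taken from this chapter:

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2 convolutional encoder: a 3-bit sliding window (the input
    bit plus a 2-bit shift register) feeds two modulo-2 adders, one per
    generator tap set. Two output bits emerge per input bit."""
    state = [0, 0]                 # shift-register memory, initially zero
    out = []
    for b in bits:
        window = [b] + state       # current input plus the past two bits
        out.append(sum(w * g for w, g in zip(window, g1)) % 2)
        out.append(sum(w * g for w, g in zip(window, g2)) % 2)
        state = [b] + state[:-1]   # shift the register
    return out

encoded = conv_encode([1, 0, 1, 1])   # twice as many output symbols
```

Because two encoded symbols are produced for each information bit, the code rate is 1/2, which is exactly the bandwidth-expansion tradeoff discussed below.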
Although an examination of these coding methods is beyond the scope of this chapter, note
that coding used in conjunction with modulation always expands the required transmission band-
width by the inverse of the code rate, assuming the overall bit rate is held constant. In other
words, the power efficiency goes up, but the bandwidth efficiency goes down with the use of a
well-designed code. Certain techniques have been developed to overcome this limitation, includ-
ing trellis-coded modulation (TCM), which is designed to simultaneously conserve power and
bandwidth [6].
6.3.4 References
1. Ziemer, Rodger E.: “Digital Modulation,” The Electronics Handbook, Jerry C. Whitaker (ed.), CRC Press, Boca Raton, Fla., pp. 1213–1236, 1996.
2. Ziemer, R., and W. Tranter: Principles of Communications: Systems, Modulation, and Noise, 4th ed., Wiley, New York, 1995.
3. Peterson, R., R. Ziemer, and D. Borth: Introduction to Spread Spectrum Communications, Prentice-Hall, Englewood Cliffs, N.J., 1995.
4. Sklar, B.: Digital Communications: Fundamentals and Applications, Prentice-Hall, Englewood Cliffs, N.J., 1988.
5. Alkin, Oktay: “Digital Coding Schemes,” The Electronics Handbook, Jerry C. Whitaker (ed.), CRC Press, Boca Raton, Fla., pp. 1252–1258, 1996.
6. Ungerboeck, G.: “Trellis-Coded Modulation with Redundant Signal Sets,” parts I and II, IEEE Comm. Mag., vol. 25 (Feb.), pp. 5–11 and 12–21, 1987.
Chapter 6.4
DSP Devices and Systems
Ken Pohlmann
6.4.1 Introduction
To efficiently process digital signals, considerable computational power is required. The impres-
sive advancements in the performance of microprocessors intended for personal computer appli-
cations have enabled a host of new devices intended for communications systems. For receivers,
the most important of these is the digital signal processor (DSP), which is a class of processor
intended for a specific application or range of applications. The DSP is, in essence, a micropro-
cessor that sacrifices flexibility (or instruction set) for speed. There are a number of tradeoffs in
DSP design; however, with each new generation of devices, those constraints are minimized
while performance is improved.
6.4.2 Fundamentals of Digital Signal Processing¹
Digital signal processing is used to generate, analyze, or otherwise manipulate signals in the dig-
ital domain [1]. Digital processing of acquired waveforms offers several advantages over pro-
cessing of continuous-time signals. Fundamentally, the use of unambiguous discrete samples
promotes:
• Use of components with lower tolerances.
• Predetermined system accuracy.
• Identically reproducible circuits.
• Theoretically unlimited number of successive operations on a sample.
• Reduced sensitivity to external effects such as noise, temperature, and aging.
1. Portions of this chapter were adapted from: Pohlmann, Ken: Principles of Digital Audio,
McGraw-Hill, New York, N.Y., 2000. Used with permission.
Source: Standard Handbook of Audio and Radio Engineering
The programmable nature of discrete-time signals permits changes in function without changes
in hardware. Some operations implemented with digital processing are difficult or impossible
using analog means. On the other hand, DSP also has certain disadvantages, including:
• The technology always requires power; there is no passive form of DSP circuitry.
• Digital representation of a signal requires a larger bandwidth than the corresponding analog
signal.
• DSP technology can be expensive to develop.
• Circuits capable of performing fast computation are required.
• When used for analog applications, A/D and D/A conversion are required.
DSP presents rich possibilities for professional video and audio applications. Error correc-
tion, multiplexing, sample rate conversion, speech and music synthesis, data reduction and data
compression, filtering, adaptive equalization, dynamic compression and expansion, reverbera-
tion, ambience processing, time alignment, mixing and editing, encryption and watermarking,
and acoustical analysis can all be performed with digital signal processing.
6.4.2a Discrete Systems
A discrete system is any system that accepts one or more discrete input signals x(n) and produces
one or more discrete output signals y(n) in accordance with a set of operating rules [1]. The input
and output discrete time signals are represented by a sequence of numbers. If an analog signal
x(t) is sampled every T seconds, the discrete time signal is x(nT), where n is an integer. Time can
be normalized so that the signal is written as x(n).
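The sampling and normalization described above can be made concrete with a short sketch; the frequencies are illustrative values, not ones from the text:

```python
import math

f0 = 1000.0          # frequency of the analog signal, Hz (illustrative)
fs = 8000.0          # sample rate, Hz (illustrative)
T = 1.0 / fs         # sampling interval in seconds

def x_continuous(t):
    """The underlying analog signal x(t) = cos(2*pi*f0*t)."""
    return math.cos(2 * math.pi * f0 * t)

def x(n):
    """The discrete-time signal x(nT); with time normalized to the
    sample index n, we write it simply as x(n)."""
    return x_continuous(n * T)
```

At fs = 8000 Hz a 1000 Hz cosine advances one-eighth of a cycle per sample, so x(0) = 1, x(2) ≈ 0, and x(4) = −1.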
Linearity and time-invariance are two important criteria for discrete systems. A linear system
exhibits the property of superposition: the response of a linear system to a sum of signals is the
sum of the responses to each individual input. That is, the input x1(n) + x2(n) yields the output
y1(n) + y2(n). A linear system exhibits the property of homogeneity: the amplitude of the output
of a linear system is proportional to that of the input. That is, an input ax(n) yields the output
ay(n). Combining these properties, a linear discrete system with the input signal ax1(n) + bx2(n)
produces an output signal ay1(n) + by2(n), where a and b are constants. The input signals are
treated independently, output amplitude is proportional to that of the input, and no new signal
components are introduced. As described in the following paragraphs, all z-transforms and Fou-
rier transforms are linear.
A discrete time system is time-invariant if the input signal x(n – k) produces an output signal
y(n – k) where k is an integer. In other words, a linear time-invariant discrete (LTD) system
behaves the same way at all times. For example, an input delayed by k samples generates an out-
put delayed by k samples.
A discrete system is causal if at any instant the output signal corresponding to any input sig-
nal is independent of the values of the input signal after that instant. In other words, there are no
output values before there has been an input signal. The output does not depend on future inputs.
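These properties can be checked numerically on a simple system. The 3-tap moving-average filter below is an assumed example for illustration, not a system from the text; it is linear, time-invariant, and causal, since each output uses only current and past inputs:

```python
def fir(x):
    """3-tap moving average: y(n) = (x(n) + x(n-1) + x(n-2)) / 3,
    with zero initial conditions (inputs before n = 0 are zero)."""
    return [(x[n]
             + (x[n - 1] if n >= 1 else 0)
             + (x[n - 2] if n >= 2 else 0)) / 3
            for n in range(len(x))]

x1 = [1.0, 2.0, 0.0, -1.0]      # illustrative input sequences
x2 = [0.5, -1.0, 3.0, 2.0]
a, b = 2.0, -3.0

# Superposition and homogeneity: T{a*x1 + b*x2} equals a*T{x1} + b*T{x2}
lhs = fir([a * u + b * v for u, v in zip(x1, x2)])
rhs = [a * u + b * v for u, v in zip(fir(x1), fir(x2))]

# Time-invariance: delaying the input by one sample delays the output
shifted = fir([0.0] + x1)[1:]
```

Here `lhs` matches `rhs` term by term, and `shifted` matches `fir(x1)`, confirming that the filter is an LTD system in the sense defined above.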
6.4.2b Impulse Response and Convolution
The impulse response h(n) gives a full description of a linear time-invariant discrete system in the
time domain [1]. An LTD system, like any discrete system, converts an input signal into an output