Section 6: Digital Coding of Audio Signals
Digital signal processing (DSP) techniques are being applied to the implementation of various stages of audio capture, processing, storage, and distribution systems for a number of reasons, including:
• Improved cost-performance considerations
• Future product-enhancement capabilities
• Greatly reduced alignment and testing requirements
A wide variety of video circuits and systems can be readily implemented using various degrees of embedded DSP. The most important parameters are signal bandwidth and S/N, which define, respectively, the required sampling rate and the effective number of bits required for the conversion. Additional design considerations include the stability of the sampling clock, quadrature channel matching, aperture uncertainty, and the cutoff frequency of the quantizer networks.
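These two relationships can be expressed directly: the Nyquist criterion sets the minimum sampling rate from the signal bandwidth, and the standard ideal-quantizer relation (SNR ≈ 6.02N + 1.76 dB for a full-scale sine wave) sets the number of bits from the S/N target. The following sketch is illustrative only; the 20 kHz bandwidth, 96 dB S/N target, and guard factor are assumed example values, not figures drawn from the text.

```python
import math

def required_sample_rate(bandwidth_hz: float, guard_factor: float = 1.1) -> float:
    """Nyquist: the sampling rate must exceed twice the signal bandwidth.
    A small guard factor leaves room for a realizable anti-alias filter."""
    return 2.0 * bandwidth_hz * guard_factor

def required_bits(target_snr_db: float) -> int:
    """Invert SNR = 6.02*N + 1.76 dB (ideal quantizer, full-scale sine wave)."""
    return math.ceil((target_snr_db - 1.76) / 6.02)

# Assumed example targets: 20 kHz bandwidth, 96 dB S/N.
print(required_sample_rate(20_000))   # 44000.0 -> roughly the 44.1 kHz class
print(required_bits(96))              # 16 bits
```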
DSP devices differ from microprocessors in a number of ways. For one thing, microprocessors typically are built for a range of general-purpose functions and normally run large blocks of software. Also, microprocessors usually are not called upon to perform real-time computation. Typically, they are at liberty to shuffle workloads and to select an action branch, such as completing a printing job before responding to a new input command. The DSP, on the other hand, is dedicated to a single task or small group of related tasks. In a sophisticated video system, one or more DSPs may be employed as attached processors, assisting a general-purpose host microprocessor that manages the front-panel controls or other key functions of the unit.
One convenient way to classify DSP devices and applications is by their dynamic range. In this context, the dynamic range is the spread of numbers that must be processed in the course of an application. It takes a certain range of values, for example, to describe a particular signal, and that range often becomes even wider as calculations are performed on the input data. The DSP must have the capability to handle such data without overflow.
Processor capacity is a function of the data width (the number of bits the device manipulates) and the type of arithmetic it performs (fixed- or floating-point). Floating-point processing manipulates numbers in a form similar to scientific notation, enabling the device to accommodate an enormous breadth of data. Fixed-point processing, as the name implies, restricts the range of values the device can represent to a predefined span.
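The practical difference shows up most clearly in accumulation. The minimal sketch below assumes a hypothetical 16-bit Q1.15 fixed-point accumulator with saturation; the sample values are arbitrary and the routine is purely illustrative, not a model of any particular DSP.

```python
def fixed_point_sum(samples, frac_bits: int = 15) -> float:
    """Accumulate Q1.15 samples in a 16-bit register, saturating on overflow."""
    acc = 0
    lo, hi = -(2 ** 15), 2 ** 15 - 1
    for x in samples:
        q = int(round(x * 2 ** frac_bits))   # quantize the input to Q1.15
        acc = max(lo, min(hi, acc + q))      # saturate instead of wrapping around
    return acc / 2 ** frac_bits

signal = [0.4] * 5                  # true sum is 2.0, outside the Q1.15 range
print(fixed_point_sum(signal))      # ~0.99997: the fixed-point accumulator clips
print(sum(signal))                  # ~2.0: floating point tracks the wider result
```

Real fixed-point devices mitigate such overflow with guard bits, scaling, and double-width accumulators, but the available dynamic range must still be budgeted in advance.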
Recent advancements in very large scale integration (VLSI) technologies in general, and DSP in particular, have permitted the integration of many video system functional blocks into a single device. Such designs typically offer excellent performance because they eliminate the traditional interfaces required by discrete designs. This high level of integration also decreases the total parts count of the system, thereby increasing overall reliability.
The trend toward DSP operational blocks in video equipment of all types is perhaps the single most important driving force in video hardware today. It has reshaped products as diverse as cameras and displays. Thanks in no small part to research and development efforts in the computer industry, the impact is just now being felt in the television business.
In This Section:
Chapter 6.1: Analog/Digital Signal Conversion 6-5
  Introduction 6-5
  The Nyquist Limit and Aliasing 6-5
  The A/D Conversion Process 6-6
  Successive Approximation 6-9
  Parallel/Flash 6-9
  The D/A Conversion Process 6-11
  Practical Implementation 6-12
  Converter Performance Criteria 6-12
  References 6-14

Chapter 6.2: Digital Filters 6-15
  Introduction 6-15
  FIR Filters 6-15
  Design Techniques 6-18
  Applications 6-18
  Finite Wordlength Effects 6-19
  Infinite Impulse Response Filters 6-20
  Reference 6-22

Chapter 6.3: Digital Modulation 6-23
  Introduction 6-23
  Digital Modulation Techniques 6-23
  QPSK 6-24
  Signal Analysis 6-25
  Digital Coding 6-26
  Source Coding 6-26
  Channel Coding 6-27
  Error-Correction Coding 6-27
  Reference 6-28

Chapter 6.4: DSP Devices and Systems 6-29
  Introduction 6-29
  Fundamentals of Digital Signal Processing 6-29
  Discrete Systems 6-30
  Impulse Response and Convolution 6-30
  Complex Numbers 6-34
  Mathematical Transforms 6-35
  Unit Circle and Region of Convergence 6-39
  Poles and Zeros 6-39
  DSP Elements 6-41
  Sources of Errors 6-42
  DSP Integrated Circuits 6-43
  DSP Applications 6-44
  Digital Delay 6-44
  Example DSP Device 6-46
  Functional Overview 6-47
  References 6-50
On the CD-ROM
• “Digital Audio” by P. Jeffrey Bloom, et al., an archive chapter from the first edition of the Audio Engineering Handbook. This reference material explains the fundamental elements of digital audio coding, storage, and manipulation.
Reference Documents for this Section
Alkin, Oktay: “Digital Coding Schemes,” The Electronics Handbook, Jerry C. Whitaker (ed.), CRC Press, Boca Raton, Fla., pp. 1252–1258, 1996.
Benson, K. B., and D. G. Fink: “Digital Operations in Video Systems,” HDTV: Advanced Television for the 1990s, McGraw-Hill, New York, pp. 4.1–4.8, 1990.
Chambers, J. A., S. Tantaratana, and B. W. Bomar: “Digital Filters,” The Electronics Handbook, Jerry C. Whitaker (ed.), CRC Press, Boca Raton, Fla., pp. 749–772, 1996.
Garrod, Susan A. R.: “D/A and A/D Converters,” The Electronics Handbook, Jerry C. Whitaker (ed.), CRC Press, Boca Raton, Fla., pp. 723–730, 1996.
Garrod, Susan, and R. Borns: Digital Logic: Analysis, Application, and Design, Saunders College Publishing, Philadelphia, 1991.
Lee, E. A., and D. G. Messerschmitt: Digital Communications, 2nd ed., Kluwer, Norwell, Mass., 1994.
Nyquist, H.: “Certain Factors Affecting Telegraph Speed,” Bell System Tech. J., vol. 3, pp. 324–346, March 1924.
Parks, T. W., and J. H. McClellan: “A Program for the Design of Linear Phase Finite Impulse Response Filters,” IEEE Trans. Audio Electroacoustics, AU-20(3), pp. 195–199, 1972.
Peterson, R., R. Ziemer, and D. Borth: Introduction to Spread Spectrum Communications, Prentice-Hall, Englewood Cliffs, N.J., 1995.
Pohlmann, Ken: Principles of Digital Audio, McGraw-Hill, New York, N.Y., 2000.
Sklar, B.: Digital Communications: Fundamentals and Applications, Prentice-Hall, Englewood Cliffs, N.J., 1988.
TMS320C55x DSP Functional Overview, Texas Instruments, Dallas, Tex., literature no. SPRU312, June 2000.
Ungerboeck, G.: “Trellis-Coded Modulation with Redundant Signal Sets,” parts I and II, IEEE Comm. Mag., vol. 25, pp. 5–11 and 12–21, February 1987.
Ziemer, R., and W. Tranter: Principles of Communications: Systems, Modulation, and Noise, 4th ed., Wiley, New York, 1995.
Ziemer, Rodger E.: “Digital Modulation,” The Electronics Handbook, Jerry C. Whitaker (ed.), CRC Press, Boca Raton, Fla., pp. 1213–1236, 1996.
Chapter 6.1: Analog/Digital Signal Conversion
Susan A. R. Garrod, K. Blair Benson, Donald G. Fink
Jerry C. Whitaker, Editor-in-Chief
6.1.1 Introduction
Analog-to-digital conversion (A/D) is the process of converting a continuous range of analog signals into specific digital codes. Such conversion is necessary to interface analog pickup elements and systems with digital devices and systems that process, store, interpret, transport, and manipulate the analog values. Analog-to-digital conversion is not an exact process; the comparison between the analog sample and a reference voltage is uncertain by the amount of the difference between one reference voltage and the next [1]. The uncertainty amounts to plus or minus one-half that difference. When words of 8 bits are used, this uncertainty occurs in essentially random fashion, so its effect is equivalent to the introduction of random noise (quantization noise). Fortunately, such noise is not prominent in the analog signal derived from the digital version. For example, in 8-bit digitization of the NTSC 4.2 MHz baseband at 13.5 megasamples per second (MS/s), the quantization noise is about 60 dB below the peak-to-peak signal level, far lower than the noise typically present in the analog signal from the camera.
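The figure of about 60 dB can be checked with the standard uniform-quantization model: the error is treated as uniformly distributed over plus or minus one-half step, giving an rms value of q/√12, and the ratio is taken against the peak-to-peak signal range as in the paragraph above. The short sketch below is a textbook calculation under those assumptions, not a measurement of any particular converter.

```python
import math

def quantization_snr_pp_db(bits: int) -> float:
    """Peak-to-peak signal range versus rms quantization noise, in dB,
    assuming the error is uniform over +/- half of one quantizing step."""
    levels = 2 ** bits                                 # full-scale range = levels * q
    return 20 * math.log10(levels * math.sqrt(12))     # rms noise = q / sqrt(12)

print(round(quantization_snr_pp_db(8), 1))   # 59.0 dB -- "about 60 dB" for 8 bits
```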
6.1.2 The Nyquist Limit and Aliasing
A critical rule must be observed in sampling an analog signal if it is to be reproduced without spurious effects known as aliasing. The rule, first described by Nyquist in 1924 [2], states that the time between samples must be short compared with the rates of change of the analog waveform. In video terms, the sampling rate in megasamples per second must be at least twice the maximum frequency in megahertz of the analog signal. Thus, the 4.2 MHz maximum bandwidth in the luminance spectrum of the NTSC baseband requires that the NTSC signal be sampled at 8.4 MS/s or greater. Conversely, the 13.5 MS/s rate specified in the ITU-R studio digital standard