1.4.4 Sound in Rooms: The General Case
Taking the broadest view of complex sound sources, we can consider the combination of real sources and their reflected images as multiple sources. In this way, it is possible to deal with situations other than the special case of stereophonic reproduction.
1.4.4a Precedence Effect and the Law of the First Wavefront
For well over 100 years it has been known that the first sound arrival dominates sound localization. The phenomenon is known as the law of the first wavefront or the precedence effect. With time delays between the first and second arrivals of less than about 1 ms we are in the realm of simple summing localization. At longer delays the location of the auditory event is dictated by the location of the source of the first sound, but the presence of the later arrival is indicated by a distinctive timbre and a change in the spatial extent of the auditory event; it may be smeared toward the source of the second sound. At still longer time delays the second event is perceived as a discrete echo.
These interactions are physically complex, with many parametric variations possible. The perceived effects are correspondingly complex, and—as a consequence—the literature on the subject is extensive and not entirely unambiguous.
One of the best-known studies of the interaction of two sound events is that by Haas [37], who was concerned with the perception and intelligibility of speech in rooms, especially where there is sound reinforcement. He formed a number of conclusions, the most prominent of which is that for delays in the range of 1 to 30 ms, the delayed sound can be up to 10 dB higher in level than the direct sound before it is perceived as an echo. Within this range, there is an increase in loudness of the speech accompanied by “a pleasant modification of the quality of the sound (and) an apparent enlargement of the sound source.” Over a wide range of delays the second sound was judged not to disturb the perception of speech, but this was found to depend on the syllabic rate. This has come to be known as the Haas effect, although the term has been extensively misused because of improper interpretation.
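To make the delay and level ranges concrete, the short sketch below constructs a direct sound plus a single delayed copy whose delay and relative level can be set anywhere in the region Haas studied, for example 10 ms and +6 dB. Python with NumPy is assumed here; the text itself names no software, and the function name and test signal are purely illustrative.

import numpy as np

def add_single_reflection(direct, fs, delay_ms, level_db):
    # Sum a direct sound with one delayed copy at a given relative level.
    # Haas studied delays of roughly 1 to 30 ms and relative levels up to
    # about +10 dB before the delayed sound is heard as a separate echo.
    delay_samples = int(round(delay_ms * 1e-3 * fs))
    gain = 10.0 ** (level_db / 20.0)              # dB -> linear amplitude
    out = np.zeros(len(direct) + delay_samples)
    out[:len(direct)] += direct                   # first-arriving (direct) sound
    out[delay_samples:] += gain * direct          # delayed second arrival
    return out

# Illustrative use: a 0.5 s decaying noise burst at 48 kHz with a 10 ms, +6 dB reflection
fs = 48000
t = np.arange(int(0.5 * fs)) / fs
burst = np.random.randn(len(t)) * np.exp(-t / 0.2)
combined = add_single_reflection(burst, fs, delay_ms=10.0, level_db=6.0)

Auditioning such constructions while the delay and relative level are varied is one way to trace the transition from a single fused auditory event to a perceptibly separate echo.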
Examining the phenomenon more closely reveals a number of effects related to sound quality and to the localization dominance of the first-arrived sound. In general, the precedence effect is dependent on the presence of transient information in the sounds, but even this cannot prevent some interference from reflections in rooms. Several researchers have noted that high frequencies in delayed sounds were more disturbing than low frequencies, not only because of their relative audibility but because they were inclined to displace the localization. In fact, the situation in rooms is so complicated that it is to be expected that interaural difference cues will frequently be contradictory, depending on the frequency and temporal envelope of the sound. There are suggestions that the hearing process deals with the problem by means of a running plausibility analysis that pieces together evidence from the eyes and ears [38]. That this is true for normal listening where the sound sources are visible underlines the need in stereo reproduction to provide unambiguous directional cues for those auditory events that are intended to occupy specific locations.
1.4.4b Binaural Discrimination
The cocktail-party effect, in which it is demonstrably easier to carry on a conversation in a crowded noisy room when listening with two ears than with one, is an example of binaural discrimination. The spatial concentration that is possible with two ears has several other ramifications in audio. Reverberation is much less obtrusive in two-eared listening, as are certain effects of isolated reflections that arrive from directions away from that of the direct sound. For example, the timbral modifications that normally accompany the addition of a signal to a time-delayed duplicate (comb filtering) are substantially reduced when the delayed component arrives at the listener from a different direction [39]. This helps to explain the finding that listeners frequently enjoy the spaciousness from lateral reflections without complaining about the coloration. In this connection it has been observed that the disturbing effects of delayed sounds are reduced in the presence of room reverberation [37] and that reverberation tends to reduce the ability of listeners to discriminate differences in the timbre of sustained sounds like organ stops and vowels [20].
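The comb filtering mentioned above follows directly from summing a signal with a delayed duplicate: the combined response is 1 + g·exp(-j2πfτ), which alternates between peaks and notches spaced 1/τ apart in frequency. A minimal sketch of that response follows (Python with NumPy is assumed; the function name and example values are illustrative, not from the text).

import numpy as np

def comb_response_db(freqs_hz, delay_s, gain=1.0):
    # Magnitude response, in dB, of a signal summed with a copy of itself
    # delayed by delay_s seconds and scaled by gain:
    #   H(f) = 1 + gain * exp(-j * 2*pi*f*delay_s)
    # Peaks fall at multiples of 1/delay_s; notches lie midway between them.
    h = 1.0 + gain * np.exp(-2j * np.pi * freqs_hz * delay_s)
    return 20.0 * np.log10(np.abs(h) + 1e-12)    # small offset avoids log(0) in deep notches

# Example: a 1 ms delay at equal level gives notches at 500 Hz, 1500 Hz, ...
freqs = np.linspace(0.0, 5000.0, 1001)
response = comb_response_db(freqs, delay_s=0.001, gain=1.0)

The point of the passage above is that when the delayed component arrives from a different direction, the audible consequence of this response is much milder than a single-ear (or single-microphone) measurement would suggest.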
1.4.5 References
1. Shaw, E. A. G.: “The Acoustics of the External Ear,” in W. D. Keidel and W. D. Neff (eds.), Handbook of Sensory Physiology, vol. V/I, Auditory System, Springer-Verlag, Berlin, 1974.
2. Fletcher, H., and W. A. Munson: “Loudness, Its Definition, Measurement and Calculation,” J. Acoust. Soc. Am., vol. 5, pp. 82–108, 1933.
3. Robinson, D. W., and R. S. Dadson: “A Redetermination of the Equal-Loudness Relations for Pure Tones,” Br. J. Appl. Physics, vol. 7, pp. 166–181, 1956.
4. International Organization for Standardization: Normal Equal-Loudness Contours for Pure Tones and Normal Threshold for Hearing under Free Field Listening Conditions, Recommendation R226, December 1961.
5. Toole, F. E.: “Loudness—Applications and Implications to Audio,” dB, Part 1, vol. 7, no. 5, pp. 27–30; Part 2, vol. 7, no. 6, pp. 25–28, 1973.
6. Scharf, B.: “Loudness,” in E. C. Carterette and M. P. Friedman (eds.), Handbook of Perception, vol. 4, Hearing, chapter 6, Academic, New York, N.Y., 1978.
7. Jones, B. L., and E. L. Torick: “A New Loudness Indicator for Use in Broadcasting,” J. SMPTE, Society of Motion Picture and Television Engineers, White Plains, N.Y., vol. 90, pp. 772–777, 1981.
8. International Electrotechnical Commission: Sound System Equipment, part 10, Programme Level Meters, Publication 268-10A, 1978.
9. Zwislocki, J. J.: “Masking—Experimental and Theoretical Aspects of Simultaneous, Forward, Backward and Central Masking,” in E. C. Carterette and M. P. Friedman (eds.), Handbook of Perception, vol. 4, Hearing, chapter 8, Academic, New York, N.Y., 1978.
10. Ward, W. D.: “Subjective Musical Pitch,” J. Acoust. Soc. Am., vol. 26, pp. 369–380, 1954.
11. Backus, John: The Acoustical Foundations of Music, Norton, New York, N.Y., 1969.
12. Pierce, John R.: The Science of Musical Sound, Scientific American Library, New York, N.Y., 1983.
13. Gabrielsson, A., and H. Sjögren: “Perceived Sound Quality of Sound-Reproducing Systems,” J. Acoust. Soc. Am., vol. 65, pp. 1019–1033, 1979.
14. Toole, F. E.: “Subjective Measurements of Loudspeaker Sound Quality and Listener Performance,” J. Audio Eng. Soc., vol. 33, pp. 2–32, 1985.
15. Gabrielsson, A., and B. Lindström: “Perceived Sound Quality of High-Fidelity Loudspeakers,” J. Audio Eng. Soc., vol. 33, pp. 33–53, 1985.
16. Shaw, E. A. G.: “External Ear Response and Sound Localization,” in R. W. Gatehouse (ed.), Localization of Sound: Theory and Applications, Amphora Press, Groton, Conn., 1982.
17. Blauert, J.: Spatial Hearing, translation by J. S. Allen, M.I.T., Cambridge, Mass., 1983.
18. Bloom, P. J.: “Creating Source Elevation Illusions by Spectral Manipulations,” J. Audio Eng. Soc., vol. 25, pp. 560–565, 1977.
19. Toole, F. E.: “Loudspeaker Measurements and Their Relationship to Listener Preferences,” J. Audio Eng. Soc., vol. 34, part 1, pp. 227–235, part 2, pp. 323–348, 1986.
20. Plomp, R.: Aspects of Tone Sensation—A Psychophysical Study, Academic, New York, N.Y., 1976.
21. Buchlein, R.: “The Audibility of Frequency Response Irregularities” (1962), reprinted in English translation in J. Audio Eng. Soc., vol. 29, pp. 126–131, 1981.
22. Stevens, W. R.: “Loudspeakers—Cabinet Effects,” Hi-Fi News Record Rev., vol. 21, pp. 87–93, 1976.
23. Fryer, P.: “Loudspeaker Distortions—Can We Hear Them?,” Hi-Fi News Record Rev., vol. 22, pp. 51–56, 1977.
24. Batteau, D. W.: “The Role of the Pinna in Human Localization,” Proc. R. Soc. London, B168, pp. 158–180, 1967.
25. Rasch, R. A., and R. Plomp: “The Listener and the Acoustic Environment,” in D. Deutsch (ed.), The Psychology of Music, Academic, New York, N.Y., 1982.
26. Shaw, E. A. G., and R. Teranishi: “Sound Pressure Generated in an External-Ear Replica and Real Human Ears by a Nearby Sound Source,” J. Acoust. Soc. Am., vol. 44, pp. 240–249, 1968.
27. Shaw, E. A. G.: “Transformation of Sound Pressure Level from the Free Field to the Eardrum in the Horizontal Plane,” J. Acoust. Soc. Am., vol. 56, pp. 1848–1861, 1974.
28. Shaw, E. A. G., and M. M. Vaillancourt: “Transformation of Sound-Pressure Level from the Free Field to the Eardrum Presented in Numerical Form,” J. Acoust. Soc. Am., vol. 78, pp. 1120–1123, 1985.
29. Kuhn, G. F.: “Model for the Interaural Time Differences in the Azimuthal Plane,” J. Acoust. Soc. Am., vol. 62, pp. 157–167, 1977.
30. Shaw, E. A. G.: “Aural Reception,” in A. Lara Saenz and R. W. B. Stevens (eds.), Noise Pollution, Wiley, New York, N.Y., 1986.
31. Toole, F. E., and B. McA. Sayers: “Lateralization Judgments and the Nature of Binaural Acoustic Images,” J. Acoust. Soc. Am., vol. 37, pp. 319–324, 1965.
32. Blauert, J., and W. Lindemann: “Auditory Spaciousness: Some Further Psychoacoustic Studies,” J. Acoust. Soc. Am., vol. 80, pp. 533–542, 1986.
33. Kurozumi, K., and K. Ohgushi: “The Relationship between the Cross-Correlation Coefficient of Two-Channel Acoustic Signals and Sound Image Quality,” J. Acoust. Soc. Am., vol. 74, pp. 1726–1733, 1983.
34. Bose, A. G.: “On the Design, Measurement and Evaluation of Loudspeakers,” presented at the 35th convention of the Audio Engineering Society, preprint 622, 1962.
35. Kuhl, W., and R. Plantz: “The Significance of the Diffuse Sound Radiated from Loudspeakers for the Subjective Hearing Event,” Acustica, vol. 40, pp. 182–190, 1978.
36. Voelker, E. J.: “Control Rooms for Music Monitoring,” J. Audio Eng. Soc., vol. 33, pp. 452–462, 1985.
37. Haas, H.: “The Influence of a Single Echo on the Audibility of Speech,” Acustica, vol. 1, pp. 49–58, 1951; English translation reprinted in J. Audio Eng. Soc., vol. 20, pp. 146–159, 1972.
38. Rakerd, B., and W. M. Hartmann: “Localization of Sound in Rooms, II—The Effects of a Single Reflecting Surface,” J. Acoust. Soc. Am., vol. 78, pp. 524–533, 1985.
39. Zurek, P. M.: “Measurements of Binaural Echo Suppression,” J. Acoust. Soc. Am., vol. 66, pp. 1750–1757, 1979.
40. Hall, Donald: Musical Acoustics—An Introduction, Wadsworth, Belmont, Calif., 1980.
41. Durlach, N. I., and H. S. Colburn: “Binaural Phenomena,” in E. C. Carterette and M. P. Friedman (eds.), Handbook of Perception, vol. 4, Academic, New York, N.Y., 1978.
Section 2: The Audio Spectrum
Intensity, duration, and repetition in time are perceptually important characteristics of audio signals. Consider, for example, speech, music, and natural and electronically generated sounds as heard by a listener. Most such signals, however, have rather complicated waveforms in time, and these are difficult to analyze visually. The spectrum of the signal offers an alternative representation which displays the strengths of the signal's oscillating parts arranged in order of increasing rate of oscillation. The spectrum also contains information about the relative displacements, or time shifts, of these oscillating parts. In simple terms, the spectrum is a decomposition of the signal into several different oscillating components that later can be reassembled to re-create the original signal. All the information in the signal is contained in its spectrum, but the spectrum is a different way of representing the signal.
Frequency—the number of oscillations per second, or hertz—is a significant concept associated with the spectrum. Time is no longer explicitly used but is implicitly contained in the notion of frequency. A time interval, called a period and equal to the time taken for one full oscillation, is associated with every frequency, however. The period is simply the reciprocal of the frequency (the number of oscillations per second). A signal's overall repetitive nature as well as any hidden periodicities are revealed in its spectrum. The relative importance of the individual frequency components is also clear, even though this may not be obvious from inspection of the signal itself. In the spectrum, frequency is the independent variable, or domain, rather than time.
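As a worked illustration of that reciprocal relationship (the numerical values below are chosen only as examples):

T = \frac{1}{f}, \qquad f = 1000~\mathrm{Hz} \;\Rightarrow\; T = \frac{1}{1000~\mathrm{Hz}} = 1~\mathrm{ms}, \qquad f = 20~\mathrm{Hz} \;\Rightarrow\; T = 50~\mathrm{ms}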
These two different ways of viewing the signal, in time or in frequency, are called the time domain and the frequency domain, respectively. The two domains are interrelated by a mathematical operation known as a transformation, which either resolves the frequency components from a time-domain signal or reconstructs the signal from its frequency components. Insight into audio signal properties is gained by careful study of the signal in each domain. Furthermore, if the signal is passed through a system, the effects of that system on the signal also will be observed in both domains. The spectrum of the output signal can reveal important signal modifications such as, for example, which frequency components are reinforced or reduced in strength, which are delayed, or what is added, missing, or redistributed. Comparison of spectra can be used to identify and measure signal corruption or signal distortion. Thus, the spectrum plays a significant role in both signal analysis and signal processing.
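As a minimal sketch of that transformation pair (Python with NumPy's FFT is assumed here; the chapter itself names no particular software or algorithm), a time-domain signal is resolved into its frequency components and then reassembled from them, recovering the original.

import numpy as np

fs = 8000                                  # sample rate in hertz (illustrative)
t = np.arange(fs) / fs                     # one second of time samples
# Time-domain signal: two oscillating components, one with a small time (phase) shift
x = 1.0 * np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t + 0.3)

# Transform to the frequency domain
X = np.fft.rfft(x)                         # complex spectrum
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
strengths = np.abs(X) / (len(x) / 2)       # strengths of the oscillating parts
shifts = np.angle(X)                       # their relative displacements (phases)

# Transform back: the reassembled signal matches the original
x_back = np.fft.irfft(X, n=len(x))
assert np.allclose(x, x_back)

Passing x through a system and comparing the input and output spectra in the same way shows directly which components have been reinforced, reduced, or delayed.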
With more advanced mathematical techniques it is possible to combine the two domains and
form a joint-domain representation of the signal. This representation forms the basis for what is