Pitch Coherence as a Measure of Apparent Distance in Performance Spaces and Muddiness in Sound Recordings



D. Griesinger, Chief Scientist, Harman Specialty Group, Bedford, MA, USA. www.theworld.com/~griesngr

It has long been known that muddiness in a recording is related to the direct to reverberant ratio of the sound. We propose here that muddiness and the perceived distance of a sound source are closely related, and can be measured with the same methods. Much research has focused on the Interaural Cross Correlation (IACC) as a measure for this property. However, IACC depends on differences between the signals at the two ears – and thus is affected only by laterally moving sound energy, or energy that arrives from the side of the listener. In nearly all real-world acoustic spaces sound that comes from medial directions – in front, above, below and from the rear – has at least twice the total energy of sound from the side. It is easily demonstrated that both muddiness and sonic distance can be perceived in a monaural recording, or by a listener with one ear. Thus IACC cannot be the only mechanism for this perception – and it is clearly inadequate when the reflected energy is medial. Whatever mechanism we use for this perception is robust – we exercise it effortlessly with sounds such as speech – but it is not clear how we do it. An important clue comes from the fact that it is quite difficult to perceive sonic distance from sound sources with no perceived pitch.
This paper proposes two mechanisms by which “sonic distance” (or muddiness in sound recordings) can be perceived from medial sound energy. Both mechanisms depend on the physical effects of reverberation on signals that have a perceived pitch. One of these mechanisms involves the ability to perceive the amplitude fluctuations that occur due to interference when the source signal and the reverberation are not in steady state. The other mechanism – and the more interesting for this paper – involves the effect of reverberation on the phase coherence of overtones in the speech formant range. The perception on which the phase coherence method is based is not new. It is sometimes referred to as “false bass”, and may be related to the property Zwicker calls “roughness”. We are suggesting that this perception has an additional (and probably important) use – the perception of distance. Phase coherence allows us both to perceive the direct/reverberant ratio of many common sounds, and to build a computer model that can objectively measure the direct/reverberant ratio from a recording.
Humans can perceive the apparent distance of a sound source with surprising accuracy, even when the stimulus is presented identically to both ears. This ability is particularly robust for signals that have the properties of speech – namely a syllabic stream of sounds with a perceivable pitch. Because we detect distance so easily, distance perception is likely to be important to our overall perception of sound quality. We will discuss the implications of sonic distance for acoustic quality in this paper. If humans prefer certain sounds at certain perceived distances – and it seems likely that we do – then the amount of direct sound at the listening position becomes quite important. Medial reflections – which in general are stronger than lateral reflections – play an important part in distance perception. We need to know how they are related to sound quality.
The author intends to have a more complete version of this paper, and a series of sonic examples, on his web-page soon. The reader is encouraged to listen to these examples. The effects are usually quite robust – so the method of playback should not be critical. A link to the author’s page is provided here: www.theworld.com/~griesngr.

Figure 1: The result of passing a 1 second tone burst at 440Hz through a reverberation device set for RT = 2.0 seconds. The direct to reverberant ratio is 1:1. The top and bottom traces show two different paths through the simulated hall. The reverberation is stationary with time. Note that the amplitude (and the phase) of the output fluctuates rapidly until a steady state is reached, approximately 0.3 seconds into the tone. Even after this time some variation in level can be seen. When the tone ceases the level fluctuates again. If we amplify the decay we will see that these fluctuations continue until the sound ceases to be audible. www.theworld.com/~griesngr/IOA/440Hz_silence_reverb.mp3

These fluctuations are audible. They cause the sound to be perceived as reverberant, even when only one frequency is present. The rate at which the initial fluctuations decrease (as steady state is approached) gives a cue to the reverberation time, and thus the size, of the space. The depth of the fluctuations during the build-up of reverberation gives a cue to the direct to reverberant ratio. If the reverberation is primarily lateral, and the result is experienced binaurally – as in figure 1 – the differences in the fluctuations between the two ears enhance the audibility of the reverberation. The fluctuations in pressure become easily audible, and the reverberation separates from the direct sound. The direct to reverberant ratio is easy to perceive. Binaural fluctuations are discussed at length in reference 1. When the signal is heard with a single ear, or when the identical signal is presented to both ears, the effect is still audible – but the direct to reverberant ratio is more difficult to perceive.
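The build-up fluctuations in figure 1 are easy to reproduce numerically. The sketch below is not the author's reverberation device; it substitutes an exponentially decaying white-noise impulse response (a common synthetic reverberation model) for the simulated hall, mixes it approximately 1:1 with the 440Hz direct tone of the caption, and measures the short-time envelope:

```python
import numpy as np

# Sketch of the figure 1 experiment with an assumed synthetic impulse
# response (not the author's device): 1 s, 440 Hz tone, RT = 2.0 s, ~1:1 mix.
rng = np.random.default_rng(0)
fs = 8000
rt = 2.0
t = np.arange(fs) / fs                          # 1 second of samples
tone = np.sin(2 * np.pi * 440 * t)

# exponentially decaying white noise: amplitude falls 60 dB over RT seconds
n_ir = int(fs * rt)
decay = np.exp(-6.91 * np.arange(n_ir) / (fs * rt))
ir = rng.standard_normal(n_ir) * decay
ir /= np.linalg.norm(ir)                        # unit energy -> ~1:1 direct/reverb

out = tone + np.convolve(tone, ir)[:len(tone)]

# short-time RMS envelope in 10 ms frames
frame = fs // 100
n = len(out) // frame
env = np.sqrt(np.mean(out[:n * frame].reshape(n, frame) ** 2, axis=1))

# fluctuations are deep while the reverberation builds up (first ~0.3 s),
# then settle as steady state is approached
cv_buildup = np.std(env[2:30]) / np.mean(env[2:30])
cv_steady = np.std(env[60:95]) / np.mean(env[60:95])
```

The coefficient of variation of the envelope should be markedly larger during the build-up than near steady state, mirroring the figure.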
When the tone contains many overtones it becomes easier to detect the reverberation, but more difficult to perceive the fluctuations. Reverberation affects each harmonic differently, such that the phase and amplitude of each harmonic fluctuate at different rates. As a result the tone seems to be constant in amplitude – but there is a strong perception that the tone is “reverberant.” The direct to reverberant ratio is perceivable, even when the stimulus is presented monaurally.

1.1 The effects of reverberation on tones that are not constant in pitch
When the stimulus is not constant in pitch the reverberation is never in steady state, and the fluctuations we observe in figure 1 continue through the length of the sound. This behavior is typical of speech sounds, and of almost all musical tones. When a tone varies in pitch – such as a musical tone that contains vibrato – the period of the observed fluctuations depends on the rate of change of the pitch, and on the frequency of the harmonic. For example, a violin tone at 440Hz with a typical vibrato might have a fluctuation period of about 0.2 seconds at the fundamental, 0.1 seconds at the second harmonic, 0.066 seconds at the third harmonic, and so on.
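The arithmetic of the vibrato example can be stated directly: the nth harmonic moves n times as far in frequency for the same pitch change, so it beats n times as fast against the lingering reverberant energy:

```python
# The paper's violin example: a vibrato giving a fluctuation period of
# 0.2 s at the 440 Hz fundamental.  Harmonic n sweeps n times as far in
# frequency, so its beats against the quasi-stationary reverberation are
# n times as fast and the fluctuation period shrinks as 1/n.
fundamental_period = 0.2  # seconds
periods = {n: fundamental_period / n for n in (1, 2, 3, 4)}
# periods[1] = 0.2, periods[2] = 0.1, periods[3] = 0.0667..., periods[4] = 0.05
```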

Because the harmonics fluctuate at different rates, the tone is perceived as completely steady – but in the presence of reverberation. The brain appears to detect and analyze the fluctuations, yet the conscious perception is that the tone is steady.
The perception of direct/reverberant ratio through fluctuations is possible and useful when the incoming tones have fundamentals above 250Hz, and when the tones are relatively steady in amplitude. Held notes from a soprano soloist are a good example. But many common signals are not constant in amplitude.
1.2 The effects of reverberation on tones that vary rapidly with time
It is particularly easy to perceive the direct to reverberant ratio – and thus the apparent distance – when the sound source is human speech. The author has spent considerable time trying to construct a computer model that can extract this information from a speech signal using only the level fluctuations caused by the reverberation. This is relatively easy to do when the reverberation is lateral and a binaural pick-up is available; this case will be discussed in the next section. I have come to believe that a measure could be constructed by looking at the differences between the level fluctuations in different harmonics of the signal, and this may be one of the ways we perceive sonic distance. This model awaits future work. However, there is a far simpler method for sound sources with fundamental frequencies below 250Hz.

After much searching I have become convinced that this mechanism is based on our ability to detect the fundamental pitch of a sound from information contained in the upper harmonics – particularly those in the major speech formants. This mechanism will be discussed in detail in section 3. It is chiefly useful when the sound source has a fundamental frequency below 250Hz, thus yielding a plethora of overtones in the formant range. Many musical sounds have this property.

The Interaural Cross Correlation – or IACC – is well known to be influenced by the direct to reverberant ratio. In reverberant environments the IACC tends to be low, since reflected energy arrives from many directions. IACC is not an accurate measure of the direct to reverberant ratio when the majority of the reflected energy comes from the medial plane; that is, from above, behind, or in front of the listener. All these directions produce equal pressure at both ears, and thus tend to raise the IACC, not lower it. However, IACC is still quite useful as a measure in many rooms. Beranek, Griesinger, and others have proposed that the quantity 1/(1-IACC) can be used as a measure of the direct to reverberant ratio for frequencies above 300Hz, at least when the reverberation is equal in all directions.
With these limitations 1/(1-IACC) may be a primary cue to the perceived distance of a sound source for two-eared listeners. The exact relationship between 1/(1-IACC) and distance is complicated, as it depends on the properties of the source. A source that is continuous – such as a long held note on an oboe or violin – will maximally excite the reverberant space. During the note the direct to reverberant ratio is constant – but the IACC may fluctuate depending on the purity of the source pitch. When the source is syllabic – such as speech – the sound is not continuous, but consists of a series of short sounds (phones or notes) separated by pitch changes or periods of silence. In this case the IACC is a strong function of time. During the beginning of each sound the IACC can be quite close to one, as there has been insufficient time for the reflected energy to reach the listener. Only later does the IACC reflect the direct to reverberant ratio – and this too is a function of time, as the reverberant level requires time to build to its maximum value.
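As an illustration, the 1/(1-IACC) proxy can be computed from a binaural pair in a few lines. This is a deliberately minimal broadband sketch, not the author's analysis – the 1/3 octave filtering and overlapping 10ms windows used later in figure 3 are omitted, and the "lateral reverberation" is plain uncorrelated noise:

```python
import numpy as np

def iacc(left, right, fs, max_lag_ms=1.0):
    """Peak of the normalized interaural cross-correlation within +/-1 ms,
    and the lag (in samples) at which the peak occurs."""
    max_lag = int(fs * max_lag_ms / 1000)
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    best, best_lag = 0.0, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.sum(left[lag:] * right[:len(right) - lag]) / norm
        else:
            c = np.sum(left[:lag] * right[-lag:]) / norm
        if c > best:
            best, best_lag = c, lag
    return best, best_lag

rng = np.random.default_rng(1)
fs = 48000
n = fs // 10                                 # 100 ms analysis window
direct = rng.standard_normal(n)              # coherent (frontal) component
rev_l = rng.standard_normal(n)               # uncorrelated lateral reverberation,
rev_r = rng.standard_normal(n)               # equal in energy to the direct sound

dry_iacc, _ = iacc(direct, direct, fs)       # identical ears -> IACC = 1
wet_iacc, _ = iacc(direct + rev_l, direct + rev_r, fs)   # ~0.5 at 1:1 D/R
proxy_db = 10 * np.log10(1.0 / (1.0 - wet_iacc))         # the 1/(1-IACC) proxy
```

With equal direct and reverberant energies the peak correlation sits near 0.5, and the proxy near +3dB; medial reverberation would instead leave the IACC near 1, which is exactly the failure mode discussed above.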

Figure 2: A binaural speech signal recorded in a very live opera house at a distance of about 15 meters. Note that the levels (and the phases) of the signals at the two ears fluctuate with high amplitude, and differently in the two ears. In spite of this, the direct sound is distinctly audible, and the azimuth of the sound source can be easily perceived. Our ability to do this depends on the fact that there is very little reverberation during the onset of each speech sound.

We can analyze the signal in figure 2 by first separating it into 1/3 octave bands, and then calculating the IACC every 10ms.

Figure 3: 1/(1-IACC) in dB plotted by third octave band and time for the speech signal in figure 2. The IACC is calculated from multiple overlapping 10ms windows. 1/(1-IACC) is a proxy for direct to reverberant ratio. Notice that the direct to reverberant ratio at the onset of each speech sound is quite high; +10dB or more. This ratio is sufficient to determine the azimuth of the sound. The direct/reverb ratio is below +5dB for frequencies below 640Hz.

We can plot the time delay of the peak of the IACC (which is a measure of azimuth) as a histogram for the peaks in figure 3.

Figure 4: A histogram of the delay of the peak for the IACC values shown in figure 3. Negative values of delay result when the sound is to the right. The azimuth of the source – which was moving slowly from right to left – is easily detected – and can be easily perceived by listening to the recording.

The figures presented here demonstrate that binaural cross-correlation can be used to detect azimuth, and to find the direct to reverberant ratio for lateral sound energy during the initial time delay gap. Unfortunately only the lateral component of the reflected sound affects the IACC. When the medial energy is high, IACC will be high also, and the direct sound will appear to be stronger than it actually is. Such sounds are perceived as distant, in spite of the high IACC. It is important to remember that medial reflected energy is almost always stronger than lateral energy by at least a factor of two – and IACC does not measure medial energy.

Figure 5: Left side: the word “five” filtered with a 2kHz 1/3 octave filter. Note the clear peaks in the waveform at the fundamental frequency of 125Hz. Right side: the same signal convolved with a 20ms burst of white noise, simulating a diffuse reflection or the reverberation in a small room. Note that the amplitude modulation at the fundamental frequency is almost completely removed. The two links below allow the reader to hear these two examples, which are identical in loudness and spectrum. Yet the second one sounds much more distant.

www.theworld.com/~griesngr/IOA/ten.mp3 www.theworld.com/~griesngr/IOA/tenx.mp3
The left side of figure 5 shows how the fundamental pitch of a spoken word is preserved by the phase coherence of the upper harmonics. The band-pass filtered waveform appears to be amplitude modulated at the fundamental frequency. In the presence of reflections the phase coherence of the harmonics is lost, and the signal becomes noise-like. The basilar membrane responds to negative pressure only, and thus can be considered a detector for amplitude modulation. When we hear the left hand waveform the fundamental frequency is clearly perceived, just as if it were present in the stimulus. The fundamental is not as strongly perceived in the right hand waveform, although traces of it remain. It is still possible to perceive the pitch of the fundamental, particularly if it is possible to hear several critical bands. However there is a clear difference in the ease with which this detection is accomplished.
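The loss of modulation can be demonstrated numerically. The sketch below does not reproduce the author's filter bank: a group of phase-coherent harmonics near 2kHz stands in for the band-passed word, with an assumed fundamental of 125Hz, and the strength of the fundamental in the half-wave rectified signal – a crude stand-in for the one-sided basilar membrane response described above – is compared before and after convolution with a 20ms noise burst:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 16000
f0 = 125.0
t = np.arange(fs) / fs                        # 1 second

# phase-coherent harmonics 14..18 of 125 Hz (1750-2250 Hz, near the 2 kHz band)
clean = sum(np.cos(2 * np.pi * n * f0 * t) for n in range(14, 19))

# a 20 ms white-noise burst, simulating a diffuse reflection
burst = rng.standard_normal(int(0.020 * fs))
smeared = np.convolve(clean, burst)[:len(clean)]
smeared *= np.sqrt(np.sum(clean ** 2) / np.sum(smeared ** 2))  # match energy

def mod_depth(x, fs, f0):
    """Relative strength of the f0 line in the half-wave rectified signal."""
    r = np.maximum(x, 0.0)                    # crude 'basilar membrane' rectifier
    spec = np.abs(np.fft.rfft(r))
    k = int(round(f0 * len(x) / fs))          # FFT bin of the fundamental
    return spec[k] / spec[0]

d_clean = mod_depth(clean, fs, f0)            # strong 125 Hz periodicity
d_smeared = mod_depth(smeared, fs, f0)        # modulation largely destroyed
```

The rectified clean signal carries a strong component at the fundamental; after the 20ms smearing the harmonics are no longer phase coherent and the component largely disappears, just as in the right side of figure 5.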

Figure 6: The plus/minus pitch detector.

The pitch of the fundamental can be detected with the simple circuit shown in figure 6. Signals are first filtered into third octave bands, then rectified. The low-pass rectified signals are then fed to a delay line, and a ratio is found between the non-delayed signals and the delayed signals. The ratio is high when the delay corresponds to the fundamental pitch period.
This circuit is not intended to be a neurological model; it is an analog representation of a possible neural architecture. The filtering applied before the ratio is not shown – and it is critical to the way the model works. The Matlab code used for the model is available from the author's web page for those who would like to experiment, and the author intends to build a more neurologically based model in the near future. The model in its current form is still quite useful: the plus/minus ratio tracks the direct/reverberant ratio reasonably well, and some interesting conclusions can be drawn from real signals.
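Since the filtering before the ratio is not shown, the following is only one plausible reading of the plus/minus detector, sketched in Python rather than the author's Matlab: the "plus" path sums the delayed and undelayed envelopes, the "minus" path subtracts them, and their energy ratio peaks when the delay equals the pitch period:

```python
import numpy as np

def plus_minus_ratio(x, fs, delays_s, lp_ms=1.0):
    """One reading of the plus/minus detector: half-wave rectify, low-pass,
    then compare the delayed envelope with the undelayed envelope.
    Returns, in dB, energy(sum) / energy(difference) for each delay."""
    env = np.maximum(x, 0.0)                              # rectifier
    k = max(1, int(lp_ms * fs / 1000))
    env = np.convolve(env, np.ones(k) / k, mode='same')   # crude low-pass
    env = env - np.mean(env)
    out = []
    for d in delays_s:
        n = int(round(d * fs))
        a, b = env[n:], env[:len(env) - n]
        plus = np.sum((a + b) ** 2)
        minus = np.sum((a - b) ** 2) + 1e-12
        out.append(10 * np.log10(plus / minus))
    return np.array(out)

fs = 16000
t = np.arange(fs) / fs
f0 = 125.0                                    # assumed fundamental: 8 ms period
# a 2 kHz 'formant band' carrier, amplitude-modulated at the fundamental --
# the phase-coherent case of figure 5
x = (1 + np.cos(2 * np.pi * f0 * t)) * np.sin(2 * np.pi * 2000 * t)

delays = np.arange(0.002, 0.013, 0.0005)      # candidate delays, 2 - 12.5 ms
ratios = plus_minus_ratio(x, fs, delays)
best = delays[np.argmax(ratios)]              # should peak near 1/f0 = 8 ms
```

For a phase-coherent band the ratio peaks strongly at the pitch period; when the modulation is destroyed by reflections the peak collapses, which is the behavior figures 7 and 8 display.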

Figure 7: Left side: the syllables “one”, “two” in the 2500Hz 1/3 octave band, as seen by the plus/minus pitch detector. Note that the fundamental is detected with an S/N ratio of over 15dB. Right side: the same signal after convolution with 20ms of white noise. Note that although the pitch of the second syllable has survived to some extent, the pitch of the first syllable is not easily detectable. (Note that the vertical scales of the two graphs are different.)

Figure 8: Left side: The syllables “one” “two” with direct sound followed by a diffuse reflection 30ms later. This data is from a single 1/3 octave band at 2500Hz. Note the clear pitch detection in the first 30ms, followed by poor pitch perception. The brief segment of clear sound can be easily heard. Right side: A short segment from an opera, recorded in the old Bolshoi at the back of the first ring. 2500Hz 1/3 octave band. Note the clear detection of fundamental pitch, particularly for the last two syllables. These consisted of a gliding slide from about 200Hz to 250Hz.

The sound in the old Bolshoi opera house is dominated by direct sound, even in the back of the rings. The sound is clear, dry, and the drama is immensely powerful. The Russian singers have the vocal power to fill the hall with direct sound. Dramatically the hall seems intimate, with the singers close to the audience. There is no problem with balance, as the low RT favors the singers over the orchestra. Figure 8 clearly shows the dominance of direct sound, and the ease with which the pitch of the singers can be heard from the harmonics in the “singer’s formant”.

Surprisingly perhaps, the orchestra in the old Bolshoi is perceived as reverberant and enveloping. Envelopment is created by spreading the musicians along an unusually wide orchestra pit, and reverberation is simulated by the method of playing.

The author made a similar recording in the new Bolshoi next door, and was unable to detect the pitch of any syllable using this detector. The direct/reverberant ratio was always below +4dB. The new Bolshoi hall is widely regarded as loud and muddy, and this observation is supported by the plus/minus pitch detector. The hall is difficult dramatically. The singers seem far away, and the balance is not good. www.theworld.com/~griesngr/IOA/bolshn.mp3
3.1 Measuring Opera Houses through Pitch Coherence
Pitch coherence can be used to measure the properties of halls and opera houses from natural signals – or from published echograms. For example, here are echograms made by Beranek and Hidaka in the Teatro alla Scala in Milan:

[Echograms: 2 kHz band; 500 Hz band; derived impulse response, combined and extended.]

www.theworld.com/~griesngr/IOA/ten_scal.mp3 www.theworld.com/~griesngr/IOA/ten_nnt.mp3
Figure 9: Left side: The pitch coherence of speech in the Teatro alla Scala in Milan, as determined by convolving a speech signal with the echograms published by Beranek and Hidaka. The method used to extract the impulse response is described in reference 2. This graph is from the 2000Hz 1/3 octave band, but other bands above 1000Hz are similar. Note the clear extraction of the fundamental pitch from the singer's formant. La Scala has a relatively low reverberation time above 1000Hz, and has a good reputation for opera. Right side: The same signal convolved with echograms from the NNT opera house in Tokyo. Note that the pitch is difficult to extract. The NNT hall was designed to maximize the early reflections from the singers into the audience, in the hope that this would improve the singer/orchestra balance. The RT with the audience present is considerably higher above 1kHz than in La Scala. The disadvantage of this design is that the direct sound is buried in a wash of early (and late) reflected energy, and some of the possible balance improvement offered by the early reflections is offset by the increase in orchestral power caused by the higher RT. The medial reflection level is high, and the singers sound far away from the audience.
Some years ago the author believed that the most important acoustic aspect of a traditional theater or opera house was the intelligibility of the actors. Several experiences challenged this belief. We have installed electroacoustic systems in many opera houses. These systems allow real-time adjustment of early and late reflected energy, both for the singers and the orchestra, as well as the frequency balance of the reverberation. We adjust these systems with the aid of the principal conductor – both in rehearsal and in performance. In the Berlin Staatsoper, the Amsterdam Muziektheater, and the Copenhagen Royal Theater the primary criterion for setting the reverberation level above 1000Hz turned out to be not the intelligibility of the singers, but their apparent distance from the audience. Typically we increase the reverberant level in ½ dB steps, while listening to an aria with the music director. In general, the sound slowly becomes more reverberant until a critical point is reached. Increasing the level beyond this point – by just ½ dB – dramatically increases the perceived distance to the singer. Without enhancement the sound in these halls is dominated by the direct sound, so in the natural state the singers are perceived as close to the listener.
For the directors and conductors in these houses the sonic distance of the performers was more important than the increased reverberation that was possible with the system. The compromise we worked out was to increase the reverberation level and the reverberation time below 1000Hz, leaving the singer’s formant region essentially untouched. The end result is very effective dramatically. The orchestra is perceived as blended and enveloping, and the singers are perceived as powerful and close to the listeners.
This type of experiment is only possible when an enhancement system is installed in a dry hall. Such an installation allows directors and conductors a unique chance to hear the effect of reverberation on singers, actors, and orchestras. The result – that the system should be essentially inaudible above 1000Hz – was surprising to all. Barenboim in Berlin wanted the sound of Bayreuth; Haenchen in Amsterdam wanted the sound of the Semperoper Dresden. In both Bayreuth and Dresden the RT above 1000Hz is 1.5 seconds or more. Yet when the opportunity exists to hear the effect such a long RT has on the distance between the audience and the singers, the long reverberation time is clearly undesirable. (The statement that in Bayreuth the reverberation time above 1000Hz is more than 1.5 seconds has been challenged by several friends who are frequent visitors to this house – the perceived reverberation is much drier.)

There is another reason that a long reverberation time is problematic in an opera house. The pit orchestra couples to the house both through direct sound and through reverberation. It is essentially an omnidirectional source, although the direct sound is often blocked for seats in the stalls. The sound of the orchestra travels up and into the upper reaches of the hall, where there is ample opportunity to scatter and come back down as reverberation. The singers are not omnidirectional – they project their upper harmonics directly into the most absorbent part of the hall – the audience. As the reverberation time increases the sound power from the orchestra increases, and the sound power from the singers does not. With a few simple assumptions one can calculate that the singer/orchestra balance changes by about 1.5dB as the reverberation time increases from 1 second to 1.5 seconds. This change in balance is audible, and highly undesirable.
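One such set of simple assumptions: in classical diffuse-field theory the steady-state reverberant energy for a fixed source power scales with RT, while the direct sound is unchanged. If the orchestra couples to the hall mainly through reverberation and the singer's formant energy is mostly direct, the balance shift is then:

```python
import math

# Diffuse-field assumption: reverberant energy ~ source power * RT / volume.
# Orchestra (reverberant coupling) gains with RT; singers (direct) do not.
rt1, rt2 = 1.0, 1.5
shift_db = 10 * math.log10(rt2 / rt1)   # ~1.76 dB, close to the paper's ~1.5 dB
```

The simple model slightly overstates the paper's figure of 1.5dB, presumably because some orchestral energy also reaches the listener directly.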
Then why do conductors and critics nearly always prefer more reverberant halls? I believe the primary reason is that sonic distance is one of the most difficult perceptions to remember. The primary mechanism for distance perception is visual. When the ears and the eyes disagree, the eyes win. We adapt to the sonic situation we encounter – our ears make the best of it. Stopped reverberation – the reverberant tail heard when the music pauses – and to some extent the warmth and blend that reverberation gives to an orchestra, is particularly memorable. It is only through prompt A/B comparisons that the disadvantages of reverberation can be made clear. In my experience the lure of a large, reverberant opera house is irresistible to opera managers and conductors (and is likely to remain so).
The second type of experience was with drama – Chekhov in this case. I was asked by the managers of the Royal Theater in Copenhagen to use the acoustic enhancement already installed in the “new stage” theater to improve intelligibility. The new stage was a small 1000-seat drama theater adjacent to the Royal Theater. We had previously installed an acoustic enhancement system, using about 60 high quality loudspeakers spread around the side walls.
The system was designed to increase the late reverberant energy at low frequencies for opera performance, much the same as the larger system in the Royal Theater. It was unlikely that such a system could improve intelligibility, as the microphone positions were beyond the critical distance from the actors. We replaced the microphones with the line-array microphones from Microtech Gefell, which have a 10 degree beam-width and an acoustic gain of over 10dB. We also added a sophisticated gate to the electronics, which attenuated the reverberation at the end of each speech sound. With all this in place the system worked – much better than I had expected. The loudness and the intelligibility of the actors improved throughout the house.
We arranged a demonstration during a live performance, with a full audience. Every 10 minutes we turned the system on or off. Five of the major drama directors who work in Copenhagen participated by sitting in the audience and watching the time. At the end the result was unanimous. The system worked as we hoped – loudness and intelligibility increased. But the apparent distance between the audience and the actors also increased, and this was undesirable. The sonic distance increased because the loudspeakers augmenting the direct sound were all around the audience, creating the equivalent of a diffuse reflected field. The time delays of the speakers were carefully adjusted to minimize the time delay offsets – but the speakers destroyed the phase coherence of the upper harmonics of the speech signals. It was judged better that the audience not hear every word than to have the actors sonically more distant. Actor training was needed, not electronics. This result is not new – good drama theaters are dry, and good actors project.

Is opera drama? Wagner affirmed this in a rather lengthy book on the subject. I doubt he would think that surtitles substitute for high sonic distance and poor singer/orchestra balance. In Bayreuth he covered the orchestra. I suspect he would have preferred a short sonic distance for the singers if this could be achieved while maintaining the orchestral richness.

The author has binaural recordings made during these tests, and will try to make them available on his web page. The difference in apparent distance is clearly audible.

When a sound source is not pitched – filtered noise or broad-band percussion are examples – it is not possible to determine sonic distance by pitch coherence. For such signals reverberation does not produce the type of level fluctuations seen with steady tones. Thus we would expect the apparent distance of such a source to be quite difficult to perceive in a monaural recording. This is the case. When signals are presented binaurally the direct/reverberant ratio is perceivable, as we would expect. But when only one channel is available, and the spectrum of the reflections matches the spectrum of the source, it is quite difficult to perceive distance.
The same difficulty occurs with pitched sounds that have incoherent harmonics, such as the sound of a chorus, or a violin section. A simple experiment involves taking 10 or so violin tones from a synthesizer (or Matlab), and giving each a different frequency over a range of a few percent. If the random frequency distribution is Gaussian, the resulting tone has both the harmonic phase structure and the level fluctuations normally created by reverberation. The direct/reverberant ratio becomes impossible to detect from a single channel of information. If the source has a wide physical extent – such as a violin section or a chorus – and the reverberation comes from the same direction as the source, distance becomes impossible to judge even with a binaural signal.
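The detuned-tone experiment is easy to reproduce, here as a numpy sketch rather than the synthesizer or Matlab version the text suggests (simple sine tones stand in for violin tones):

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 8000
t = np.arange(2 * fs) / fs                  # 2 seconds

f_center = 440.0
detune = rng.normal(0.0, 0.01, size=10)     # ~1% Gaussian frequency spread
phases = rng.uniform(0, 2 * np.pi, size=10)
section = sum(np.sin(2 * np.pi * f_center * (1 + d) * t + p)
              for d, p in zip(detune, phases))   # ten detuned 'section' tones
solo = np.sin(2 * np.pi * f_center * t)          # a single steady tone

def env_cv(x, fs, frame_ms=20):
    """Coefficient of variation of the short-time RMS envelope."""
    k = int(frame_ms * fs / 1000)
    n = len(x) // k
    e = np.sqrt(np.mean(x[:n * k].reshape(n, k) ** 2, axis=1))
    return np.std(e) / np.mean(e)

cv_section = env_cv(section, fs)            # fluctuates like reverberation
cv_solo = env_cv(solo, fs)                  # essentially steady
```

The summed section tone shows deep, reverberation-like envelope fluctuations even with no reverberation present, while the solo tone is steady; this is why the direct/reverberant ratio of such a source cannot be read from a single channel.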

Orchestral music consists predominantly of sounds from instrumental sections, and these have incoherent harmonics. Perhaps this is a reason that direct sound is often considered unnecessary in concert halls. However even in orchestral music the direct sound is audible. Remember that the direct/reverberant ratio depends on the length of the note. It takes at least 500ms for the reverberation to build up to its final level when a note is held. Short notes have a higher percentage of direct sound, and can give the listener the impression the soloist is close. Direct sound is also almost always audible during the early time delay gap, particularly on orchestral soloists such as woodwinds. The fact that we can detect the direct sound in the onsets of these signals accounts for our ability to localize them – and this ability is important to preference.
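The note-length effect can be put in numbers under a simple diffuse-field model, in which the reverberant energy behind a held note builds up exponentially, falling 60dB per RT:

```python
import math

rt = 2.0  # assumed reverberation time, seconds

def buildup_fraction(t, rt):
    """Fraction of the steady-state reverberant energy reached t seconds
    into a held note (diffuse-field model: 60 dB of decay per RT)."""
    return 1.0 - math.exp(-13.82 * t / rt)

short_note = buildup_fraction(0.1, rt)   # a 100 ms note: roughly half
held_note = buildup_fraction(0.5, rt)    # after 500 ms: nearly all
```

For RT = 2.0 seconds a 100ms note excites only about half the steady-state reverberant energy (roughly a 3dB advantage in direct/reverberant ratio over a held note), consistent with the statement that the reverberation needs some 500ms to approach its final level.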

The author believes that it is desirable to hear a soloist – such as the violin soloist in a concerto – with the same immediacy that we would expect in a reasonably dry opera house. In my experience this is often the case in good halls. A violin soloist is not an omnidirectional source, and neither is a solo singer. In Boston Symphony Hall the direct sound from the top of a solo violin dominates the reverberation above 1000Hz, at least for the first two-thirds of the seats on the floor. The author is familiar with Severance Hall in Cleveland, which is certainly dry enough for excellent sound transmission from a soloist. Avery Fisher Hall in New York, and the Kennedy Center in Washington DC, are a bit more reverberant. We will avoid mentioning halls where this is not the case. There are many. Examples of sounds from various halls will hopefully appear on the author's web page in the near future.
Academic discussions of recording technique nearly always involve some argument about a “main microphone” which supposedly captures the sound of the orchestra. However, in nearly all halls except the largest and most expensive, the “main microphone” is placed well beyond the reverberation radius from the majority of the instruments. For all these instruments the reverberant energy exceeds the direct sound energy – and the sound will be muddy. In practice engineers augment the direct sound by adding a large number of so-called “accent” microphones. These are adjusted by ear until the direct sound (supplied by the accents) exceeds the reverberation picked up by the main microphone. When this process is complete, the main microphone can usually be turned off with very little change to the sound image.
The methods of measuring the direct to reverberant ratio presented above can be applied to a recording after it has been made, and this might be quite useful to a student (or instructor) who is unsure if a recording is muddy or not.
The perceived distance of a sound source in a reflective or partially reflective space can be quantified in part by our ability to extract fundamental pitch frequencies from overtones in the frequency range of the vocal formants. The ease with which we can do this depends on the direct to reverberant ratio, and on the early time delay gap. When the direct to reverberant ratio above 1000Hz drops below about 2dB a sound is perceived as distant. For frequencies above 1000Hz a difference in reverberant level of ½ dB can be easily audible at the threshold.
In the author’s experience dramatic intensity suffers when the sound from an actor or a singer is perceived as distant. The ideal compromise for opera seems to be to keep the reverberant level above 1000Hz low enough that the singers retain their dramatic presence, while increasing the reverberant level and the reverberation time below this frequency. This effect can be achieved in a dry hall with an electronic enhancement. The effect may have been achieved passively in a few traditional halls around the world.

  1. D. Griesinger, “The Psychoacoustics of Apparent Source Width, Spaciousness and Envelopment in Performance Spaces”, Acta Acustica Vol. 83 (1997) 721-731. Also available at: www.theworld.com/~griesngr/SPAC7A.DOC

  2. D. Griesinger, “Subjective aspects of room acoustics”, slides from the RADIS conference in Japan. www.theworld.com/~griesngr/ICA2.ppt

  3. L.L. Beranek, T. Hidaka, S. Masuda, “Acoustical design of the opera house of the New National Theater, Tokyo, Japan”, JASA 107(1), Jan 2000, 355-367.
