Sound 1 Study Guide



1) Murray Schafer, “The Tuning of the World”

-bird song

-debatable if the bird “sings” or “converses”

-no sound has attached itself so affectionately to the human imagination as bird vocalization

-bird vocalizations are of all types

-birds can at times dominate a soundscape because of their numbers

-the vocalizations of birds have been studied in musical terms

-but (with few exceptions) bird song cannot be accurately captured in conventional musical notation

-a more precise method of notation is the sound spectrograph, which ornithologists are now using

-birds, like poems, should not mean but be


Goes on to give examples of different areas of the world and their specific soundscapes, pg. 120-121

-ornithologists have done a lot of work on another subject of interest to soundscape researchers by classifying the types and functions of bird-song


Distinguished as follows: pleasure calls, distress calls, territorial/defence calls, alarm calls, flight calls, flock calls, nest calls, feeding calls.

-equivalents of many of these can be found in human sound-making; examples are given on pg. 120


The definition of space by acoustic means is much more ancient than the establishment of property lines and fences

-private property becomes threatened in the modern world


Sounds of insects, sounds of water creatures, sounds of animals

-the carnivores produce the greatest range of individual sounds among animals; many of these sounds (the roar of the lion, the laughing of hyenas, etc.) have such striking qualities that they impress themselves instantaneously on the human imagination.


-The gorilla is the only primate to have discovered a nonvocal sound mechanism: it drums on its chest with its fists, producing a loud, hollow sound, both while making vocal sounds and on its own. The gorilla has discovered the property of resonance, independent of the natural mechanism of the voice box. It seems forever on the verge of discovering the musical instrument without being able to make the transition from personal to artificial sound. So far as we know, only man has done this.
Onomatopoeia mirrors the soundscape.

-in onomatopoeic vocabulary, man unites himself with the soundscape about him, echoing back its elements.

2) Soundwalking

By: Hildegard Westerkamp




  • A soundwalk is any excursion whose main purpose is listening to the environment: exposing our ears to every sound around us, no matter where or when we are. During the soundwalk we give priority to our ears

  • We tend to block out and “ignore” sounds; they are there, but we don’t actually hear them anymore

  • It can be done alone, with partners, or in groups, but its focus is to rediscover and reactivate our sense of hearing.

  • When doing a soundwalk, describe everything you hear (nature sounds, mechanical sounds, rhythms, regular beats, high and low pitches, intermittent or discrete sounds, the sources of these sounds, the relationships between sounds; what else do you hear?)

  • When attentive listening becomes a daily practice, demanding better sound quality in our environment becomes a natural activity

  • Sound can be used for orientation –example:

Ship captains used to determine their position in relation to the shoreline by echo whistling: "They'd give a short whistle and estimate the distance from the shoreline by the returning echo. If the echo came back from both sides at the same time they'd know that they were in the middle of the channel." (Gordon Odlum, reminiscence, Vancouver, 1973)
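
As a rough worked example (mine, not from the reading): sound travels at roughly 1,130 feet per second in air, and an echo covers the distance out and back, so distance ≈ velocity × echo time / 2. A whistle echo returning after 2 seconds would therefore put the shoreline about 1,130 × 2 / 2 ≈ 1,130 feet away.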

  • Sounds can make dialogues-example:

If you are a hunter you may catch your prey easier by imitating the animal's call. The Inuit still do it in this fashion. "A hunter's imitation of a seal is sometimes good enough to fool the hunted seal. Some men can parody anything: bear, iceberg, yes, even wind." (ibid. p. 27) Go out and try to imitate all sorts of sounds.

  • The wind produces specific and special sounds according to the objects it hits - example:

Wind whistling through electric wires. Wind rustling through grass. Wind trapped between buildings. Wind howling, mourning, rustling, wailing, whining, screaming. . . . And as we hear these voices they may be mocking us, they may sound frightening, or they may energize us, each time depending on the situation we hear them in.

3) Stanley Alten – “Part One: Principles”

             Ears

1.      The senses work together as a “democracy” – not as alternatives

2.      Sound is omnidirectional: it’s everywhere, it’s also layered

3.      Sound is attention-demanding: it requires active listening

4.      Sound provides cognitive (knowledge, logic, memory etc.) and affective (feeling, mood and emotion) information

5.      The ear is made up of three parts: the outer ear, the middle ear and the inner ear

6.      For a good diagram of how the ear works, check page 5 of the coursepack (it can’t be made any clearer than that)

Hearing Loss

1.      We live in a society which is full of noise (lawnmowers, radios, RVs, parties etc.)

2.      Hearing loss can be caused by prolonged exposure to loud sound, causing decreased sensitivity

3.      Temporary threshold shift (TTS) is the temporary feeling of having cotton-stuffed ears after listening to loud sounds

4.      Tinnitus, a ringing or buzzing in the ears, can occur after exposure to loud sounds

5.      To prevent hearing loss, you can use foam or silicone ear plugs; also, remember to visit your Ear, Nose and Throat Specialist regularly

6.      Check out the diagrams of loudness on pages 8 and 9 in the coursepack

 

The Educated Ear



1.      It’s important to pay attention to sounds around you

2.      Having educated ears means being able to listen carefully to traits like style and technical quality, as well as evaluating a sound’s content and characteristics

 

Physics and Psychophysics of sound



1.      Sound wave: a vibrational disturbance involving the motion of molecules that transmits energy from one place to another

2.      Sound waves are caused when a body vibrates and moves the molecules nearest to it; this initial motion sets off a chain reaction which creates pressure waves through the air: these waves are perceived as sound when they go through the ear and reach the brain.

3.      Because of elasticity, the molecules which were displaced by the vibration tend to bounce back to their original location

4.      Sound waves’ traits are frequency, amplitude, velocity, wavelength and phase

5.      Sound isn’t just physical, it also affects us psychologically

6.      Frequency: the number of times a sound wave vibrates per second, measured in hertz (Hz); perceived as pitch

7.      Human ear range: 10 octaves, between 20 Hz and 20,000 Hz

8.      Decibel (dB): a relative, dimensionless unit used to compare two quantities of acoustic energy. Humans can hear from 0 dB-SPL (threshold of hearing) to about 120 dB-SPL (threshold of pain).

9.      Impedance: the property of a circuit, or an element, which restricts the flow of alternating current (AC). Impedance is measured in ohms, a unit of resistance to current flow

10.  Humans hear mid-range frequencies best

11.  Masking: a perceptual response; the covering of a weaker sound by a stronger sound when each is at a different frequency but both occur at the same time

12.  Velocity: the speed of a sound wave; about 1,130 feet per second at sea level and 70 degrees F. Velocity increases by roughly 1 foot per second for every 1-degree F rise in temperature.

13.  Dynamic Range: The range of difference between the loudest and quietest sound a vibrating object makes (in dB).

14.  Each frequency has a wavelength, determined by the distance a sound wave must travel to complete one cycle of compression and rarefaction.

15.  The length of one cycle (the wavelength) = velocity of sound / frequency of sound (see the worked sketch after this list)

16.  Acoustical phase: the time relationship between two or more sound waves at a given point in their cycles. If they begin at the same time, they are “in phase” and reinforce each other; if they don’t begin at the same time, their degree intervals won’t coincide and they weaken each other.

17.  Timbre: tone quality or “colour” of a sound

18.  A sound’s envelope has four stages: attack, initial decay, sustain and release (ADSR); a simple envelope sketch also follows this list
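
The following is a rough numerical sketch in Python (my own illustration, not from Alten or the coursepack) of how points 7, 8, 12, 15 and 16 fit together: wavelength = velocity / frequency, dB-SPL = 20 × log10(p / p_ref) with the standard reference pressure p_ref = 20 micropascals, and the reinforcement or cancellation of in-phase and out-of-phase waves. The function names and example values are my own assumptions.

import math

VELOCITY_FT_PER_S = 1130.0                  # speed of sound at sea level, ~70 degrees F (point 12)

def wavelength_ft(frequency_hz):
    """Length of one cycle in feet: velocity / frequency (point 15)."""
    return VELOCITY_FT_PER_S / frequency_hz

def spl_db(pressure_pa, reference_pa=20e-6):
    """Sound pressure level in dB-SPL, relative to the threshold of hearing (point 8)."""
    return 20 * math.log10(pressure_pa / reference_pa)

print(round(wavelength_ft(20), 1))          # ~56.5 ft for the lowest audible frequency
print(round(wavelength_ft(20000), 3))       # about 0.06 ft for the highest audible frequency
print(round(math.log2(20000 / 20), 1))      # ~10.0: the ten octaves of point 7
print(round(spl_db(20e-6)))                 # 0 dB-SPL: threshold of hearing
print(round(spl_db(20.0)))                  # 120 dB-SPL: roughly the threshold of pain

# Phase (point 16): two equal sine waves that start together reinforce each other;
# shifted by 180 degrees, they cancel.
t = 0.25                                    # an arbitrary instant, in periods of a 1 Hz wave
print(math.sin(2 * math.pi * t) + math.sin(2 * math.pi * t))            # 2.0 (doubled)
print(math.sin(2 * math.pi * t) + math.sin(2 * math.pi * t + math.pi))  # ~0.0 (cancelled)

For comparison, a mid-range 1,000 Hz tone comes out to a wavelength of a little over one foot.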
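
Point 18’s envelope can likewise be sketched as a small function (again my own illustration, not Alten’s): a piecewise-linear ADSR shape whose output, between 0 and 1, could scale a sound’s amplitude over time. The stage durations and sustain level are made-up defaults.

def adsr(t, attack=0.05, decay=0.10, sustain_level=0.7, sustain_time=0.50, release=0.20):
    """Envelope amplitude (0..1) at time t, in seconds, for a simple linear ADSR shape."""
    if t < attack:                       # attack: rise from 0 to 1
        return t / attack
    t -= attack
    if t < decay:                        # initial decay: fall from 1 to the sustain level
        return 1 - (1 - sustain_level) * (t / decay)
    t -= decay
    if t < sustain_time:                 # sustain: hold steady
        return sustain_level
    t -= sustain_time
    if t < release:                      # release: fall from the sustain level to 0
        return sustain_level * (1 - t / release)
    return 0.0

# Sample the envelope every 50 ms to see the four stages.
print([round(adsr(i * 0.05), 2) for i in range(20)])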


4) “The Projections of Sound on Image” + “The Three Listening Modes” – Chion
-Sound can link separate shots together.

- Sound contributes an “added value” to images, which can be described as the expressive and informative enrichment that sound brings to an image. This information has a direct correlation with what is seen and contained in the image.


Value Added by Text

-Sound in film is voco- and verbocentric. In a sound environment, the sound of voices captures and focuses your attention before any other sound.


Value added by Music

-On one hand music can directly express the feeling of the scene ( sadness, happiness, movement etc). We call this empathetic music.

-On the other hand, music can underline a juxtaposition, seeming indifferent to the scene. We call this anempathetic music.

- There is also music which falls outside of both of these; it has an abstract meaning or the simple function of presence, with no particular emotional resonance.



- The anempathetic effect can also be produced by sounds (e.g. the shower that keeps running after the violent murder scene in Psycho).
Influences of Sound on the Perception of Movement and Perception of Speed

  • Sound, contrary to sight, presupposes movement from the outset.

Difference in Speed of Perception

  • The ear analyzes, processes and synthesizes faster than the eye. Eg: a fast visual movement will not form a precise picture, whereas in the same time frame the sound trajectory will succeed in outlining a clear and definite sound.

Sound for “spotting” Visual Movements and for Sleight of Hand

  • Fast action or special effects in movies are “spotted” by rapid auditory punctuation (Shouts, bangs, etc) that mark certain moments and leave a strong audiovisual memory.

  • Sound superimposed onto the image is capable of leading our attention to a particular visual trajectory.

  • Sometimes it succeeds in making us see in the image a rapid movement that isn’t even there (e.g. in Star Wars, the sound of the automated doors opening was used with the image of a closed door and then an open door, creating the illusion of the motion of the door opening).

The Ear’s Temporal Threshold

  • We don’t recognize sounds until shortly after we have perceived them.

Influence of Sound on the Perception of Time in the Image

  • Three aspects of temporalization: first, temporal animation of the image (sound renders the perception of time in the image as exact, detailed, immediate and concrete, or as vague, fluctuating and broad); second, temporal linearization of the shots (shot B would not necessarily follow from shot A, but synchronous sound imposes a sense of succession); third, sound vectorizes or dramatizes shots, orienting them toward a future, a goal, or the creation of a feeling.

Conditions Necessary for Sound To Temporalize Images

  • First case: the image has no temporal animation or vectorization in itself (e.g. rippling water); sound can bring the image into a temporality that the sound introduces entirely on its own.

  • Second case: the image itself has temporal animation (e.g. movement of characters or objects).

  • Temporalization also depends on the type of sound present: depending on its density, texture and tone quality, a sound can temporally animate an image to a greater or lesser degree.

  • A smooth and continuous sound is less “animating” than an uneven or fluttering one.

  • A sound with a regular pulse is more predictable and tends to create less temporal animation than a sound that is irregular and thus unpredictable.

  • A rapid piece of music will not necessarily accelerate the perception of the image.

  • A sound rich in high frequencies will command perception more acutely.

Sound Cinema is Chronography

  • Sound temporalizes the image not only by effect of added value but also by normalizing and stabilizing film projection speed. The sound cinema can therefore be called “chronographic”: written in time as well as in movement.

  • The addition of realistic diegetic sound imposes on the sequence a sense of real time, one that is linear and sequential.

  • Sound is vectorized: it reconstitutes the texture of the present, giving time a clear orientation from past to future.

Reciprocity of Added Value: The Example of Sound of Horror

  • Added value works reciprocally. Sound shows us the image differently than the image shows alone, and the image likewise makes us hear sound differently than if the sound were ringing out in the dark.

  • In a torture scene, for instance, we may see nothing but a pair of legs, yet the sounds of screaming create the scene.

  • The same sound can be joyful in one context and intolerable in another.

The Three Listening Modes

  • There are at least 3 modes of listening, each of which addresses a different object: causal listening, semantic listening and reduced listening.

  • Causal listening is most common. When the cause is visible sound can provide supplementary information about it.

  • We rarely recognize a unique source exclusively on the basis of a sound we hear out of context. The human individual is probably the only cause that can produce a sound, the speaking voice, that characterizes that individual alone.

  • At the same time, a source we are closely familiar with can go undefined: we can listen to a radio announcer without knowing their name or any of their physical attributes.

  • Sometimes we do not recognize an individual or unique source but rather a general category such as human, mechanical or animal cause.

  • Sound often has not just one source but several. E.g. when writing with a felt pen, the first two sources are the pen and the paper; the hand gesture and the writer himself are also sources; and if the sound is recorded, the loudspeaker and the audio tape are sources as well.

Semantic listening - that which refers to a code or a language to interpret a message (often spoken language).

- causal listening to a voice is to listening to it semantically as the perception of the handwriting of a written text is to reading it.



Reduced listening - focuses on the traits of the sound itself, separate from its cause and its meaning.

-In reduced listening the descriptive inventory of a sound cannot be compiled in a single hearing.

- an example of reduced listening is looking at the pitch of a sound; it is independent of the sound’s cause or meaning.

- reduced listening opens our ears and sharpens our power of hearing.

- These three listening modes overlap and combine in the complex and varied context of the film soundtrack.
5) John Cage, “Silence”
Everything we hear is noise (silence, rain, etc.); all that surrounds us is noise, and with it we can make rhythms that surpass our imagination

Most of what seems new is at first an imitation of the old (the first cars all looked like the carriage). New instruments play old music, “shielding us from new sound experience”

The composer: organizer of sound

Percussion music: transition from keyboard-influenced music to all-sound music of the future

John Cage’s work was first judged as experimental (what can be seen as experimental: the work prior to the finished product, like a sketch of a painting before it is completed)

His work later on was critiqued as controversial. The critique: whether his work is really music at all

-There is always something to see, something to hear; there is no empty space. Ex: being in an anechoic chamber (a chamber without echoes or external sound), you hear your nervous system and your blood circulation. Until you die, there will always be sounds, and therefore a future for music

-Any sound may occur in any combination and in any continuity.

Advances were made in different countries after WW2 (Germany, France, Italy, Japan) in the field of tape recording (higher quality), making room for new creations (by juxtaposing different recorded sounds, for example)

A particular sound in space is created by: frequency or pitch, amplitude or loudness, overtone structure or timbre, duration, and morphology (how the sound begins, goes on and dies away)

Changing any of these changes the whole sound.

Magnetic tape offers new possibilities: the sounds created do not have to be guided by rules of rhythm, harmony, etc.; since music can occur wherever and whenever, the possibilities are nearly endless.

Controlled sound = sound that is man-made (constructed by rules) and used to express human sentiments

Let the sound be itself, and discovering sounds becomes a whole new endeavour; it takes on a whole new meaning

Experimental music (music composed in such a way that its outcome is unforeseeable) makes theorists think about what they hear and try to come up with theories and explanations for it (ex: doesn’t a mountain evoke a sense of wonder, and how can you explain that?). Sounds themselves may strike the listener and do not leave him feelingless

Artists in this new “experimental music” use chance operations (creating something and letting the people experiencing it draw their own conclusions).

Similar to the maker of a camera letting somebody else use it to take pictures.

Pieces played with multiple tape recordings make unique sounds at a unique time, making the pieces themselves (and the experience of the piece) interesting for both the composer and the listener. These pieces are comparable to the leaves of a tree: they might look similar, but none of them are identical (making each one unique)

A harmony to which one is unaccustomed would be the motto of these pieces.

Questioning our own understanding of what a harmony is


6) The Ambient Century

 

-Phonograph in 1877 by Edison



-Graphophone in 1885 by Bell & Tainter

-Edison improves the phonograph to compete with the graphophone

-In 1888 Emile Berliner invents a way to record to discs

-After WWII the 78 rpm discs that were the standard started to be replaced by 33⅓ rpm discs, and sound fidelity started increasing

-33⅓ rpm discs could hold more audio than 78 rpm discs: about 25 minutes instead of 5

-In the ’50s tape recording started appearing, creating a boom in the recording industry

-The invention of Magnetic tape was essential for recording

-Composers such as John Cage started experimenting with tape in many different ways

-1958 RCA invents the cassette recorder

-1970s vinyl records became very artful in the hands of bands like The Beatles

-1970s Sony develops DAT (Digital Audio Tape), which has higher fidelity than analog tape

-1982 cassettes are more popular than vinyl records

-Later on vinyl records are mainly left to collectors and other enthusiasts

-These enthusiasts go on to sample the records to create different styles, such as: ambient, house, techno and hip-hop

KEYBOARDS, SYNTHESIZERS AND COMPUTERS.

-Synth: an instrument that can create, by synthesis, different tones and frequencies to make a new sound.

Synths start out as machines with indentations printed on paper; from there, a valve system using air pressure is developed to create sounds; then they move to transistors (electric current, like the theremin) and tape loops (Mellotron), and finally to circuit boards like the synths we have now

-1850s The idea of speech synthesis is explored by German scientists

-1876 Elisha Gray creates a proto-synth that makes sounds used in Morse code transmissions

-1915 Audion Piano simulates a piano

-1920s the Theremin is invented

-1923 Sound waves are converted into light impulses that are then translated into sounds

-1935 the electric organ is introduced by Laurens Hammond

1962 – Mellotron: 1st sampling device, developed by Bill Fransen with the Bradley brothers of Birmingham

1948 – Invention of the transistor.

            - 1950s – Robert Moog begins development work with Raymond Scott

1964 – Moog modular system, the 1st great analog synth.

1970 – Minimoog created due to demand.

1970s – circuit-board switches take over from wire/patch-cord connectors.

1973 – Dr. John Chowning cracks open the idea of FM synthesis (a one-line sketch of the idea follows the timeline)

1965 – John Cage created the idea of “Cybersonics”

1968 – Bell Telephone Labs’ voice synthesis inspires HAL’s voice in 2001: A Space Odyssey.

1970 – 1st polyphonic (multi-tone) synth built in Canada by Hugh Le Caine

Donald Buchla – invented the concept of “sequencing”: storing an array of notes and playing them back (a small sketch of this idea follows the timeline).

Early 1980s - MIDI developed.

1986 – AKAI S900 Sampler developed

Early 1980s – Arrival of the CD = high fidelity.

1982 – 1st domestic CD player made

1996 – 1st recordable CD available to the public

2002 – DVD, by Panasonic and Sony/Philips
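
Two of the ideas in the timeline above lend themselves to tiny illustrations. First, Chowning's FM idea: a sketch of the core formula (my own, in Python; the carrier and modulator frequencies and the modulation index are arbitrary), in which a carrier sine wave has its phase wobbled by a second, modulating sine wave.

import math

def fm_sample(t, carrier_hz=220.0, modulator_hz=110.0, index=2.0):
    """One sample of a basic FM tone: a carrier whose phase is modulated by another sine wave."""
    return math.sin(2 * math.pi * carrier_hz * t + index * math.sin(2 * math.pi * modulator_hz * t))

# e.g. the first few samples at an 8,000 Hz sample rate
print([round(fm_sample(n / 8000), 3) for n in range(5)])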
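
Second, Buchla's sequencing concept: a minimal sketch (again my own illustration, not Buchla's hardware) of storing an array of notes and stepping through them on a clock. The frequencies and durations are arbitrary.

import time

# Each step is (frequency in Hz, duration in seconds): here an arbitrary C-E-G-C pattern.
sequence = [(261.6, 0.25), (329.6, 0.25), (392.0, 0.25), (523.3, 0.25)]

def play_sequence(steps, loops=2):
    """Step through the stored notes in order, looping like a hardware sequencer."""
    for _ in range(loops):
        for frequency, duration in steps:
            print(f"play {frequency:.1f} Hz for {duration} s")  # stand-in for a real oscillator
            time.sleep(duration)

play_sequence(sequence)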



7) Shapiro, “Modulations”
TIMELINE – IMPORTANT DATES AND PEOPLE IN SOUND/MUSIC/TECH DEVELOPMENT.

1876 – Alexander Graham Bell invents the telephone (converts sound to electrical signals)

1877- Edison invents phonograph

1906 – Canadian Thaddeus Cahill invents teleharmonium (first electronic instrument)

Lee De Forest (USA) invents the triode

1907 – Composer Ferruccio Busoni writes Sketch for a New Aesthetic of Music (about new scales and electronic music)

1913 – Italian futurist Luigi Russolo issues “The Art of Noises,” which calls for futurist musicians to substitute the limited tones of the day with “the infinite tones of noises of appropriate mechanisms.” Invented the noise intoner (intonarumori).

1915 – De Forest invents the oscillator, which produces tones from electronic signals; the basis for all electronic tone-generating instruments.

1920 – Guest and Merriman record the burial service of the Unknown Warrior at Westminster Abbey; first recording made with an ordinary volume microphone.

Leon Theremin develops the theremin (original name). First practical electronic instrument.

1926 – Antheil’s Ballet Mécanique, which included doorbells and airplane propellers, causes a sensation in Paris.

1928 – Martenot invents the ondes martenot; uses keyboard, ribbons and knobs to produce electronic tones.

1935 – Hammond creates electric (Hammond) organ.

Nazis develop tape recorder as propaganda tool.

1939 – John Cage creates Imaginary Landscape No. 1, a composition for recordings of pure frequency tones played on variable-speed turntables.

1946 – Atomic test (Bikini atoll, S. Pacific) broadcast on US radio.

1948 – Bell Labs develops vocoder (electronically transforms voice). Bequeaths hits to Joe Walsh, Peter Frampton, and other musical legends.

1949 – Pierre Schaeffer creates Symphonie pour un Homme Seul, the first musique concrète piece. First to take advantage of magnetic recording tape.

1951 – Les Paul pioneers overdubbing and speeds up the tape of his guitar solo (on “How High the Moon”). People thought he was shredding the guitar. But he wasn’t.

1954 – Guitar Slim and Johnny “Guitar” Watson introduce guitar distortion on The Things That I Used to Do and Space Guitar.

1956 – RCA engineers Olson and Belar introduce the RCA Sound Synthesizer

1958 – Varèse premieres his collage of electronic noise and airplane sounds, Poème Électronique, while Xenakis premieres Concret PH, a score for burning charcoal (both at the Brussels World’s Fair)

1961 – Guitarist Grady Martin unintentionally develops Fuzz Box effects pedal with his busted-amp solo

1967 – Subotnick records Silver Apples of the Moon using Buchla’s touch-pad synthesizer.

1968 – First completely synthesized record, Carlos’ Switched on Bach, is released. Becomes best selling classical LP of all time.

1971 – Little Roy’s “Hard Fighter” includes “Voo-doo,” instrumental featuring drop-out and echo. First dub record.

1977 – Holy trinity of synthesizer records is released: Donna Summer’s “I Feel Love,” Parliament’s “Flash Light,” and Kraftwerk’s Trans-Europe Express.

1978 – First fully programmable polyphonic synthesizer, the Sequential Circuits Prophet-5, comes to market

1981 – Kraftwerk releases Computer World, inventing techno

Grandmaster Flash invents hip-hop DJing.

1982 – Afrika Bambaataa, Arthur Baker and John Robie give Kraftwerk afros and Adidas on “Planet Rock.” Yamaha’s DX7 introduces digital technology to the music world. Roland releases the TR-808 (drum machine) and TB-303 (bassline machine).

1983 – Jesse Saunders releases “On and On,” first Chicago House record.

1985 – Juan Atkins releases “No UFO’s,” creating the blueprint for techno. MC ADE makes bass-heavy music explode in Miami

1986 – Marley Marl popularizes the sampler with “Eric B. Is President.”



1992 – Goldie’s “Terminator” introduces timestretching to hardcore Techno and lays foundation for Jungle.