Subject : multimedia


PART A

  1. Define AUDIO MIXER

An audio mixer is used to record multiple tracks of sound at a time and to edit each track individually. It gives finer control over the volume, speed and muting of individual tracks, and the overall volume can also be adjusted. Special effects like echo, chorus and panning can be applied as well.

  2. What is the use of WAV files?

WAV files can be used as an intermediary storage format, for example when saving songs from cassette tapes.

  3. Phonemes are the smallest distinguishable sounds in the dialect of a language.



  4. The two types of sound effects are natural and synthetic.



  5. Musical Instrument Digital Interface (MIDI) is a protocol or set of rules for connecting digital synthesizers to each other or to digital computers.

PART B

1. Note on ACOUSTICS

Sound is a form of energy similar to heat and light. It is generated by vibrating objects and can flow through a material medium from one place to another. When an object starts vibrating or oscillating rapidly, a part of its kinetic energy is imparted to the layer of the medium in contact with the object, e.g. the air surrounding a bell.

(Figure: Propagation of Sound — a sound source emits sound waves that travel to an observer.)

Acoustics is the branch of science dealing with the study of sound; it is concerned with the generation, transmission and reception of sound waves. Some of its main disciplines are:



Architectural Acoustics: The study of sound in buildings like halls, auditoriums etc.

Bio-Acoustics: The study of sounds made by animals, e.g. elephants, bats and lions.

Psycho-Acoustics: The study of hearing, perception and localization of sound by human beings.

Physical Acoustics: The interaction of sound with materials and fluids, e.g. shock waves travelling through the solid and liquid portions of the earth's crust.

Musical Acoustics: It is concerned with the study of sound in relation to musical instruments.

Ultrasonics: It is concerned with the study of high-frequency sound beyond the human hearing range.

2. Describe the NATURE OF SOUND WAVES

Sound is an alteration in pressure, particle displacement or particle velocity propagated through an elastic material. As sound energy propagates through the material medium, it sets up alternate regions of compression and rarefaction by shifting the particles of the medium.

The upper part of the waveform denotes the positive peak, which represents compression; the lower part denotes the negative peak, which represents rarefaction.

(Figure: Representation of sound waves — amplitude y plotted against time t; the crests (+ve peaks) correspond to compressions and the troughs (−ve peaks) to rarefactions.)

As sound represents disturbances of the medium's particles from their original positions, it cannot exist in a vacuum. Sound waves have the following characteristics:



  • Longitudinal Waves

  • Mechanical Waves

Longitudinal Waves:

The direction of propagation of sound is the same as the direction in which the medium particles oscillate. (By contrast, in a transverse wave such as light, the wave moves in a direction perpendicular to the particle oscillation.)



Mechanical Waves:

Sound waves are capable of being compressed and expanded like a spring. When they are compressed the peaks come closer together, and when expanded they move further apart.



  1. Compression: The frequency of the sound increases, which is perceived as a higher pitch.

  2. Expansion: The frequency of the sound decreases, which is perceived as dull and flat (a lower pitch).

This effect is called the Doppler effect, and it enables us to tell whether a sound source is moving towards us or away from us.
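For a stationary observer this can be made concrete with the textbook Doppler relation f' = f·v/(v + vs) — a standard result, not stated in these notes. A minimal Python sketch, using the 340 m/s speed of sound in air quoted later in these notes:

    def doppler_observed_frequency(f_source, v_source, v_sound=340.0):
        # v_source < 0: source approaching, pitch rises (compression).
        # v_source > 0: source receding, pitch drops (expansion).
        return f_source * v_sound / (v_sound + v_source)

    print(doppler_observed_frequency(440.0, -20.0))  # approaching: ~467.5 Hz
    print(doppler_observed_frequency(440.0, 20.0))   # receding:   ~415.6 Hz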

3. Short note on LOUDSPEAKERS

A loudspeaker is a device that converts electrical energy back into acoustic energy; it functions just opposite to a microphone. The electrical signal from an amplifier is fed into the loudspeaker, which reproduces it as sound at the corresponding loudness.



Dynamic speakers:

It is based on a coil and a paper cone. The cone, made of paper or fibre, is called the diaphragm and is placed near a magnet. When current is passed through the coil, a magnetic field is generated around the coil. This oscillates the diaphragm at the same frequencies as the electrical signal and reproduces the corresponding sound.



Woofers and Tweeters:

Loudspeakers are often divided into smaller units, each tuned for a particular frequency range. The units handling the low-frequency range are called woofers, the units handling the mid-range (about 400 Hz to 4 kHz) are mid-range units, and the units handling the high-frequency range (about 4 kHz to 20 kHz) are called tweeters. Low-frequency sound is called bass and high-frequency sound is called treble.



4. Write the BASICS OF STAFF NOTATIONS

The staff is simply five horizontal parallel lines on which we write the music. A note is an open dot, or the letter "o" turned on its side, and can have a tail or flag depending on its value.

A note represents a single tone; a series of tones is called a melody. Clefs provide the reference point needed to know what the notes on a staff mean. There are three reference clefs, G, F and C; the G clef is the simplest and most common and is called the treble clef. It is placed on the second line of the staff (the G line). There are 12 notes in Western music.

The notes are labeled using the first seven letters of the alphabet. The remaining notes are labeled by modifying a letter with a sharp (#). Two things can be noted: the repetition of the notes along the keyboard, and the long names given to the little black keys.



5. Note on AUDIO AND MULTIMEDIA

Types of Audio in a Presentation:

Speech: It is an important element of human communication and can be used effectively to transmit information. Two types of speech are available: digitized and synthesized. Digitized speech provides high-quality, natural speech but requires significant disk storage capacity. Synthesized speech, on the other hand, is not as storage intensive but may not sound as natural as human speech.

Music: It is used to set a tone or mood, provide connections or transitions, add interest and excitement, and evoke emotions.

Sound effects: They are used to enhance or augment the presentation of information or instruction. There are two types of sound effects: natural and synthetic. Natural sounds are the commonplace sounds that occur around us. Synthetic sounds are those produced electronically or artificially.

Reasons for Using Audio:

Redundancy is the key to better communication. Redundancy is defined as the transmission of the same or closely related information to the receiver or learner through two sensory channels. There are two conditions:



  • Information presented in each mode should be congruent, i.e. similar and not contradictory.

  • Identical presentation of words in sound and text should be avoided.

The second reason for including audio content is connected to motivation. A multimedia developer needs to be concerned with stimulating the user's motivation in all parts of the program.

6. Write VOICE RECOGNITION AND RESPONSE

Computers can be enabled to understand the spoken word and respond verbally too. These capabilities are known as voice recognition and voice response. They can be used as input/output mechanisms in conjunction with the normal I/O devices like the keyboard and mouse.

Voice recognition systems are classified as being able to handle either a small or a large vocabulary, and as either speaker dependent or speaker independent. A small vocabulary implies fewer than 1000 words or phrases, while a large vocabulary means a larger number of words or phrases. Phonemes are the smallest distinguishable sounds in the dialect of a language. In the voice response area, systems are divided into synthesized versus digitized speech. Synthesized speech systems allow a much larger vocabulary, but their quality is limited by that of the synthesizer hardware/software.

PART C

1. Explain the FUNDAMENTAL CHARACTERISTICS OF SOUND

A sound wave is associated with the following physical characteristics: amplitude, frequency, waveform and speed of propagation.



Amplitude:

The amplitude of a wave is the maximum displacement of a particle in the path of the wave; it is a measure of the peak-to-peak height of the wave. The amplitude is determined by the energy of the wave and relates to the loudness of the sound, which is measured in decibels (dB). The larger the energy of the sound wave, the greater the amplitude and the louder the perceived sound.
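As a worked illustration of the decibel scale (the 20·log10 amplitude-ratio form is the standard definition; the reference amplitude here is an arbitrary illustrative choice):

    import math

    def amplitude_to_db(amplitude, reference=1.0):
        # Decibels compare an amplitude against a reference amplitude.
        return 20.0 * math.log10(amplitude / reference)

    print(amplitude_to_db(2.0))   # doubling the amplitude adds ~6 dB
    print(amplitude_to_db(10.0))  # a tenfold amplitude is +20 dB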



(Figure: Two waves of different amplitudes — Wave 1 has the higher amplitude, Wave 2 the lower.)

Frequency:

Frequency measures the number of oscillations/vibrations of a particle in one second. The frequency of a sound wave determines the pitch of the sound: a high pitch corresponds to a higher frequency and a low pitch to a lower frequency. Frequency is measured in hertz (Hz), i.e. the number of vibrations per second.

The audible frequency range is referred to as the sonic range. Frequencies higher than the sonic range are referred to as ultrasonic, while frequencies lower than the sonic range are called infrasonic or sub-sonic.

(Figure: Two waves of different frequencies — Wave 2 has the higher frequency, Wave 1 the lower.)

Waveform:

The waveform is the actual shape of the sound wave represented pictorially. The shape can be sinusoidal, square or triangular; complex sounds are represented by irregular shapes. The waveform determines the quality (timbre) of the sound: different waveforms produce different hearing perceptions.



Speed:

An additional characteristic of a sound wave is its speed, which depends on the medium through which it travels and on the temperature of the medium: roughly 340 m/s in air and 1500 m/s in water.
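A minimal Python sketch that generates the samples of a pure sinusoidal tone, showing where amplitude and frequency enter the waveform; the 44.1 kHz sample rate and the duration are illustrative values:

    import math

    def sine_tone(freq_hz, amplitude, duration_s=0.01, sample_rate=44100):
        # y(n) = A * sin(2*pi*f*n/fs): amplitude sets loudness, freq_hz sets pitch.
        n_samples = int(duration_s * sample_rate)
        return [amplitude * math.sin(2 * math.pi * freq_hz * n / sample_rate)
                for n in range(n_samples)]

    samples = sine_tone(freq_hz=440.0, amplitude=0.5)  # an A4 tone at half scale
    print(len(samples))  # 441 samples for 10 ms at 44.1 kHz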



2. Explain the MICROPHONE

A microphone records sound by converting acoustic energy into electrical energy. The sound exists as a pattern of air pressure, and the microphone changes this pattern into an electrical current. Microphones may be classified into various categories; based on constructional features there are two types: moving coil (dynamic) and condenser.



Dynamic Microphone:

The moving coil type has a thin metallic diaphragm and an attached coil of wire. A magnetic field surrounds the coil, and when sound strikes the diaphragm it makes the coil move within the magnetic field. A current is produced that is proportional to the intensity of the sound hitting the diaphragm.



Condenser Microphone:

In this type the diaphragm is actually one plate of a capacitor. Sound on the diaphragm moves the plate and generates a voltage. A current flows in an attached wire which, in this case too, is proportional to the intensity of the sound on the diaphragm.

(Figure: (a) Moving coil type microphone — sound waves vibrate a diaphragm attached to a coil within a magnetic field, producing a current at the electrical terminals. (b) Condenser type microphone — the diaphragm forms one plate of a charged capacitor.)

Based on directionality, microphones are divided into three types:

  1. Omni-directional

  2. Bi-directional

  3. Uni-directional

OMNI-Directional:

This microphone is sensitive to sounds coming from all directions. It consists of a container open at one end with a diaphragm inside it. Sound from all directions enters the opening, impinges on the diaphragm and causes it to vibrate. This vibration is translated into electrical signals.

(Figure: Omni-directional microphone — the diaphragm responds equally to sound arriving from 0º, 90º, 180º and 270º.)

BI-Directional:

Sounds coming from two sides, front and rear, are recorded, so it can record two sources of sound at a time. The container has two openings on opposite sides of the diaphragm. Sound produced at the front (0º) enters through the first opening and vibrates the diaphragm, while sound at 180º strikes it from the opposite side.

(Figure: Bi-directional microphone — sound reaches the diaphragm through openings on two opposite sides.)



UNI-Directional:

It records sound from a single source and is similar to the bi-directional type with one exception: the rear side has a resistive material like foam or cloth near the diaphragm, which absorbs the energy coming from the rear opening. Sound produced at the front of the microphone strikes the diaphragm directly, and the part that wraps around to the rear is attenuated by the resistive material.

(Figure: Uni-directional microphone — resistive material behind the diaphragm absorbs sound entering from the rear opening.)

Polar Plot:

A polar plot is a graph plotting the output level of a microphone against the angle at which the incident sound arrives. An omni-directional microphone produces equal output for all angles, whereas in a bi-directional microphone the output is maximum for sound coming from 0º and 180º. In a uni-directional microphone the output is maximum at the front and minimum at the rear; it decreases towards the sides but does not drop to zero.
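These three responses can be modelled with the usual idealized gain formulas — omni: 1, bi-directional (figure-of-eight): |cos θ|, uni-directional (cardioid): (1 + cos θ)/2 — which are standard textbook idealizations, not taken from these notes:

    import math

    def polar_gain(pattern, angle_deg):
        # Idealized microphone sensitivity at a given incidence angle.
        theta = math.radians(angle_deg)
        if pattern == "omni":
            return 1.0
        if pattern == "bi":   # figure-of-eight
            return abs(math.cos(theta))
        if pattern == "uni":  # cardioid
            return (1.0 + math.cos(theta)) / 2.0
        raise ValueError(pattern)

    for angle in (0, 90, 180):
        print(angle, polar_gain("omni", angle), polar_gain("bi", angle),
              polar_gain("uni", angle))
    # uni: 1.0 at the front, 0.5 at the sides, 0.0 at the rear.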



3. Detailed note on AMPLIFIER

An amplifier is a device in which a varying input signal controls the flow of energy to produce an output signal with a larger amplitude. The ratio of the output amplitude to the input amplitude is known as the gain of the amplifier. Amplifier circuits are designated Class A, B, AB and C for analogue designs, and Class D and E for digital designs.



Class-A:

Class-A amplifiers use 100% of the input cycle for generating the output. These amplifiers are not generally used as audio amplifiers nowadays due to the generation of a large amount of heat, which necessitates large heat sinks for its dissipation.

Class-A gives the best quality sound because the transistor operates on the most linear portion of its characteristic curve, which produces minimum distortion of the output wave.

Class-B and Class-AB:

Class-B amplifiers use only half of the input cycle for amplification. They are used in RF power amplifiers where distortion is unimportant. This gives good efficiency, but they suffer from a small distortion at the point where the two halves are joined, known as crossover distortion.

The elements are biased in such a way that they operate over a linear portion of their characteristic curve during a half cycle, but still conduct a small amount of current in the other half. This arrangement is called Class-AB.

Class-C:

Class-C amplifiers use less than half of the input cycle for amplification. Although they produce a large amount of distortion, they are the most efficient. They can be used in situations where other auxiliary equipment can be used to reduce the distortion produced.



Negative Feedback:

Distortion is perceived as noise at the output. Though good design can reduce distortion, there is a limit to how much it can be reduced. One way of reducing distortion further is to introduce negative feedback: an inverted portion of the output is fed back and combined with the input, so that the distortion subsequently produced by the amplifier is largely cancelled and the output signal is more or less linear. This is a popular mechanism for reducing noise in the output signal.
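The effect can be quantified with the standard closed-loop gain relation G = A/(1 + A·β) — a textbook result, not stated in these notes — where A is the open-loop gain and β the feedback fraction. With a large loop gain A·β, the output depends mainly on β, suppressing the amplifier's own nonlinearity:

    def closed_loop_gain(open_loop_gain, feedback_fraction):
        # Negative feedback trades raw gain for linearity and stability.
        return open_loop_gain / (1.0 + open_loop_gain * feedback_fraction)

    # Even if the open-loop gain varies by 2x, the closed-loop gain barely moves:
    print(closed_loop_gain(100000.0, 0.01))  # ~99.90
    print(closed_loop_gain(200000.0, 0.01))  # ~99.95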



Class-D:

A Class-D digital amplifier uses a series of transistors as switches. The input signal is converted to digital pulses using an ADC. The pulses are then used to switch the transistors on and off, and the currents from the transistors are fed to a summing network to produce the output.



Class-E:

It uses Pulse Width Modulation (PWM) to produce the output wave: the pulse width is proportional to the desired amplitude. It requires a single transistor for switching and is therefore cheaper than the other classes, but it requires a fast clock to enable the switching operations.
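A minimal sketch of the PWM idea, assuming samples normalized to [-1, 1] and an illustrative 8-step switching period: the pulse stays high for a fraction of each period proportional to the desired amplitude.

    def pwm_period(sample, steps_per_period=8):
        # Map a sample in [-1, 1] to a duty cycle in [0, 1],
        # then emit one switching period of on/off states.
        duty = (sample + 1.0) / 2.0
        high_steps = round(duty * steps_per_period)
        return [1] * high_steps + [0] * (steps_per_period - high_steps)

    print(pwm_period(0.0))  # 50% duty: [1, 1, 1, 1, 0, 0, 0, 0]
    print(pwm_period(0.5))  # 75% duty: [1, 1, 1, 1, 1, 1, 0, 0]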



4. Describe DIGITAL AUDIO

An analog quantity is converted to digital audio using sampling, quantization and codeword generation.



Practical Audio Sampling Parameter:

According to the Nyquist theorem, the sampling frequency has to be at least twice the highest input frequency. In practice a sampling frequency of 44 to 48 kHz is employed; this is needed when we want to represent all the frequencies the human ear responds to, and is used mainly for recording musical instruments.



Aliasing:

The sampling theorem requires the highest audio frequency to be less than the Nyquist frequency (half the sampling frequency). If an audio frequency is greater than the Nyquist frequency, we get erroneous signals that appear back in the audible band. This is called the aliasing effect.
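Assuming an ideal sampler, the reported frequency folds back as |f − fs·round(f/fs)|, a standard consequence of the sampling theorem; a small sketch:

    def aliased_frequency(f_hz, sample_rate_hz):
        # Frequency an ideal sampler actually reports for a tone at f_hz.
        return abs(f_hz - sample_rate_hz * round(f_hz / sample_rate_hz))

    fs = 44100
    print(aliased_frequency(10000, fs))  # below Nyquist: 10000 Hz, unchanged
    print(aliased_frequency(30000, fs))  # above Nyquist: aliases to 14100 Hz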



Bit Rate and File Size:

An increase in the sampling rate or resolution increases the file size, requiring higher storage space and more processing power.
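The uncompressed file size follows directly from the sampling parameters; a quick sketch using CD-style values as an illustration:

    def audio_file_size_bytes(sample_rate, bits_per_sample, channels, seconds):
        # Uncompressed PCM size: samples/sec * bytes/sample * channels * duration.
        return sample_rate * (bits_per_sample // 8) * channels * seconds

    # One minute of CD-quality stereo audio (44.1 kHz, 16-bit, 2 channels):
    print(audio_file_size_bytes(44100, 16, 2, 60))  # 10,584,000 bytes (~10 MB)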



Streaming Audio:

Streaming audio is used for playing audio files over the internet: the music starts playing as soon as the receiving buffer memory receives part of the file, while the remaining portion continues to download simultaneously. A popular example is RealNetworks' RealAudio streaming format.

The popular software Macromedia Shockwave is another example of streaming audio software. Text information can be stored alongside the audio, which gives better performance. Streaming audio uses UDP (User Datagram Protocol) to transfer the file across the network.

HIFI (High Fidelity):

HiFi means reproduction of sound and images at their original quality. RIAA (Recording Industry Association of America) equalization was introduced by Dan Shanefield in 1974.



  1. RIAA Equalizer:

It is used for correcting the playback of records. Equalization is the process of modifying the frequency envelope of a sound. A peaking equalizer changes frequencies around a central point, while a shelving equalizer changes a wide range of frequencies by a fixed amount. A graphic equalizer consists of a series of band filters with independent gain controls for each band.

The RIAA specification defines a curve called the RIAA equalization curve, which is a plot of amplitude in dB against frequency in Hz.



5. Detailed Note on SYNTHESIZERS

A synthesizer is an electronic instrument that generates digital samples of the sounds of various instruments. In the absence of the original instruments, a special IC chip is used for producing the sounds.



Types of Synthesizers:

Synthesizers can be classified into two categories:



  1. FM Synthesizers:

It generates sound by combining elementary sinusoidal tones to build up a note having the desired waveform; any waveform can be decomposed into elementary sinusoidal components of varying amplitude and frequency (a sketch of this idea appears after the two types below). Early synthesizers were of the FM type; while they could provide a variety of sounds, the sounds tended to lack the depth and realism of real-world sounds.

  2. Wavetable Synthesizers:

It produces sound by retrieving high-quality digital recordings of actual instruments from memory and playing them on demand. The sounds stored in a synthesizer are called patches, and the collection of all patches is called the patch map. Each sound in the patch map has a unique ID number so it can be identified during playback.
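As an illustration of the FM-type idea above — building a waveform from elementary sinusoidal tones — the sketch below sums the odd harmonics of the well-known Fourier series of a square wave (an illustrative choice, not an example from these notes):

    import math

    def square_approx(f0_hz, t, n_harmonics=5):
        # Sum the odd harmonics 1, 3, 5, ... with amplitude 1/n.
        return sum(math.sin(2 * math.pi * n * f0_hz * t) / n
                   for n in range(1, 2 * n_harmonics, 2))

    # More harmonics -> the value at a given instant approaches a square wave:
    print(square_approx(440.0, 0.0005, n_harmonics=1))
    print(square_approx(440.0, 0.0005, n_harmonics=40))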

Characteristics of Synthesizers:

A key characteristic is the ability to play more than one note at a time: such a synthesizer is polyphonic, and typical polyphonies are 16, 24 or 32 notes. A synthesizer is said to be multitimbral if it is capable of producing two or more different instrument sounds simultaneously. For example, a synthesizer that can play five notes simultaneously is polyphonic; if those notes can use different instrument sounds at once, it is also multitimbral.



6. Explain MUSICAL INSTRUMENT DIGITAL INTERFACE (MIDI)

Musicians were creating new and different sounds worldwide, and the musical world began to recognize the synthesizer as a legitimate musical instrument. Performers desired to "layer" their new sound creations, playing two sounds together to create a "larger" sound.



MIDI

The Musical Instrument Digital Interface is a protocol, or set of rules, for connecting digital synthesizers to each other or to digital computers. Much in the same way that two computers communicate via modems, two synthesizers communicate via MIDI. The information exchanged between two MIDI devices is musical in nature.



MIDI Manufacturers Association (MMA)

Technical and administrative issues related to the MIDI specification are handled by the MIDI Manufacturers Association (MMA) and the Japan MIDI Standards Committee (JMSC). Since 1985 the MMA has produced 11 major specifications based on MIDI that have enabled new products and new markets. The MMA is the only source for up-to-date MIDI specifications; it issues unique manufacturer IDs and licenses logos that identify MMA standards.



MIDI Specification:

The specification is divided into three portions:



  1. Hardware

  2. Messages

  3. File Formats.

HARDWARE:

MIDI uses a five-conductor (5-pin DIN) cable to connect synthesizers; pins 4 and 5 carry the data signal, pin 2 is the shield, and the remaining pins are unused. Only a MIDI cable should be used, for reliable and efficient data transmission; MIDI cables are more expensive than standard cables but are reliable. An interface adapter can be used that has the familiar 25-pin PC serial connector on one side and two round 5-pin MIDI connectors on the other.



MESSAGES:

MIDI messages constitute an entire music description language in binary form. Each word describing an action of a musical performance is assigned a specific binary code. To sound a note in MIDI language, you send a "Note On" message and then assign that note a "velocity", which indicates how fast the key was pressed or released and in turn changes the sound produced. The messages are transmitted as a unidirectional asynchronous bit stream at 31.25 kbits/s.
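As a sketch of what such a message looks like on the wire (the 0x90 Note On status byte and the 7-bit note and velocity ranges are standard MIDI facts):

    def note_on(channel, note, velocity):
        # Status byte 0x90 | channel, then 7-bit note number and velocity.
        assert 0 <= channel <= 15 and 0 <= note <= 127 and 0 <= velocity <= 127
        return bytes([0x90 | channel, note, velocity])

    # Middle C (note 60) on channel 0, struck fairly hard:
    print(note_on(0, 60, 100).hex())  # '903c64'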



FILE FORMATS:

The MIDI specification makes provision for saving synthesizer audio in a separate file format called MIDI files. These are extremely compact compared to WAV files.



7. Explain the SOUND CARD

The sound card is an expansion board in the multimedia PC which interfaces with the CPU through the motherboard. It is externally connected to the speakers for playback of sound, and is also used for digitizing, recording and compressing audio files. The most famous sound card is the Sound Blaster.



Basic components:

The basic internal components of the sound card are:



  • Memory Buffer:

It refers to the local memory of the sound card for storing digital audio files. When sound is to be played, the audio is first stored in the memory buffer before it is processed by the DAC.

  • DSP (Digital signal processor):

The sound card uses an architecture called the multipurpose DSP, which is the main controller of the audio signal. It has special microprocessors designed for the algorithms used in audio technology.

  • DAC and ADC:

These are used for digitizing sound and reconverting it to analog form. The microphone provides an analog input to the sound card, which is converted to digital form using the ADC, while the DAC converts digital audio back to analog for playback.

  • Wavetable/FM Synthesizer Chip:

A MIDI synthesizer chip is needed to recognize the MIDI sequences recorded on disk. The chip can be either an FM synthesizer or a wavetable synthesizer; the FM chip can play about 20 sounds simultaneously to reproduce the original sound, while the wavetable chip replays stored samples of actual instruments.

  • CD Interface:

It provides the internal connection between the CD drive and the sound card. It allows an audio CD to be played by pressing the play button.

  • I/O Ports:

I/P port 1: MIC: Input port for feeding audio data to the sound card through the microphone connected to it.

I/P port 2: Line In: Input port for feeding audio data from external CD/cassette players for recording or playback.

O/P Port 1: Speakers: Output port for attaching speakers for playback of sound files.

O/P Port 2: Line Out: Output port for connecting to external recording devices like a cassette player or an external amplifier.

MIDI: Input port for interfacing with an external synthesizer. This is present in some cards like the Sound Blaster. MIDI songs can be composed using PC software and then sent to the sound modules of external synthesizers for playback.



  • Processing Audio Files:

WAV files: The sound card receives input from a microphone as an analog signal, which goes to an ADC that converts the signal to digital form. The binary data is sent to the memory buffer and from there to the DSP, which compresses the data; from the DSP the data goes to the PC's main processor and is finally saved on the hard disk.

MIDI files: To handle MIDI, the sound card requires a synthesizer that recognizes MIDI instructions. MIDI files, which contain playing instructions, can be created using suitable software. For playback, the file's data is sent by the CPU to the DSP.

The DSP receives the MIDI instructions and passes them to the synthesizer chip, which contains the audio samples. The synthesized sound is sent to the DAC, and the output comes out via the speakers.



8. Describe AUDIO TRANSMISSION

To transmit digital data between devices there must be a communication channel and a clock for timing. This interconnection requires an audio format that can be recognized by both the sender and the receiver.



AES/EBU (Audio Engineering Society / European Broadcasting Union):

It is a standard, finalized in 1992, for carrying digital audio signals between devices. It specifies the format for digital transmission of two channels, sampled and quantized uniformly, over a twisted-pair cable.



Sony Philips Digital Interconnect Format (SPDIF):

This is also used for transferring audio between devices and was developed based on the AES/EBU standard. The XLR connectors are substituted by an RCA jack or an optical TOSLINK connector, both of which are easier to handle and less costly. The cable was changed from 110-ohm balanced twisted pair to 75-ohm coaxial cable. The set of words for each sample of each channel is called a data frame.



Phone audio jack:

It is the most common audio connector. Modern jack plugs are available in three sizes: 2.5, 3.5 and 6.5 millimeters. All three versions are available with mono and stereo conductors. The stereo version uses two conductors for carrying the left and right channel data and a third conductor as ground.

Common uses of these plugs and jacks include headphones, earphones etc.

RCA Jack:

This is a type of connector used for both audio and video in home appliances, developed by the Radio Corporation of America (RCA). The plug consists of a central male connector surrounded by a metal ring and is found at cable ends. The female connector, found on devices, consists of a central hole with a ring of metal around it. The jack also has a small plastic ring which is color coded for the signal type: yellow for composite video, red for the right audio channel and white for the left audio channel.



9. Explain the AUDIO RECORDING DEVICES

Phonograph Cylinder (1870s):

Thomas Alva Edison performed the first recording in 1877 on a tinfoil sheet wrapped around a rotating metal cylinder, known as the phonograph cylinder.



Gramophone Record (1895):

It is an analog sound recording medium consisting of a flat disc rotating at constant angular velocity with inscribed spiral grooves in which a stylus or needle rides. The groove was in the form of a spiral running from the outer edge towards the center.



Wire recording (1930s):

Analog audio recording was also made onto thin wire, known as wire recording. The wire was wound on reels and moved at 24 inches per second.



Reel to Reel tape Recording (1940s):

Magnetic tape is a storage medium consisting of a coating of magnetic material on a thin plastic strip. There were actually two reels: the reel initially containing the tape, called the feed reel, is mounted on a spindle, and the end of the tape is manually pulled out of the reel.



Compact cassette (1963):

It is an analog storage medium introduced by Philips.



Microcassette (1969):

A smaller variation of the compact audio cassette, the microcassette could only be recorded by microcassette tape recorders and was used mainly for voice recording, rarely for music.



Compact Disc Digital Audio System (1979):

It is an optical disc used to store digital data, originally developed for storing digital audio. Developed mainly by Philips and Sony, the CD-DA stored spiral tracks of digital audio sampled at 44.1 kHz with 16 bits, and came to be popularly known as the Audio CD.
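From those parameters the raw Audio-CD data rate follows directly; a quick sketch:

    # Raw CD-DA data rate: 44,100 samples/s x 16 bits x 2 stereo channels.
    bits_per_second = 44100 * 16 * 2
    print(bits_per_second)       # 1,411,200 bits/s (~1.4 Mbit/s)
    print(bits_per_second // 8)  # 176,400 bytes for each second of audio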



Digital Audio Tape (DAT) (1987):

A smaller version of the compact audio cassette. The technology of DAT is closely based on that of video recorders, using a rotating head and helical scan to record data.



Digital Data Storage (DDS)(1989):

DDS is a format for storing and backing up computer data on magnetic tape that evolved from Digital Audio Tape (DAT) technology, which was originally created for CD-quality audio recording.





Digital Compact Cassette (1990s):

DCC players could play either type of tape, the basic idea behind this being backward compatibility: Users could migrate to digital recording without having to discard their old analog tapes.



Mini Disc (1991):

It is a disc-based storage medium for storing any kind of data, but usually audio.



10. Explain the AUDIO FILE FORMATS AND CODECs

WAV (Waveform): WAV audio can be edited and manipulated with relative ease using software. WAV files can be used as an intermediary storage format when saving songs from cassette tapes.

AIFF (Audio Interchange File Format): It is a generic file format developed by Electronic Arts in 1985 in order to facilitate data transfer between the software programs of different vendors. It is a file format used for storing audio data on PCs.

MID (MIDI): MIDI files contain instructions on how to play a piece of music rather than the audio itself. The files are very compact in size and ideal for web applications.

AU (Audio): It contains a header of six 32-bit words which defines metadata about the actual audio data following it.

MP3 (MPEG Layer III): A highly compressed audio format providing almost CD-quality sound. It can compress a typical song into about 5 MB, for which reason it is extensively used for putting audio content on the internet.

VOC (Voice): These files can contain markers for looping and synchronization markers for multimedia applications.

RMF (Rich Music Format): It is based on a high-performance music and audio playback technology created by Beatnik Inc. (an audio engine which is a multi-platform, real-time, 64-voice software synthesizer fully compatible with the General MIDI specification).

11. Detailed Note on AUDIO PROCESSING SOFTWARE

Audio processing software allows you to open, edit, manipulate, transform and save digital audio sound files in various formats.



  1. Opening an existing sound file: An audio editor allows you to open an existing sound file and view it as a waveform. Most software supports the Windows native audio file format WAV, and some additional file formats like AIFF may also be supported.

  2. Playing a file: After opening a file in an editor, the user is allowed to play it back. A playback head is usually seen moving across the displayed waveform while the corresponding sound is heard on the speakers.

  3. Playing selected portions of a file: The user can use the mouse to select a specific portion of the waveform and click on the play button; instead of playing the entire file, only the selected portion is played.

  4. Accurately positioning the playback head: The user can select portions of the file by dragging with the mouse pointer using eye estimation, by mentioning the time in hh:mm:ss format, or by specifying a range with a start position and an end position.

  5. Copying and pasting portions of a file: The user selects a portion of a file and then chooses the copy function, whereupon the selected portion is copied to the clipboard. This portion can then be placed at a different point in the same file or in a different file.

  6. Saving a file: The user can save an edited file by specifying the filename and location.



  7. Using Cut, Trim and Undo functions: The cut function enables the user to select a portion of the audio file and discard that portion. The trim function allows one to select a portion of a file and discard the remaining portion. The undo function allows the user to undo a number of previous steps.

  8. Mixing Sounds: The mix function allows one to mix two sounds so that both are heard simultaneously while the total duration of the sound remains unchanged (see the sketch after this list).

  9. Special Effects: Most audio editing software allows the application of filters for changing the nature of a sound clip in some pre-determined way, mostly for giving special effects.

  10. Removing Noise: Noise is any unwanted signal that gets mixed with the original sound due to various reasons, e.g. environmental sounds or electrical interference.
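A minimal sketch of the mix operation mentioned above, assuming the two sounds are lists of samples normalized to [-1, 1]: corresponding samples are summed and clipped, and the result is as long as the longer input, so the total duration is unchanged.

    from itertools import zip_longest

    def mix(a, b):
        # Add corresponding samples; clip to [-1, 1] to avoid overflow.
        return [max(-1.0, min(1.0, x + y))
                for x, y in zip_longest(a, b, fillvalue=0.0)]

    voice = [0.2, 0.4, -0.3, 0.1]
    music = [0.5, -0.2, 0.6]
    print(mix(voice, music))  # ~[0.7, 0.2, 0.3, 0.1], same length as the longer clip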


SUBJECT : MULTIMEDIA

CLASS : II MSC CS

UNIT : I

SEMESTER : 3

STAFF : P.RADHA

UNIT-IV - VIDEO


