2.2.2 Dubbing
As Kautský (1970) points out, the issue of film dubbing arose as soon as the first sound films, or "talkies," were introduced. The first approach to translating these films for foreign audiences (without the use of subtitles) was to film them again, using the same set and scenario but with foreign actors. This, however, turned out to be very expensive and time-consuming, and the quality of the foreign versions was usually poor. Dubbing studios were then founded, producing translated films in which only the sound was altered.
Early dubbing efforts were nevertheless rejected by audiences, since the poor quality and overall unnaturalness of the translated films made them quite dull to watch. Films were usually dubbed by a single person voicing all the characters, and there was little lip synchronization. Audiences in the early days of dubbing were also skeptical of hearing the actors in the film (e.g. Americans) speak the language of the viewers (e.g. Czech). This made dubbing unpopular, and subtitles remained the preferred choice for translating a film, at least until the arrival of television. TV channels in some countries (such as the French-, Italian-, German- and Spanish-speaking countries mentioned above) decided to broadcast dubbed films in order to relieve viewers of reading subtitles and thus make them feel more comfortable (Kautský 1970).
In the days when sound recording devices were still scarce, dubbing was broadcast live together with the film. The voice actors had to practice beforehand, as if rehearsing a theater play, and then, one actor per character, took turns dubbing. The job was very stressful, as there was no possibility of correcting mistakes or cutting out noises such as coughing. This ceased to be an issue once technology allowed the sound to be recorded, either continuously or in loops. In the former case, the voice actors dubbed their speeches continuously until they made a mistake; the film was then rewound a few seconds and the actors continued from that point. In loop dubbing, the sound is recorded in smaller segments: all the actors whose characters appear in a particular part of the film are present and take turns dubbing their characters (Bajerová, Škvorová, Tomíček 2002).
2.2.2.1 Dubbing Procedures
Lip-sync dubbing is the most expensive and time-consuming form of audiovisual translation. According to Pérez Gonzáles (2009, pp. 13-21), "the involvement of so many professionals in the dubbing process explains why this form of audiovisual translation is up to fifteen times more expensive than subtitling," yet "the actual translation and adaptation of the dialogue amounts to only 10 per cent of the overall cost."
As Pérez Gonzáles further describes, the earliest stage of dubbing is the raw translation of the source language dialogue list. Since the dialogue translator's skills usually do not include the adaptation and adjustment of the translated dialogue, the raw translation is then passed to a dubbing adapter. The adapter's job is to alter the raw translation so that the dialogue matches the actors' lip movements as closely as possible, depending on how close to the lips the shot was taken.
The final lip-sync dubbing should give the illusion that the studio actor's voice belongs to the screen actor. This is achieved, according to Robert Paquin (1998), by maintaining three kinds of synchronism: phonetic, semantic and dramatic.
To achieve phonetic synchronism, the sounds produced by the voice actor (both verbal and non-verbal, such as heavy breathing, sobbing, or screaming) have to match the lip movements of the actor on the screen. However, as Paquin (1998) notes, it is almost impossible to fully synchronize the translation with the lip movements without changing the meaning.
The essential requirements for good phonetic synchronism (especially in close-up shots) are therefore maintaining the length of the dubbed speech and matching the bilabial consonants. Bilabials (b, m, p) produce distinctive lip movements and should occur in the original and dubbed audio at the same time, although they are interchangeable with one another: which of the three bilabials is used in the adaptation does not matter (Paquin 1998).
Since the dubbing translator's main objective is for the translated speech to convey the same meaning as the original, semantic synchronism takes priority, although in some cases Paquin prefers phonetics over semantics. For instance, a number can be replaced with another number to achieve better lip-sync, provided the number is relatively insignificant and the overall meaning of the scene is not changed.
In the case of dubbed educational videos, where the choice of vocabulary has to be precise and any change of meaning could be harmful, the only phonetic rule that should be observed is matching the length of the dubbed speech with the opening of the mouth: a voice must not be heard while the speaker's mouth is closed, and vice versa.
It is also important that the dubbed characters remain as realistic as they are with the original sound, i.e. dramatically synchronized. For example, if an actor on the screen nods his head while speaking, the dubbed sentence should be affirmative. The language register, idiomatic expressions and accents of the characters should also be taken into account when translating and adapting, although this is often a difficult task (Paquin 1998).
2.2.3 To Sub or to Dub?
Both methods of film translation – subtitling and dubbing – have their pros and cons, concerning mainly the degree to which they interfere with the original text and the quality of the translation. As Szarkowska (2005, para. 4) points out, "dubbing is known to be the method that modifies the source text to a large extent and thus makes it familiar to the target audience through domestication." Ideal dubbing should therefore evoke the impression that the actors are actually speaking the target language. Subtitling, on the other hand, avoids modifying the original dialogues as much as possible and tries to preserve the "foreignness" of the film (Szarkowska 2005).
The quality of dubbing is often a subject of criticism. Mark Betz (2013), for example, argues that dubbed films can be perceived as unnatural, since perfect synchrony between the translated speech and the lip movements is hardly ever achieved. The dubbed films also often suffer from bad voice acting and acoustics, and from excessive alteration of the original soundtrack. As Betz claims, "the vocal qualities, tones, and rhythms of specific languages, combined with the gestures and facial expressions that mark national characters and acting styles, become literally lost in translation" (Betz 2013, para. 1).
However, the claim that subtitling does not interfere with the original video is not necessarily true either. As the audience focuses on the subtitled text, they may miss certain features of the film, such as background dialogues that are not subtitled due to spatial and temporal constraints. Viewers reading subtitles can also fail to notice facial expressions and other visual action happening on the screen while their eyes are distracted, or perceive them as desynchronized from the dialogue (Betz 2013).
The translation of humor in films and TV series also speaks against the use of subtitles. Humorous elements are often lost in translation, especially when they are based on cultural references or linguistic features (e.g. wordplay). According to a study by Anna Jankowska (2009), Translating Humor in Dubbing and Subtitling (conducted on the film Shrek and its translations into Spanish and Polish), the proportion of humorous elements lost in translation is approximately 5% in dubbing and 18% in subtitling. The difference is largely caused by visual humor, based entirely or partially on what can be seen on the screen, which an audience occupied by reading subtitles may miss.