The Concise Encyclopedia of Applied Linguistics. Carol A. Chapelle


actually listen to them, comic books have much in common with audiovisual products, and the process of their translation involves similar constraints. Comic books consist of a series of framed images with dialogues contained in speech and thought bubbles linked to characters' mouths in a way that evokes real dialogue. Furthermore, much of the conventional language in comic books has a highly aural flavor reflected in words, often placed outside speech bubbles, such as “boom!,” “vroom!,” “zoink!,” and “zzzzzzzz.” Graphic frames and dialogues come together to create a narrative that unfolds in real time, rather like that of a film. So, although comic book images are static, readers are able to imagine speech and noise while following the sequential framework. Comic books can thus be placed at the interface between print texts and screen products such as films and video games. Significantly, there is a strong tradition of comic characters that subsequently developed into filmic, animated, or both filmic and animated form (e.g., Batman, Spiderman), while the late 20th century saw the expansion of traditional Japanese comic books, manga, into a new form of animated cartoon known as anime, which has since flourished into a global industry; for example, Pokemon and Dragon Ball (see Zanettin, 2008, 2014).

      The main modalities for screen translation of fictional products are dubbing and subtitling. Traditionally, Western Europe has been divided into a subtitling block, comprising the Scandinavian and Benelux countries, Greece, and Portugal, and a dubbing block made up of the so‐called “FIGS” countries (France, Italy, Germany, and Spain). However, the situation is no longer so clear cut. The spread of DVD technology, followed by widespread cable and Internet services, highlighted the cost‐effectiveness of subtitling and allowed this modality to enter many dubbing strongholds as an alternative. Furthermore, many cinemas in dubbing countries now also offer screenings with subtitles, while digital television provides viewers with a choice between the two modalities. In addition, political entities such as Wales, Catalonia, and the Basque Country have chosen dubbing as a support for minority languages (O'Connell, 1996; Izard, 2000), while Scandinavian countries, which traditionally dubbed only children's television programs, now also dub some programs for adults (Gottlieb, 2001a). English‐speaking countries tend to prefer subtitling for the few foreign‐language films that enter their markets, which tend to be restricted to educated arthouse cinema audiences (Chiaro, 2008, 2009a). Outside Europe, dubbing is strong in mainland China, Japan, Latin America, and Québec, while subtitling is the preferred mode in Israel, Hong Kong, and Thailand.

      In the early 20th century, the birth of talking film and the rise of Hollywood led producers to come to terms with the issue of marketing their products in different languages. Initially, producers inserted short dialogues in the target language within the English dialogues, but when this proved unsatisfactory to audiences, they began producing multiple‐language versions of the same film. Paramount Pictures, for example, set up a large studio in Joinville, France, dedicated to the production of these multiple versions, which, however, turned out to be economically unfeasible. The idea of substituting the original voice track with one in another language is generally attributed to the Austrian film producer Jakob Karol, who in 1930 realized that the technology to do this was already available (see Paolinelli & Di Fortunato, 2005, pp. 45–6). At first, dubbing into European languages was carried out in the USA; Hal Roach famously had Laurel and Hardy read off prompts in French, German, Italian, and Spanish, but by the early 1930s each European country had begun to set up its own dubbing industry.

      According to Danan (1991, p. 612) “dubbing is an assertion of the supremacy of the national language” and is often linked to régimes wishing to exalt their national languages. Indeed, it is not by chance that Austria, Germany, Italy, and Spain should opt for dubbing over subtitling while France may well have chosen dubbing to perpetuate its well‐established tradition of caring for the French language and protecting it from the onslaught of anglicisms.

      Traditionally, the entire process of dubbing a film was overseen by a project manager, aided by an assistant, who was responsible for negotiating costs, timescales, and the general organizational aspects of the process. Dubbing a film began with a literal, word‐for‐word translation of the script. Next, a “dubbing‐translator” adapted the translation so that the new target‐language utterances sounded natural and were in sync with the lip movements of the actors on screen. Dubbing‐translators did not need to be proficient in the source language, but they did need to be talented scriptwriters in the target language so as to render the new dialogue as natural and credible as possible. In the meantime, the dubbing assistant would divide the film into “loops,” or short tracks, and begin organizing studio‐recording shifts for the various actors, or voice talents. Once recording began, actors watched the film and listened to the original soundtrack through headphones while reading the translated script. However, actors were free to modify the translated script as they saw fit. The completed recording of the dub was finally mixed and balanced with the international track and musical score. This artisan approach is, however, being largely replaced by digital technology, which does away with the need to prepare reels of celluloid as short tracks and for voice talents to perform in a recording studio, since actors can now record from their personal workstations while software takes care of editing the different tracks together. Moreover, advances in technology are such that the facial and lip movements of actors on film can now be modified to synchronize with the target‐language dialogue, while other software programs are able to match the voice quality of the original actor with the recording of the translation, giving the impression that it is the original actor speaking (Chiaro, 2009b).

      Subtitles consist of “the rendering in a different language of verbal messages in filmic media in the shape of one or more lines of written text presented on the screen in sync with the original message” (Gottlieb, 2001b, p. 87).

      Subtitles are an abbreviated written translation of what can be heard on screen. They are known as “open” when they are incorporated onto the film itself and as “closed” when chosen by the viewer from a DVD or teletext menu. At film festivals, subtitles are generally projected live onto the screen in real time.

      Subtitles considerably reduce the actual dialogue, simply because viewers need time to read them without running the risk of missing any of the action on screen (Antonini, 2005, p. 213). Furthermore, viewers should ideally be unaware that they are reading and be able simultaneously to watch the film, read the subtitles, and enjoy it. The subtitling process involves three basic steps: elimination, rendering, and condensation. Elimination consists of removing elements that do not change the meaning of the source dialogue, such as false starts, repetitions, and hesitations. Rendering refers to the elimination of taboo items, slang, and dialect, and condensation involves the simplification of the original syntax in order to render the subs more easily readable (Antonini, 2005, pp. 213–15). Traditionally, a technician carries out the spotting, or cueing, process, which involves marking the transcript of