The translation workflow described in one of the previous chapters includes many new features. One of the newest, most sophisticated, and most important of these features, which has not been covered yet, is machine translation. Machine translation is intended to lower the cost of translation and to streamline the translation process. Its description and explanation raises some interesting questions: Is machine translation a benefit or a hindrance to translation and the translator? Does it have any limitations? What can a translator expect of machine-translated texts?
Vít Baisa states that “Strojový překlad je obor počítačové lingvistiky zabývající se návrhem, implementací a aplikací automatických systémů (programů) pro překlad textu s minimálním zásahem člověka” [“Machine translation is a field of computational linguistics concerned with the design, implementation, and application of automatic systems (programs) for translating text with minimal human intervention”]1 (6). A similar definition is given by W. John Hutchins in his article “Machine Translation: A Brief History” in the collection Concise History of the Language Sciences: From the Sumerians to the Cognitivists: “The term machine translation (MT) refers to computerized systems responsible for the production of translations with or without human assistance” (431). In reality, however, the machines (the engines) do not translate without human involvement: humans create the linguistic corpora used by machine translation engines, and human translators have to edit the output, i.e. correct the resulting translated text. In fact, contemporary machine translation is a form of pre-translation of files containing the source texts.
Hutchins also mentions that MT systems were not originally intended for general use.
“The idea of using computers to translate or help translate human language is almost as old as the computer itself. Indeed, MT is one of the oldest non-numeric applications of computers” (Trujillo 4).
The foundations of machine translation were laid already in the pre-computer era. As early as the seventeenth century, scholars, philosophers, and later linguists searched for tools and methods to overcome linguistic barriers and to express the meaning of statements in some language-neutral representation.
The 1930s represent the first important milestone in the development of machine translation, introducing the ground-breaking term “translating machines” and two men claiming credit for the idea of mechanizing translation: Georges Artsrouni and P. P. Troyanskii. In 1933, the Russian inventor Petr Petrovich Troyanskii (1894–1950) proposed a mechanical device for translating from one language into another. The motive for his pioneering efforts was the time-consuming work with dictionaries, which he wanted to automate. Troyanskii proposed “not only a method for an automatic bilingual dictionary, but also a scheme for coding interlingual grammatical roles (based on Esperanto) and an outline of how analysis and synthesis might work” (Hutchins 2009).
The 1940s were characterized by an information boom (radio and TV broadcasts) after World War II and by the birth of computers, which had been successfully employed in deciphering encrypted messages during the war.
Many researchers consider Warren Weaver’s letter (memorandum) to be the real beginning of MT research in the United States. Weaver was the director of the Rockefeller Foundation’s natural sciences division, and his letter addressed several issues and ideas associated with machine translation (e.g. ambiguity, cryptography, and the properties of individual languages). Over the following decades the principles of machine translation were the object of scientific research and were discussed predominantly in academic circles. Early MT systems made use of bilingual dictionaries and very simple rules for rearranging words into the target language. However, syntactic rules proved too complex, and the output of MT did not meet the scientists’ expectations and was rather disappointing (Hutchins 2009). The 1966 report of ALPAC (the Automatic Language Processing Advisory Committee) concluded that machine translation did not produce useful results and was not cost-effective, describing it as slow and inaccurate. The report led to a temporary termination of MT research in the United States.
During the 1970s research continued in Canada, Europe (especially in France and Germany), and Japan, and it brought some significant successes. These were represented by the Canadian system Météo, which began translating weather forecasts and reports in 1976, and by the system SYSTRAN, which was used by government agencies in the United States and was employed by the Commission of the European Union for translating large volumes of texts (it has been the official machine translation system of the European Commission since 1976) (Yang and Lange 276). It was available for Dutch, English, French, German, Portuguese, Russian, and Spanish. The output of machine translation was not intended for the general public; it was used by scholars and a small group of specialists who needed immediate access to information and paid little attention to the raw form of the translated documents.
The demand for machine translation systems grew in the following decades and reached the powerful commercial sphere, where “the demand was now for cost-effective machine-aided translation systems that could deal with commercial and technical documentation in the principal languages of international commerce” (Hutchins 2009).
The new millennium saw powerful translation engines running on personal computers and available remotely on the Internet (online translation services such as Babelfish and Google Translate), the widespread use of translation tools and translation memories (whose adoption by large companies led to rapid advancement in the area of software localization), and the shift from rule-based systems to systems based purely on statistical methods.