
The History of Machine Translation

TECHIN 522 A Wi 19: The History And Future Of Technology

Xingyu Pan

03/16/2019

Introduction

In this research paper, I will explore the history of machine translation (MT), an important branch of Natural Language Processing (NLP). NLP, also known as computational linguistics, integrates linguistics with AI to improve the interaction between computers and human (natural) languages [1]; in other words, it lets computers understand the words we say and write.

NLP developed from a single problem. As researchers dug into that problem, they defined and discovered related tasks, and those tasks in turn gave rise to more tasks; all of these tasks and problems together constitute NLP [2]. Among them, machine translation is the original one. The history of machine translation dates back to the 1950s, the beginning of the Cold War. In 1954, the IBM 701 computer translated more than 60 Russian sentences into English [3]. Although those sentences were carefully selected and the translation method was no better than a phrasebook, the experiment marked the birth of NLP and MT. In the same year, the journal Mechanical Translation, the ancestor of Computational Linguistics, started publishing.

After that, many countries, including Japan, Germany, Canada, Britain, France, and China, started their own research on machine translation, mainly for military reasons. In the very beginning, most scholars approached this research with great enthusiasm. One reason is that they were engaging with a brand-new but difficult task that combined linguistic knowledge with computer algorithms. Besides, it was their first time using computers, digital machines, to process non-digital data [1]. At that time, the computer age was less than 20 years old. The lack of higher-level languages like C/C++, extremely slow processing speeds, very limited storage, and restricted access to machines brought many difficulties to the research. In 1967, even the best machines needed more than 7 minutes to analyze a long sentence. In 1966, the famous US ALPAC report claimed that machine translation was expensive, inaccurate, and unpromising [5], which led to funding cuts for MT research in the US over the following decade.

From 1970 to the present, with the development of AI algorithms, the rise of computational power, and access to more data, machine translation has stepped through three stages: rationalism (1968–1990), empiricism (1990–2015), and deep learning (2015–) [3]. These stages are quite similar to the three stages of AI described by Henry Brighton [4]: Symbolic AI, Connectionist AI, and New AI. The graph below [5] shows important milestones of machine translation through those stages, which I will unfold in detail in the following sections.

[Figure: major milestones of machine translation across the rationalism, empiricism, and deep-learning stages]

Rationalism

From 1960 to the late 1980s, "the belief that knowledge of language in the human mind is fixed in advance by genetic inheritance" dominated most NLP research; these approaches have been called rationalist ones [3]. Machine translation is undoubtedly one of them. In the early 1970s, the first commercial machine translation systems came out. Such systems, based on large collections of linguistic rules, were classified as rule-based machine translation (RBMT) [6]. In 1984, the Japanese scientist Makoto Nagao proposed example-based machine translation (EBMT), which uses parallel texts of the source and target languages as its main knowledge at run time [7].

Rule-based machine translation

Rule-based machine translation (RBMT) systems translate using linguistic rules about the source and target languages. These rules allow words to be placed in different positions in a sentence and to express different meanings in different contexts. RBMT applies a number of different rules at three stages: analysis, transfer, and generation. These include morphological analyzers, part-of-speech taggers, syntactic parsers, bilingual dictionaries, transfer rules, morphological generators, reordering rules, etc., all developed by programmers and language experts with great effort [6][8]. EUROTRA, Systran, PROMT, and the Japanese MT systems were the most famous RBMT systems of that era. Today, other common systems include GramTrans and Apertium [5][8].

Because RBMT is inflexible and depends heavily on the number of linguistic rules, it is rarely used now except in specific cases such as weather-report translation. There are three types of RBMT: direct machine translation, transfer-based machine translation, and interlingual machine translation.

Direct machine translation

Direct machine translation is the very first version of RBMT, using very simple language rules. Given an English sentence, for example, the system first splits the sentence into words, then looks the words up in a bilingual dictionary to translate them into the target language, and finally corrects the morphology and harmonizes the syntax using simple rules to make the translated text look correct. Such an approach is so unreliable that it can hardly produce reasonable translations; currently, no system adopts this approach for translation tasks.
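
To make this concrete, here is a minimal Python sketch of the word-by-word pipeline: split, dictionary lookup, then one crude reordering rule. The tiny English-to-Spanish dictionary, the adjective set, and the single reordering rule are illustrative assumptions, not a reconstruction of any historical system.

# A minimal sketch of direct machine translation: split, look up each word
# in a bilingual dictionary, then apply one crude reordering rule.
# The dictionary, adjective set, and example are illustrative assumptions.

BILINGUAL_DICT = {"the": "el", "black": "negro", "cat": "gato", "sleeps": "duerme"}
ADJECTIVES = {"black"}  # words that must follow the noun in the target language

def direct_translate(sentence: str) -> str:
    words = sentence.lower().rstrip(".").split()      # 1. split into words
    out = [BILINGUAL_DICT.get(w, w) for w in words]   # 2. dictionary lookup
    for i in range(len(words) - 1):                   # 3. "harmonize syntax":
        if words[i] in ADJECTIVES:                    #    English adjective-noun
            out[i], out[i + 1] = out[i + 1], out[i]   #    becomes noun-adjective
    return " ".join(out).capitalize() + "."

print(direct_translate("The black cat sleeps."))  # -> "El gato negro duerme."

Even in this toy, every new word-order pattern needs another hand-written rule, which hints at why the approach breaks down on real text.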

Transfer-based machine translation

Compared with direct machine translation, transfer-based machine translation no longer focuses on translating individual words, but on the grammatical structure of sentences. The process can also be divided into three steps. First, the system analyzes the original sentence and parses it into grammatical structures. Then, it transfers those structures into the target language using a pre-built set of linguistic rules, i.e., a kind of structural dictionary. The last step is morphological correction [9]. Nowadays, transfer-based machine translation is the most widely used method of RBMT, but it still has many disadvantages and problems. A significant disadvantage is that it requires far more linguistic rules than direct machine translation, because any language has many more grammatical structures than single words [5].
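
A minimal Python sketch of the three steps might look like the following; the Clause structure, the English-to-German lexicon, and the fixed subject-verb-object grammar are illustrative assumptions rather than an actual RBMT rule set.

# A toy sketch of the three transfer-based stages: analysis, transfer,
# and generation. All rules and vocabulary here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Clause:  # the grammatical structure produced by analysis
    subject: str
    verb: str
    obj: str

LEXICON = {"I": "ich", "eat": "esse", "apples": "Äpfel"}

def analyze(sentence: str) -> Clause:
    # Analysis: parse a fixed subject-verb-object pattern into a structure.
    subject, verb, obj = sentence.rstrip(".").split()
    return Clause(subject, verb, obj)

def transfer(c: Clause) -> Clause:
    # Transfer: map each constituent through the bilingual rule set.
    return Clause(LEXICON[c.subject], LEXICON[c.verb], LEXICON[c.obj])

def generate(c: Clause) -> str:
    # Generation: realize target-language word order and fix capitalization.
    return f"{c.subject.capitalize()} {c.verb} {c.obj}."

print(generate(transfer(analyze("I eat apples."))))  # -> "Ich esse Äpfel."

Note that the transfer rules are specific to one language pair; covering many pairs multiplies the rule-writing effort, which is exactly the scaling problem the interlingual approach below tries to avoid.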

Interlingual machine translation

Interlingual machine translation systems consist of two "translation" processes. First, the original text is transformed into an interlingua, an abstract language-independent representation. Next, that intermediate representation is converted into any target language. Each process can be regarded as a kind of transfer-based machine translation. This idea is not a new one: it dates back to the 17th century, when Descartes and Leibniz proposed theories of how to create dictionaries using universal numerical codes [10].
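
The two-stage idea can be sketched in Python as follows; the concept tags and the three small lexicons are illustrative assumptions, since real interlinguas are far richer semantic representations.

# A toy sketch of interlingual MT: English -> abstract concepts -> French
# or Spanish. The concept inventory and lexicons are illustrative assumptions.

EN_TO_CONCEPT = {"water": "LIQUID_WATER", "boils": "EVENT_BOIL"}

CONCEPT_TO_TARGET = {
    "fr": {"LIQUID_WATER": "l'eau", "EVENT_BOIL": "bout"},
    "es": {"LIQUID_WATER": "el agua", "EVENT_BOIL": "hierve"},
}

def to_interlingua(sentence: str) -> list:
    # First "translation": source text -> language-independent concepts.
    return [EN_TO_CONCEPT[w] for w in sentence.lower().rstrip(".").split()]

def from_interlingua(concepts: list, lang: str) -> str:
    # Second "translation": concepts -> any supported target language.
    words = [CONCEPT_TO_TARGET[lang][c] for c in concepts]
    return " ".join(words).capitalize() + "."

concepts = to_interlingua("Water boils.")
print(from_interlingua(concepts, "fr"))  # -> "L'eau bout."
print(from_interlingua(concepts, "es"))  # -> "El agua hierve."

The appeal is that adding a new language requires only one mapping to and from the interlingua, rather than a separate rule set for every language pair.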

...
