Like-Blog
Presenting the most interesting translation solutions
Why Like-Blog? Now, first of all, this blog is a blog that you should like (and read regularly) – at least if you are interested in translation. Secondly, the topic discussed here is one in which the meaningful likeness between a text and its translation in the language pair English-German plays a key role. On this page, I will take a close look at some interesting translation solutions that I have come across in the course of my work as a translator and translation scholar.
A translation solution is only as good as the arguments that support it. This means that any translation criticism, whether positive or negative, needs to be justified. The quality of a translation solution only becomes apparent when we compare it with other possible solutions in a given translation situation. Therefore, a translation critic should not only say why a translation solution is bad, but also demonstrate what a better solution might look like. I will try to stick to these principles of translation criticism. So if you have any questions regarding my line of argument or if you disagree, please let me know your opinion by phone at +49 4171 6086525 or by e-mail to bittner@businessenglish-hamburg.de. So much for the introduction. I hope you’ll enjoy reading this blog!
Artificial intelligence (December 2024)
No, this blog post is not about the translation errors that occur again and again in automatic translations produced with artificial intelligence. It is about three small experiments that I conducted in the summer of 2024 as part of a training course. One topic of this course was the question of how AI applications can be used for language teaching. In this context, I tested whether such applications can correctly recognise the grammatical structure of a text.
Language AI applications generate their output on the basis of the probability of language patterns occurring in the countless corpora on which these applications have been trained. It is, therefore, to be expected that, in cases of grammatical ambiguity, the variant that occurs more frequently will be favoured. Were these expectations fulfilled in my experiments?
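To give a rough idea of what such probabilities look like in practice, here is a minimal sketch in Python – no part of my experiments, and the two example sentences are invented purely for the illustration. It uses the freely available GPT-2 model from the Hugging Face transformers library to show how “expected” a language model finds a given string of words:

```python
# Illustration only: ask a small, freely available language model (GPT-2) how
# probable it finds two invented example sentences, measured as the average
# log-probability per token. Higher (less negative) means "more expected".
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_prob(text: str) -> float:
    """Average log-probability per token that the model assigns to the text."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean negative log-likelihood
    return -loss.item()

sentences = [
    "The students are being immersed in a new environment.",    # frequent pattern
    "The learning aims are being immersed in an environment.",  # unusual pattern
]
for sentence in sentences:
    print(f"{avg_log_prob(sentence):8.3f}  {sentence}")
```

Applications such as ChatGPT are, of course, far larger and more sophisticated, but the underlying principle – favouring what is statistically expected – is the same.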
The starting point for the experiments was the following sentence:
“It can be incredibly difficult for a disabled student to get a wheelchair on to a salt marsh,” he says. “But if the learning aims are being immersed in an environment, and making discoveries, VR can achieve that.” (Business Spotlight 6/19, p. 64)
If you read this blog regularly, you may recognise this sentence. Its grammar is ambiguous at one point – with a more likely and a less likely interpretation. However, for the sentence to make sense, the less likely grammatical option must be chosen. I explained the problem in detail in my blog post of November 2019. The issue is whether “are being immersed” is a verb form (i.e., the passive progressive form of the verb “immerse” in the present tense), or whether “being immersed” is to be understood as a gerund (a verb used as a noun), with “are” being the finite verb. Both options are grammatically possible, but only the second option makes sense semantically. As implied in my blog post of November 2019, and as I have noticed several times myself, the first option seems to be grammatically more natural to native speaker readers – at least until they realise that this option makes little sense.
In my first experiment, I tested how ElevenLabs handles this sentence. ElevenLabs can be used to convert written text into an audio file; the result is an audio text that is absolutely perfect in terms of pronunciation and intonation – as if spoken by a native speaker. I had ElevenLabs convert the entire paragraph, of which the above sentence is the last, into an audio text, choosing a male American voice for the output. The semantically correct intonation would require a small pause between “are” and “being immersed”. However, there was no such pause: the (presumably more likely) first option was chosen, although it makes no sense.
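For readers who are curious about the technical side: a conversion like this can also be scripted. The following is only a sketch – it assumes the publicly documented text-to-speech endpoint of the ElevenLabs REST API, and the API key and voice ID are placeholders, not the settings I used.

```python
# Sketch only: convert a paragraph into an MP3 file via the ElevenLabs
# text-to-speech REST API (assumed endpoint; key and voice ID are placeholders).
import requests

API_KEY = "YOUR_ELEVENLABS_API_KEY"        # placeholder
VOICE_ID = "YOUR_MALE_AMERICAN_VOICE_ID"   # placeholder for the chosen voice

text = (
    '"It can be incredibly difficult for a disabled student to get a wheelchair '
    'on to a salt marsh," he says. "But if the learning aims are being immersed '
    'in an environment, and making discoveries, VR can achieve that."'
)

response = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": API_KEY, "Content-Type": "application/json"},
    json={"text": text},
    timeout=60,
)
response.raise_for_status()

with open("paragraph.mp3", "wb") as f:
    f.write(response.content)  # the endpoint returns the audio data directly
```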
For my second experiment, I had ChatGPT translate the relevant paragraph into German. This translation will not be discussed in detail here; the important thing is that the sentence in question was translated correctly. This result was rather unexpected, as I had assumed that (as with ElevenLabs) the first grammatical option would be favoured. Had ChatGPT correctly grasped the meaning of the source sentence and, therefore, produced a correct translation?
In my third experiment, I continued to probe the capabilities of ChatGPT. With a clear and friendly prompt in English, I asked the AI application to explain the grammatical structure of the last sentence to me, focussing in particular on main and subordinate clauses as well as sentence constituents (subject, verb, object). The result was astonishing: ChatGPT identified only “are” as the finite verb and gave “being immersed” the syntactic status of an object. Well, strictly speaking, it is a subject complement, but that is not important for our purposes. What matters is that “are being immersed” was not identified as the verb of the if-clause.
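For anyone who would like to repeat this third experiment programmatically rather than in the chat window, here is a minimal sketch using the OpenAI Python library – the model name and the exact wording of the prompt are illustrative, not the settings of my experiment.

```python
# Sketch only: ask a GPT model to explain the grammatical structure of the
# sentence. The model name and the prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

sentence = ("But if the learning aims are being immersed in an environment, "
            "and making discoveries, VR can achieve that.")

prompt = (
    "Please explain the grammatical structure of the following sentence, "
    "focussing in particular on main and subordinate clauses as well as "
    f"sentence constituents (subject, verb, object):\n\n{sentence}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```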
Whoever thinks that ChatGPT is an AI application that can be relied on – at least in a language context – should take note: if ChatGPT receives two identical prompts from two computers at the same time, it generates different output.
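This non-determinism is easy to observe for yourself: because the model samples from a probability distribution, sending exactly the same request twice will usually produce differently worded answers. A minimal sketch, again with an illustrative model name and prompt:

```python
# Sketch only: send the identical prompt twice and compare the two answers.
# Because the model samples from a probability distribution, the answers
# will usually differ in wording, and sometimes in substance.
from openai import OpenAI

client = OpenAI()

prompt = ("Explain the grammatical structure of this sentence: "
          "'But if the learning aims are being immersed in an environment, "
          "and making discoveries, VR can achieve that.'")

answers = []
for _ in range(2):
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    answers.append(response.choices[0].message.content)

print("Identical answers." if answers[0] == answers[1] else "Different answers.")
```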