Pages

Showing posts with label discussion. Show all posts

Tuesday, January 13, 2026

Lexical versus interpretive fidelity--DRAFT

Machine translations from English to Czech have proven disappointing, and the problem is not unique to this language pair.

Even though Czech speakers depend on DeepL and a handful of other translation services, they can judge the quality of machine translation. For most purposes, though, once the Czech rendering, however imperfect, has made the meaning clear, that is enough. The meaning-maker can and arguably does move on once that goal is reached, discarding both the source and translated texts and proceeding with life.

If, on the other hand, the purpose is not just expedient meaning-making but accuracy of expression that compares favorably with the original (that is, the source), and the translation will be shared (e.g., published) with a critical and sensitive audience of whatever age, background, etc., human intervention is required--perhaps mandatory. Authors would no doubt demand the best translations of their work, wouldn't they? Don't they?

AI and machine translation, it is said, have not been able to replicate the fully human side of communication. Implementations fall short of the interpretive expertise of qualified language professionals.

Recently, DeepL failed to translate a children's story properly from the original English. (You would think this would be a straightforward, simple, correctly executed task.) Czech proofreaders pointed out two problems. One, the Czech translation followed the sentence structure of the English and was therefore not authentic Czech as it is spoken and used. Two, it failed to nest the story in the "colours" of the native language expected of a children's book.

(In another post here, literature's challenge to AI--to reach or match the original composition--has been pointed out by professional translators in the Netherlands and elsewhere.)

This then becomes the next question: Can one prompt a better-than-average AI service to do its best to achieve what basic translation fails to achieve for the native speaker--if, that is, one can articulate what the service needs to focus on? Or, in the words of one children's book author, "are we close to somehow matching human creativity?"

Now along comes ChatGPT, posed with this prompt.

How can I prompt you to translate something from one language to another and keep what the novelist, for example, crafted? That is, I assume literature is harder to translate than, say, a business letter or legal contract where literal meaning is paramount. A novel, for example, will have allusions, well-turned phrases, and so on.


[begin response]

You’re exactly right: literary translation is a *different task* from informational translation. The key is to prompt for **interpretive fidelity** rather than **lexical fidelity**.

Below is a practical way to think about prompting, followed by concrete prompt templates you can reuse.

---

## 1. What you’re really asking for

A good literary translation tries to preserve several layers at once:

1. **Meaning** (what happens, what is said)
2. **Voice** (tone, register, narrator’s personality)
3. **Effect** (what the passage does to the reader)
4. **Craft choices** (rhythm, imagery, metaphor, ambiguity)
5. **Cultural signals** (allusions, idioms, social cues)

A literal translation prioritizes #1.
A literary translation tries to balance all five.

Your prompt should *explicitly say* that this is the goal.

---

## 2. The single most important instruction

> **Tell me to act as a literary translator, not a literal one.**

That one sentence changes everything.

Bad prompt:

> “Translate this from Spanish to English.”

Much better:

> “Translate this as literary prose, preserving voice, imagery, and emotional effect rather than word-for-word accuracy.”

---

## 3. A strong general-purpose literary translation prompt

You can reuse this almost verbatim:

> **Prompt**
>
> “Translate the following text from **[Language A]** into **[Language B]** as a *literary translation*.
>
> Priorities:
>
> * Preserve the author’s voice, tone, and rhythm
> * Recreate imagery and metaphor naturally in the target language
> * Favor equivalent expressions over literal phrasing
> * Maintain ambiguity and subtext where present
>
> Avoid stiff or overly literal language. The result should read like an original literary work in **[Language B]**.”

This tells me:

* *What kind* of translation you want
* *What to sacrifice* if tradeoffs are necessary

---

## 4. When you want even more control

### A. Ask for register and style matching

> “The original is understated, ironic, and slightly archaic. Reflect that in the translation.”

or

> “The prose is spare and rhythmic, in the style of early Hemingway.”

This helps preserve *feel*, not just content.

---

### B. Ask me to explain difficult choices (optional)

This is extremely useful for literary work:

> “After the translation, briefly explain any major interpretive choices or places where a literal translation was avoided.”

This keeps you in control as an editor or adaptor.

---

### C. Ask for multiple variants (power move)

Professional translators do this.

> “Provide two translations:
>
> 1. A close literary translation
> 2. A freer, more idiomatic version
>    Briefly note the differences.”

This lets you choose phrasing the way a novelist would.

---

## 5. Handling allusions, wordplay, and untranslatables

You can explicitly instruct how to treat them:

> “If the text contains wordplay or cultural references that don’t transfer cleanly, adapt them creatively rather than explaining them.”

—or—

> “Preserve the original reference even if it feels foreign; do not domesticate it.”

Those are *opposite* philosophies, and translators debate them endlessly. Your prompt decides.

---

## 6. If the author matters

If the text is clearly in dialogue with a known tradition or author:

> “Translate this as if the author were writing originally in **[target language]**, but without modernizing the voice.”

or

> “This is 19th-century prose; avoid contemporary idioms.”

---

## 7. A minimal but effective version

If you want something short you can type quickly:

> “Translate this literarily, not literally—preserve voice, imagery, and emotional effect.”

That alone already works surprisingly well.

---

## 8. One honest limitation (important)

Even the best literary translation is **an interpretation**. There is no neutral or “correct” version—only defensible choices. The more guidance you give about tone, era, and priorities, the closer the result will be to what *you* value in the original.

If you like, you can paste a short literary passage (even a paragraph), and I can:

* show how different prompts change the outcome, or
* translate it and annotate the craft decisions line by line.

That’s often the most illuminating way to see this in action.

[end response]
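For experimentation, the reusable template from section 3 of the response can be wrapped in a small helper so the same literary-translation instructions apply to any passage. This is a minimal sketch: the function and parameter names are illustrative, not part of any real translation API, and the actual call to an AI service is left out.

```python
# Illustrative helper: fill the section-3 literary-translation
# template for a given passage and language pair. Names are
# hypothetical; only the prompt text comes from the response above.

TEMPLATE = """Translate the following text from {src} into {tgt} as a literary translation.

Priorities:
- Preserve the author's voice, tone, and rhythm
- Recreate imagery and metaphor naturally in the target language
- Favor equivalent expressions over literal phrasing
- Maintain ambiguity and subtext where present

Avoid stiff or overly literal language. The result should read like
an original literary work in {tgt}.

Text:
{text}"""


def build_literary_prompt(text: str, src: str, tgt: str) -> str:
    """Return the filled-in literary-translation prompt for one passage."""
    return TEMPLATE.format(src=src, tgt=tgt, text=text)


prompt = build_literary_prompt("Once upon a time...", "English", "Czech")
```

The resulting string would then be sent to whichever service one is testing (ChatGPT, DeepL with a preamble where supported, etc.), keeping the instructions identical across services so their outputs can be compared.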

Given this comprehensive response and guidance, had the children's story author and colleagues alluded to above known and used it, would the machine translation have (almost) met the experts' assessment for a final copy?

That is the next project, which will still (in my opinion!) require competent language users to sign off on final versions.

Here is the agenda for the next developments in inter-linear translation conversion tools.

Tuesday, December 23, 2025

Fable Cottage

See https://inter-linear.blogspot.com/2025/12/fable-cottage-subtitled-text-online.html for the link to an online site of subtitled text readings. Well worth a look; hang out for a while to experience all their free material, if not subscribe. A great application of interlinear-like reading, learning, etc.
jkm

Saturday, September 20, 2025

Environment and AI solutions


20.09.25, ref. per Gemini

Question. Which is more friendly to the environment, for example in terms of electricity usage, carbon footprint, etc., client-side or server-side AI solutions? 

Response. In general, client-side AI solutions are more environmentally friendly than server-side solutions, but the answer is complex and depends on many factors. Client-side solutions, which run on a user's local device, eliminate or reduce the need for massive, energy-intensive data centers for inference (the use of a trained AI model). However, the training of these models, regardless of where they are ultimately deployed, remains a major environmental concern.

***

### Server-Side AI: The Environmental Burden

Server-side AI relies on large-scale data centers, which are one of the biggest contributors to the environmental footprint of AI. These data centers consume immense amounts of electricity for three main purposes:

* **Training and Inference:** The process of training a large language model (LLM) like GPT-4 can consume a staggering amount of energy. While a single query (inference) uses a tiny fraction of that energy, the sheer volume of billions of daily queries for popular models adds up to a massive cumulative carbon footprint.
* **Cooling:** Data centers generate a tremendous amount of heat. To prevent hardware from overheating, they require sophisticated and highly energy-intensive cooling systems, which can also consume millions of gallons of water annually.
* **Idle Capacity:** To ensure high availability and responsiveness, data centers must maintain a degree of idle capacity, which still consumes energy while waiting to handle traffic spikes.

The carbon footprint of server-side AI also depends on the **location** of the data center and its energy source. A data center running on a grid powered by renewable energy like wind or solar will have a significantly lower carbon footprint than one relying on fossil fuels.

### Client-Side AI: The Greener Alternative

Client-side, or on-device, AI runs directly on a user's device, such as a smartphone or laptop. Because the processing happens locally, it avoids the energy consumption and carbon emissions associated with data transmission and large data centers.

While client-side AI is more efficient for **inference**, it's crucial to understand a few key points:

* **Training is still a major factor:** The initial training of the AI model, whether it's for on-device or server-side use, is the most energy-intensive part of the process. This training is almost always done in a data center.
* **Limited Capabilities:** On-device models are typically smaller and less powerful than their server-side counterparts. This is because consumer devices lack the massive computational power and memory of a data center.
* **Device-level Consumption:** While they bypass the data center, on-device models still use a device's battery and CPU, which consumes energy. This can lead to faster battery drain and generate some heat.

### A Nuanced Comparison

Ultimately, the choice between client-side and server-side AI for a given application is a trade-off between performance and environmental impact. For computationally demanding tasks like training a large generative AI model, a server-side solution is currently the only option. For tasks that can be performed by smaller models, such as basic text completion or image classification, client-side AI is a much more energy-efficient and environmentally responsible choice.

As on-device hardware becomes more powerful and AI models are optimized to be smaller and more efficient, we'll likely see a shift toward more client-side solutions, reducing the reliance on energy-hungry data centers for many everyday AI applications.

Musing futures

Long-form interlinear translations are available online, that is, ones not having to do with biblical or scriptural texts. Fiction mostly, but some non-fiction can be found. The interlinear examples, many published for online or physical-book reading, show words/phrases aligned with the source text. This is a first step toward getting closer and closer to contextual meaning, e.g., based on historical understanding of language items, hermeneutics, the etymology of a word, ngrams (frequency of word usage over time), etc.

Which of the current tools included in the set of experiments shows promise for creating long-form interlinear translations? (Of course, here we refer to the softened understanding of interlinear, here termed inter-linea.) For discussion purposes, see https://jkm.free.nf/inter-linea/w4word.html.

A product of the above word/phrase approach, based on a character-count limitation per line, is shown below--without entirely correct alignment and with no glossing, but serviceable, I believe. If so, a more rigorous translation service, something like DeepL, might be employed. The word/phrase approach (versus others on the same site) might then function as a prototype for long-form translations . . . to publish?
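The character-count chunking behind this approach can be sketched as follows, assuming lines break only at word boundaries. This is a minimal illustration, not the actual code behind w4word.html, and the translation step itself (DeepL or another service) is left out; the `chunk_source` and `interleave` names are made up for the sketch.

```python
# Sketch: split the source into lines of at most `limit` characters,
# breaking only at word boundaries, so each line can be paired with
# its machine-translated counterpart in an inter-linea display.
import textwrap


def chunk_source(text: str, limit: int = 50) -> list:
    """Split text into word-boundary lines of at most `limit` characters."""
    normalized = " ".join(text.split())  # collapse whitespace first
    return textwrap.wrap(normalized, width=limit)


def interleave(source_lines, target_lines):
    """Pair each source line with its translation, a blank line between pairs."""
    return "\n\n".join(
        f"{src}\n{tgt}" for src, tgt in zip(source_lines, target_lines)
    )


lines = chunk_source(
    "In essence, Bohr and Heisenberg argued that science "
    "had finally caught up with the conceptual problems"
)
```

Feeding each resulting line to a translation service and then calling `interleave` yields exactly the alternating display shown below.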

In essence, Bohr and Heisenberg argued that science had finally caught up with the conceptual problems involved in the description of reality that philosophers had been warning of for centuries. Bohr is quoted as saying: "There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature." This vaguely positivist statement was echoed by Heisenberg: "[W]e have to remember that what we observe is not nature in itself but nature exposed to our method of questioning."

Their broadly anti-realist "Copenhagen interpretation" "denying" that the wave function represents the real physical state of a quantum system quickly became the dominant way of thinking about quantum mechanics. More recent variations of such anti-realist interpretations suggest that the wave function is simply a way of "coding" our experience, or our subjective beliefs derived from our experience of the physics, allowing us to use what we've learned in the past to predict the future.

The word/phrase display.

In essence, Bohr and Heisenberg argued that
Im Wesentlichen argumentierten Bohr und Heisenberg das

science had finally caught up with the conceptual
Die Wissenschaft hatte endlich das Konzept eingeholt

problems involved in the description of reality
Probleme, die an der Beschreibung der Realität verbunden sind

that philosophers had been warning of for
dass Philosophen vor gewarnt hatten

centuries. Bohr is quoted as saying: "There is no
Jahrhunderte. Bohr wird mit den Worten zitiert: "Es gibt keine

quantum world. There is only an abstract quantum
Quantenwelt. Es gibt nur ein abstraktes Quantum

physical description. It is wrong to think that
Physische Beschreibung. Es ist falsch zu denken, dass

the task of physics is to find out how nature is.
Die Aufgabe der Physik ist es, herauszufinden, wie die Natur ist.

Physics concerns what we can say about nature."
Die Physik betrifft das, was wir über die Natur sagen können. "

This vaguely positivist statement was echoed by
Diese vage positivistische Aussage wurde von wiederholt von

Heisenberg:. "[W]e have to remember that what we
Heisenberg:. "[Wir müssen uns daran erinnern, was wir

observe is not nature in itself but nature
beobachten ist nicht die Natur an sich, sondern die Natur

exposed to our method of questioning."
unserer Befragungsmethode ausgesetzt. "

Their broadly anti-realist "Copenhagen
Ihr weitläufig anti-realistischer "Kopenhagen"

interpretation" "denying" that the wave function
Interpretation "" leugnen ", dass die Wellenfunktion fungiert

represents the real physical state of a quantum
repräsentiert den wirklichen physischen Zustand eines Quanten

system quickly became the dominant way of
Das System wurde schnell zur dominierenden Art von

thinking about quantum mechanics. More recent
Denken Sie an die Quantenmechanik. Neuer

variations of such anti-realist interpretations
Variationen solcher antirrealistischer Interpretationen

suggest that the wave function is simply a way of
Schlagen Sie vor, dass die Wellenfunktion einfach eine Möglichkeit ist

"coding" our experience, or our subjective
"codieren" unsere Erfahrung oder unsere subjektive

beliefs derived from our experience of the
Überzeugungen abgeleitet aus unserer Erfahrung der

physics, allowing us to use what we've learned in
Physik, damit wir das verwenden können, was wir gelernt haben

the past to predict the future.
die Vergangenheit, die Zukunft vorherzusagen.

An enhancement to this display might be to show the target before the source, where one wanted to create an inter-linea for more than personal use. One of the two lines might also be set in a smaller font or colored in some way to reduce distraction from, or dependence on, one language rather than the other.
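That enhancement could be sketched as a small rendering step: emit the target line first, then the source line wrapped in a tag that a stylesheet can shrink or mute. A sketch under stated assumptions only; the `render_pair` function and the `src-line` class name are invented for illustration.

```python
# Sketch: target-before-source pair, with the source line wrapped in
# a <span> so CSS (e.g., a smaller font or a muted color) can
# de-emphasize it. The "src-line" class name is hypothetical.
def render_pair(target: str, source: str) -> str:
    """Render one inter-linea pair, translation first, styled source second."""
    return f'{target}\n<span class="src-line">{source}</span>'


html = render_pair(
    "Im Wesentlichen argumentierten Bohr und Heisenberg",
    "In essence, Bohr and Heisenberg argued that",
)
```

A stylesheet rule such as `.src-line { font-size: smaller; color: #888; }` would then apply the de-emphasis uniformly across the whole long-form text.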