Pages

Sunday, March 8, 2026

I wonder?

My Guru [a philosophy student/teacher friend],


So I have been reading from your library, and I must say some of this stuff is very heady; that is, one could easily get lost in mere exploration, as well as in just trying to discern what some of these men(!) are trying to say.

Now, whether you are aware of it or not, I have been working on an interlinear translation project for over a year. Put this together with the above and I get:

What if we had an application that took sentences or paragraphs from one of the works of one of your esteemed men, translated from the language given, for example English, and with the assistance of an expert or [group of] curated commentators gave paraphrases or glosses of the words presented?

This would be like an interlinear "translation" of English into explanatory, more accessible English, plus, where needed, a "gloss" of particular word/phrase meanings.

Does an expert or group of curated commentators exist to help accomplish this task? Yes, with qualification. An LLM, a large language model, that thing AI chatbots rely upon to answer questions and prompts, is hypothetically an expert with ready access to huge amounts of words from all kinds of sources. It could answer like an expert, though one whose answers would need to be proofread to make sure no hallucination or other error/omission is creeping into the translated response . . . I know, an omission creeping in is a contradiction, but you get the idea.

Is there a group of curated commentators available in the AI world to provide the same or finer-grained responses, suitable for elucidating and making more accessible the sometimes difficult-to-understand/decode male marvels of today and yesterday? I believe so, or at least one that claims that capability. Perplexity is perhaps one such tool.

Can one ask Perplexity to decode sentence by sentence or paragraph by paragraph a given source text using say, a well-curated dictionary or encyclopedia? I don't see why not. Here is an example with which we can try this experiment.

I term its matter\ but that which so determines * the manifold of appearance that it allows of being ordered 2 in certain re- lations, I term the form of appearance. That in which alone the sensations can be posited and ordered in a certain form, cannot itself be sensation; and therefore, while the matter of all appearance is given to us a posteriori only, its form must lie ready for the sensations a priori in the mind, and so must allow of being considered apart from all sensation.

Given the above quotation, write a line by line translation into simpler English using as necessary the Stanford Encyclopedia of Philosophy as a reference. Format your response as an interlinear document where each source sentence is followed below it with the translation/gloss.

Here is the output from Perplexity.

[begin response]

"I term its matter"
→ I call this the **matter** of the appearance.

"but that which so determines * the manifold of appearance"
→ But that which structures the many elements of the appearance.

"that it allows of being ordered 2 in certain re- lations,"
→ In such a way that they can be arranged in specific relations to one another,

"I term the form of appearance."
→ I call this the **form** of the appearance.

"That in which alone the sensations can be posited and ordered in a certain form,"
→ That in which alone sensations can be placed and organized into a definite pattern,

"cannot itself be sensation;"
→ Cannot itself be a sensation,

"and therefore, while the matter of all appearance is given to us a posteriori only,"
→ And therefore, although the matter of every appearance is given to us only from experience (a posteriori),

"its form must lie ready for the sensations a priori in the mind,"
→ Its form must already be present in the mind before experience (a priori), ready to structure incoming sensations,

"and so must allow of being considered apart from all sensation."
→ And so this form can be thought about independently of any particular sensations.

[end response]

The sources for the above response are the following.

1. https://www.academia.edu/90445815/Kants_Ontology_of_Appearances_and_the_Synthetic_Apriori
2. http://depts.washington.edu/lsearlec/TEXTS/KANT/CR_PURE_R/2-AESTH.HTM
3. http://web.stanford.edu/class/history34q/readings/Kant/CritiqueAesthetic.html
4. https://plato.stanford.edu/entries/kant-transcendental-idealism/
5. https://en.wikisource.org/wiki/Critique_of_Pure_Reason/Volume_1/Part_1
6. https://hume.ucdavis.edu/phi175/appearance.html
7. https://web.stanford.edu/class/history34q/readings/Kant/CritiqueAesthetic.html
8. https://plato.stanford.edu/entries/kant-aesthetics/
9. https://en.wikisource.org/wiki/Critique_of_Pure_Reason_(Meiklejohn)/Volume_1/Part_1
10. https://plato.stanford.edu/archives/fall2008/entries/kant-aesthetics/

Now, as my philosophic guru, how did the experiment work out? Or just ignore this sidetrack adventure and discard it. No obligation to respond.

For my part, I will continue to play with this idea. Beats other diversions to avoid Alzheimer's.

Side note: Perplexity nailed the source which was not given in the question/prompt.

With respect,
an unrepentant student

---
PS AI did not write or improve or approve of this missive.

[Comment. The source text should have been prepared for machine processing. For example, how can the machine interpret '/' or '*'? In addition, the prompt says both line by line and by sentence; which should it be? Sentence level should be the choice, for the translation is to make sense of sense units, and in a difficult passage, that is where the reader usually trips up. Of course, s/he can use a dictionary to decode the smaller bits. If the philosopher is using a term in a special sense, the dictionary or word/phrase gloss is not always helpful. Here the better machine translator, not a person in this case, would pick up the fact that something is being used in a special, perhaps wholly unique, sense. Common philosophical discourse uses a priori and the like, and these need not be interpreted, although in the above they are, with the common term for the genre given in parentheses, which helps understanding.]

Thursday, February 5, 2026

Whole French Language (reading and practice)

 

For an interesting example of interlinear reading for language acquisition, see https://www.youtube.com/watch?v=svUZWwqZkC4. Presentation based on The whole French language by Robertson, T. (Théodore), 1803-1871.

 
Interlinear and bilingual texts are great ways to learn/acquire languages, since they make the input in the target language comprehensible. Thus, one wouldn't be wrong at all to call these books a great source of comprehensible input, a term coined by Stephen Krashen. The book at hand, The Whole French Language, 650+ pages in total, written by T. Robertson in the mid-19th century, consists of such interlinear and bilingual text enriched with multiple practical exercises, grammar lessons, and a synopsis of the French language attached to the end of the book.
 

For more on reading to acquire L2 based on theory and some research, see Stephen Krashen via these links.

https://en.wikipedia.org/wiki/Stephen_Krashen
https://en.wikipedia.org/wiki/Input_hypothesis 

Monday, January 26, 2026

Summary to date*

 
interlinear displays

Bilingual books, why not?

An AI-generated answer on the conventional bilingual book format.

Bilingual book formats typically use side-by-side text (facing pages or same page), alternating paragraphs, or a story in one language with key terms in another (glossary/footnote style), often with distinct fonts/colors for visual cues, allowing for direct comparison or reading in one language while learning the other. Common structures include the "flipped book" (one language then the other when reversed) and digital versions with selectable text or integrated translations.
 
Common Formats
  • Facing Pages/Side-by-Side: The most common print format, with one language on the left page and its translation on the right, or one above the other on the same page.
  • Alternating Paragraphs: Text flows from one language to the next, often with a visual break or color change to separate them.
  • Flipped/Reversible Book: One language reads from front to back, and when flipped, the other language reads from back to front.
  • Piecemeal/Glossary Style: Primarily in one language, with key words or phrases in the second language, often with explanations in footnotes or an end glossary.
  • E-book Format: Can use a similar structure, often with the second language starting after the first, or interactive elements to show translations.
Design Tips
  • Font & Color: Use distinct fonts (serif/sans-serif) or colors for each language to help readers differentiate them quickly.
  • Proximity: Keep the corresponding texts close together to maintain the connection.
  • Structure: Use neutral chapter/section titles (e.g., "Part 1") to avoid confusion in bilingual tables of contents.
  • Clarity: Ensure paragraphs aren't too long to fit within the layout.
Question: Why not inter-linear as a possibility for formatting a bilingual book? 

Tuesday, January 13, 2026

Lexical versus interpretive fidelity--DRAFT

Machine translations of English to Czech have proven disappointing, not that this isn't a problem with other language pairs.

Even though Czech speakers depend on DeepL and (very few) other translation services, they can judge the quality of machine translation. However, isn't it true that, for the most part, once the meaning has become clear with the aid of the rendered Czech, imperfect as it is, that is enough? The meaning-maker can, and arguably does, move on once that goal is reached, discarding the texts and proceeding with life beyond source and translation.

If the purpose, on the other hand, is not just expedient meaning-making but accuracy of expression that compares favorably with the original (that is, the source), and that translation will be shared (e.g., published) with a critical and sensitive audience of whatever age, background, etc., then human intervention is required, perhaps mandatory. Authors would no doubt demand the best translations of their work, wouldn't they? Don't they?

AI and machine translations, it is said, have not been able to replicate the fully human side of communication. Implementations lack the requisite interpretive expertise of qualified language professionals.

Recently, DeepL failed to translate a children's story properly from the original English. (You would think that this would be a straightforward, simple, and correctly executed task.) Czech proofreaders pointed out two problems: One, the Czech translation followed the sentence structure of the English, and was therefore not authentic Czech as it is spoken and used. Two, it failed to nest the story in "the 'colours' of the native language intended for a children's book."

(In another post here, literature's challenge to AI to reach or match the original composition has been pointed out by professional translators in The Netherlands and elsewhere.) 

This then becomes the next question: Can one prompt a better-than-average AI service to do its best to achieve what basic translation fails to achieve per the native speaker, if, that is, one can articulate what it is that the service needs to focus on? Or, in the words of one children's book author, "are we close to somehow matching human creativity?"

Now along comes ChatGPT with this prompt.

How can I prompt you to translate something from one language to another and keep what the novelist, for example, crafted? That is, I assume literature is harder to translate than, say, a business letter or legal contract where literal meaning is paramount. A novel, for example, will have allusions, well-turned phrases, and so on.


[begin response]

You’re exactly right: literary translation is a *different task* from informational translation. The key is to prompt for **interpretive fidelity** rather than **lexical fidelity**.

Below is a practical way to think about prompting, followed by concrete prompt templates you can reuse.

---

## 1. What you’re really asking for

A good literary translation tries to preserve several layers at once:

1. **Meaning** (what happens, what is said)
2. **Voice** (tone, register, narrator’s personality)
3. **Effect** (what the passage does to the reader)
4. **Craft choices** (rhythm, imagery, metaphor, ambiguity)
5. **Cultural signals** (allusions, idioms, social cues)

A literal translation prioritizes #1.
A literary translation tries to balance all five.

Your prompt should *explicitly say* that this is the goal.

---

## 2. The single most important instruction

> **Tell me to act as a literary translator, not a literal one.**

That one sentence changes everything.

Bad prompt:

> “Translate this from Spanish to English.”

Much better:

> “Translate this as literary prose, preserving voice, imagery, and emotional effect rather than word-for-word accuracy.”

---

## 3. A strong general-purpose literary translation prompt

You can reuse this almost verbatim:

> **Prompt**
>
> “Translate the following text from **[Language A]** into **[Language B]** as a *literary translation*.
>
> Priorities:
>
> * Preserve the author’s voice, tone, and rhythm
> * Recreate imagery and metaphor naturally in the target language
> * Favor equivalent expressions over literal phrasing
> * Maintain ambiguity and subtext where present
>
> Avoid stiff or overly literal language. The result should read like an original literary work in **[Language B]**.”

This tells me:

* *What kind* of translation you want
* *What to sacrifice* if tradeoffs are necessary

---

## 4. When you want even more control

### A. Ask for register and style matching

> “The original is understated, ironic, and slightly archaic. Reflect that in the translation.”

or

> “The prose is spare and rhythmic, in the style of early Hemingway.”

This helps preserve *feel*, not just content.

---

### B. Ask me to explain difficult choices (optional)

This is extremely useful for literary work:

> “After the translation, briefly explain any major interpretive choices or places where a literal translation was avoided.”

This keeps you in control as an editor or adaptor.

---

### C. Ask for multiple variants (power move)

Professional translators do this.

> “Provide two translations:
>
> 1. A close literary translation
> 2. A freer, more idiomatic version
>    Briefly note the differences.”

This lets you choose phrasing the way a novelist would.

---

## 5. Handling allusions, wordplay, and untranslatables

You can explicitly instruct how to treat them:

> “If the text contains wordplay or cultural references that don’t transfer cleanly, adapt them creatively rather than explaining them.”

—or—

> “Preserve the original reference even if it feels foreign; do not domesticate it.”

Those are *opposite* philosophies, and translators debate them endlessly. Your prompt decides.

---

## 6. If the author matters

If the text is clearly in dialogue with a known tradition or author:

> “Translate this as if the author were writing originally in **[target language]**, but without modernizing the voice.”

or

> “This is 19th-century prose; avoid contemporary idioms.”

---

## 7. A minimal but effective version

If you want something short you can type quickly:

> “Translate this literarily, not literally—preserve voice, imagery, and emotional effect.”

That alone already works surprisingly well.

---

## 8. One honest limitation (important)

Even the best literary translation is **an interpretation**. There is no neutral or “correct” version—only defensible choices. The more guidance you give about tone, era, and priorities, the closer the result will be to what *you* value in the original.

If you like, you can paste a short literary passage (even a paragraph), and I can:

* show how different prompts change the outcome, or
* translate it and annotate the craft decisions line by line.

That’s often the most illuminating way to see this in action.

[end response]

Given this comprehensive response and guidance, had the children's story author and colleagues alluded to above known of and used this, would the machine translation have (almost) met the experts' assessment for a final copy?

That is the next project, which will still (opinion!) require competent language users to sign off on final versions. 

Here is the agenda for the next developments in inter-linear translation conversion tools.

Monday, January 12, 2026

Convert side-by-side (parallel) translation display to inter-linear

You have source text and translation and you are looking at a side-by-side (parallel) display but would prefer, for whatever reason, to have an interlinear-like display.

One alternative is to prompt a chatbot (AI service) to make the translation and display source and target the way you want. 

Another is to take the text and translation and pop them into this tool for an inter-linear display.

  
See if it works for you. It can be used online and offline.
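The conversion itself is simple to sketch. Assuming the source and translation are already split into aligned segments, a few lines of Python can interleave them into the paired display used on this blog. This is an illustrative sketch, not the actual tool's code; the function name and sample segments are mine.

```python
def to_interlinear(source_segments, translated_segments):
    """Interleave aligned source/translation segments into an
    interlinear-style display: each source line is followed by its
    translation, with a blank line separating the pairs."""
    pairs = [f"{src}\n{tgt}"
             for src, tgt in zip(source_segments, translated_segments)]
    return "\n\n".join(pairs)

source = ["We can try online chat.", "Here is how it works."]
czech = ["Můžeme zkusit online chat.", "Zde je návod, jak to funguje."]
print(to_interlinear(source, czech))
```

The hard part, of course, is getting the segments aligned in the first place; once they are, the display format is trivial.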


Tuesday, December 23, 2025

Fable Cottage

See https://inter-linear.blogspot.com/2025/12/fable-cottage-subtitled-text-online.html for the link to an online subtitled text readings site. Well worth a look, and hang out for a while to experience all their free stuff, if not subscribe. Great application of interlinear-like reading, learning, etc.
jkm

Tuesday, November 18, 2025

1st contact, conversation online

[Boilerplate for future online conversation
Boilerplate pro budoucí online konverzaci

partner.
partner.]


We can try online chat. Here is how it works.
Můžeme zkusit online chat. Zde je návod, jak to funguje.

You decide when you can meet online with me. Tell
Vy rozhodujete, kdy se se mnou můžete setkat online. Sdělit

me date and time. If I am available, I will tell
mě datum a čas. Pokud budu k dispozici, řeknu

you. I do not have meetings before 10 in the
Vy. Nemám schůzky před 10

morning.
ráno.

Five or ten minutes before the scheduled meeting,
Pět nebo deset minut před plánovanou schůzkou,

I will send you an email with a link.
Pošlu vám e-mail s odkazem.

You click on the link and enter the Google Meet
Kliknete na odkaz a vstoupíte do Google Meet

space. There we can see and talk with each other.
plocha. Tam se můžeme vidět a mluvit spolu.

While conversing, I make chat notes. After we say
Během konverzace si dělám poznámky do chatu. Poté, co řekneme

good-bye, I clean up the notes and send a copy
sbohem, uklidím poznámky a posílám kopii

for you to study or review.
abyste si je mohli prostudovat nebo prohlédnout.

These meetings are conversations, not classroom
Tato setkání jsou rozhovory, nikoli třída

lessons. So we talk about what you want to talk
lekce. Takže mluvíme o tom, o čem chcete mluvit

about. I ask questions. You ask questions. We
o. kladu otázky. Kladete otázky. My

tell lies. It goes like that till the end. It is
lhát. Tak to jde až do konce. To je

a 45 minute meeting. The first is always free.
schůzka na 45 minut. První je vždy zdarma.

When do you want to meet?
Kdy se chceš sejít?


==================================================
Display Order: Source first
Splitting: Character
Service: Google
Source Characters: 806
Translated Characters: 767
Source Words: 160
Translated Words: 134
Generated: 18 November 2025

Best practices text segmentation

Best practices for text segmentation in translation include using Computer-Assisted Translation (CAT) tools to break text into meaningful, logical units like sentences or phrases, ensuring segments are concise and retain complete units of meaning to fit short-term memory, and prioritizing consistency through defined rules and Translation Memories (TMs) to improve quality and efficiency. Additionally, proper source document formatting, including clear structure and avoiding unnecessary formatting, helps CAT tools parse content correctly and create clearer segments. 

Understanding Text Segmentation
  • Definition: Text segmentation is the process of dividing a source text into smaller, translatable units, called "segments". 
Best Practices for Text Segmentation

  1. Break into meaningful units: Segment text at natural linguistic boundaries, such as sentences, rather than arbitrary points like every 5-10 words.
  2. Keep segments concise: Segments should be short enough for a translator to easily retain the information in their short-term memory.
  3. Ensure completeness: Each segment should represent a complete thought or unit of meaning to avoid confusion and an unnatural translation.
  4. Utilize CAT Tools & Translation Memories: CAT tools, when configured with appropriate segmentation rules, help identify and manage these units. This promotes consistency and allows for the reuse of previously translated segments in a Translation Memory.
  5. Format source content well: Clear, well-organized source documents with consistent formatting (e.g., proper use of paragraph breaks, hard returns, and page breaks) ensure that CAT tools can parse the content correctly and produce unambiguous segments.
  6. Define segmentation rules: Establish clear rules, often in a format like SRX (Segmentation Rules Exchange), to define how text should be broken down for specific projects or language pairs.
  7. Perform automated QA checks: After segmentation and translation, automated quality assurance checks help identify and correct errors like misspellings or incorrect terminology, preventing them from being added to the Translation Memory.
  8. Prioritize consistency: Over time, consistent segmentation practices, combined with well-maintained TMs, significantly increase content reuse, reduce translation costs, and boost overall translation quality. 

  • Purpose: It makes translation faster, easier, and more consistent by allowing translators to focus on smaller, logical chunks of text. 
  • Tools: Segmentation is a foundational step in Computer-Assisted Translation (CAT) tools and is configured using specific segmentation rules. 
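As a minimal illustration of point 1 above (breaking at natural linguistic boundaries), here is a naive sentence segmenter in Python. It splits at ., !, or ? followed by whitespace; real CAT tools rely on SRX rules precisely because this simple pattern mishandles abbreviations, initials, and numbers.

```python
import re

def segment_sentences(text):
    """Naively split text into sentence segments at ., !, or ?
    followed by whitespace. SRX rules exist because this pattern
    breaks on abbreviations like "Dr." and on decimal numbers."""
    parts = re.split(r'(?<=[.!?])\s+', text.strip())
    return [p for p in parts if p]

segment_sentences("It makes translation faster. Does it? Yes!")
# → ['It makes translation faster.', 'Does it?', 'Yes!']
```

Even this crude version segments at sentence boundaries rather than arbitrary points, which is the core of the best practice.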

New kid on the block

 Try inter-linear.com

Enter text or upload a file, select a translation service, submit this source for translation and display to copy or download. The display, in contrast to side-by-side or parallel texts, can be segmented by sentences, paragraphs, word/phrase, or characters per line.

Additionally there are tools for converting source and translation into inter-linear displays as well as segmenting text into sentences.

Expect output suitable for everyday language enthusiasts, not for scholarly or biblical exegesis.

Provide feedback on your experience with inter-linear displays and the results you got or wanted, or just enjoy the possibilities.

 

Thursday, October 2, 2025

Best strategy for an AI inter-linea display?

 The best strategy for producing an interlinear display of a translation using Google Translate is to translate the complete submission first and then use client-side logic (JavaScript/CSS) for display conversion.


Analysis of Strategies

  1. Translate Complete Submission First (Recommended)

    • Process: Send the full sentence or segment to the Google Cloud Translation API. The API uses Neural Machine Translation (NMT), which translates the entire sentence at once to ensure contextual accuracy and capture nuances, idiomatic expressions, and complex grammar. The complete translated text is returned. Client-side code (JS/CSS) is then used to align the original text with the translated text for an interlinear (line-by-line or word-under-word) display, often by segmenting the text into smaller chunks (e.g., words, phrases) and attempting to map them, or simply displaying the source text and the full translation on separate lines.

    • Pros:

      • Highest Quality Translation: NMT produces the most human-like and contextually accurate result.

      • Efficiency: Fewer API calls are made (one per sentence/segment), which is generally faster and more cost-effective.

    • Cons:

      • Alignment Challenge: The primary difficulty is that Google's NMT does not provide a direct word-to-word or phrase-to-phrase mapping in its standard API response. Perfect alignment for an interlinear display is often an extremely complex, if not impossible, computational challenge without the underlying translation model data.

  2. Translate Words and Phrases Iteratively (Not Recommended)

    • Process: Break the source text down into individual words or small phrases and make a separate API call for each one. Then, display the original word and its corresponding translation.

    • Pros:

      • Simple Alignment: Alignment is trivial since each call corresponds to a single word/phrase.

    • Cons:

      • Poor Translation Quality: Google's NMT is specifically designed to work on full sentences to understand context. Translating isolated words or short phrases loses context, resulting in poor, literal, and often incorrect translations.

      • Inefficiency & Cost: This dramatically increases the number of API calls, leading to higher latency and significantly greater costs.


Conclusion

Since Google Translate's core strength is its contextual Neural Machine Translation, you should prioritize translation quality by sending the full segment. The trade-off is that you will need to implement a sophisticated (and likely imperfect) client-side logic to attempt to align the words for the interlinear display.
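Under the recommended strategy, the safest client-side alignment is at sentence level: pair sentence i of the source with sentence i of the translation, and fall back to a whole-block display when the counts differ (NMT occasionally merges or splits sentences). A minimal sketch, with the translation step and sentence splitting left out; the function name is mine, not part of any API:

```python
def interlinear_pairs(source_sents, translated_sents):
    """Pair sentences one-to-one for an interlinear display.
    If the translation came back with a different number of
    sentences, fall back to showing the two full blocks one
    above the other rather than guessing at the mapping."""
    if len(source_sents) != len(translated_sents):
        return [(" ".join(source_sents), " ".join(translated_sents))]
    return list(zip(source_sents, translated_sents))
```

Word-level alignment would need data the standard API response does not provide, which is exactly the alignment challenge described above.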

The other strategy involves losing the key benefit of Google Translate's technology.

To see how to get started with the Google Translate API, watch ... video: How to use the Google API to translate texts. This video is relevant because it demonstrates using the Google Translate API, which is the necessary first step for either strategy.


Saturday, September 20, 2025

Environment and AI solutions


20.09.25, ref. per Gemini

Question. Which is more friendly to the environment, for example in terms of electricity usage, carbon footprint, etc., client-side or server-side AI solutions? 

Response. In general, client-side AI solutions are more environmentally friendly than server-side solutions, but the answer is complex and depends on many factors. Client-side solutions, which run on a user's local device, eliminate or reduce the need for massive, energy-intensive data centers for inference (the use of a trained AI model). However, the training of these models, regardless of where they are ultimately deployed, remains a major environmental concern.

***

### Server-Side AI: The Environmental Burden

Server-side AI relies on large-scale data centers, which are one of the biggest contributors to the environmental footprint of AI. These data centers consume immense amounts of electricity for three main purposes:

* **Training and Inference:** The process of training a large language model (LLM) like GPT-4 can consume a staggering amount of energy. While a single query (inference) uses a tiny fraction of that energy, the sheer volume of billions of daily queries for popular models adds up to a massive cumulative carbon footprint.
* **Cooling:** Data centers generate a tremendous amount of heat. To prevent hardware from overheating, they require sophisticated and highly energy-intensive cooling systems, which can also consume millions of gallons of water annually.
* **Idle Capacity:** To ensure high availability and responsiveness, data centers must maintain a degree of idle capacity, which still consumes energy while waiting to handle traffic spikes.

The carbon footprint of server-side AI also depends on the **location** of the data center and its energy source. A data center running on a grid powered by renewable energy like wind or solar will have a significantly lower carbon footprint than one relying on fossil fuels.

### Client-Side AI: The Greener Alternative

Client-side, or on-device, AI runs directly on a user's device, such as a smartphone or laptop. Because the processing happens locally, it avoids the energy consumption and carbon emissions associated with data transmission and large data centers.

While client-side AI is more efficient for **inference**, it's crucial to understand a few key points:

* **Training is still a major factor:** The initial training of the AI model, whether it's for on-device or server-side use, is the most energy-intensive part of the process. This training is almost always done in a data center.
* **Limited Capabilities:** On-device models are typically smaller and less powerful than their server-side counterparts. This is because consumer devices lack the massive computational power and memory of a data center.
* **Device-level Consumption:** While they bypass the data center, on-device models still use a device's battery and CPU, which consumes energy. This can lead to faster battery drain and generate some heat.

### A Nuanced Comparison

Ultimately, the choice between client-side and server-side AI for a given application is a trade-off between performance and environmental impact. For computationally demanding tasks like training a large generative AI model, a server-side solution is currently the only option. For tasks that can be performed by smaller models, such as basic text completion or image classification, client-side AI is a much more energy-efficient and environmentally responsible choice.

As on-device hardware becomes more powerful and AI models are optimized to be smaller and more efficient, we'll likely see a shift toward more client-side solutions, reducing the reliance on energy-hungry data centers for many everyday AI applications.

Musing futures

Long-form interlinear translations are available online, that is, ones not having to do with biblical or scriptural texts. Fiction mostly, but there is some non-fiction to be found. The interlinear examples, many published for online or physical book reading, show words/phrases aligned with the source text. This is a first step toward getting closer and closer to contextual meaning, e.g., based on historical understanding of language items, hermeneutics, the etymology of a word, ngrams (frequency of word usage over time), etc.

Which of the current tools included in the set of experiments shows promise for use in creating long-form interlinear translations? (Of course, here we refer to the softened understanding of interlinear, termed inter-linea.) For discussion purposes, see https://jkm.free.nf/inter-linea/w4word.html.

A product of the above word/phrase approach, based on a character-count limitation per line, is below: the alignment is not entirely correct and there is no glossing, but it is serviceable, I believe. If so, perhaps a more rigorous(?) translation service might be employed, something like DeepL? Then the word/phrase approach (versus others on the same site) might function as a prototype for long-form translations . . . to publish?
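The character-count limitation behind the word/phrase display can be sketched with Python's standard textwrap module, which breaks text into chunks of at most N characters at word boundaries; each chunk is then translated and shown above its translation. The width of 50 is an assumption for illustration, not the actual tool's setting.

```python
import textwrap

def wrap_for_display(text, width=50):
    """Break text into word/phrase chunks of at most `width`
    characters, splitting only at word boundaries. Each chunk
    is then sent for translation and paired with the result."""
    return textwrap.wrap(text, width=width)

chunks = wrap_for_display(
    "In essence, Bohr and Heisenberg argued that science had "
    "finally caught up with the conceptual problems involved "
    "in the description of reality", width=50)
# every chunk fits the per-line character budget
assert all(len(c) <= 50 for c in chunks)
```

Because the chunks ignore sentence and phrase boundaries, the translation of each chunk can drift, which is exactly the alignment looseness visible in the sample below.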

In essence, Bohr and Heisenberg argued that science had finally caught up with the conceptual problems involved in the description of reality that philosophers had been warning of for centuries. Bohr is quoted as saying: "There is no quantum world. There is only an abstract quantum physical description. It is wrong to think that the task of physics is to find out how nature is. Physics concerns what we can say about nature." This vaguely positivist statement was echoed by Heisenberg:. "[W]e have to remember that what we observe is not nature in itself but nature exposed to our method of questioning." 

Their broadly anti-realist "Copenhagen interpretation" "denying" that the wave function represents the real physical state of a quantum system quickly became the dominant way of thinking about quantum mechanics. More recent variations of such anti-realist interpretations suggest that the wave function is simply a way of "coding" our experience, or our subjective beliefs derived from our experience of the physics, allowing us to use what we've learned in the past to predict the future.

The word/phrase display.

In essence, Bohr and Heisenberg argued that
Im Wesentlichen argumentierten Bohr und Heisenberg das

science had finally caught up with the conceptual
Die Wissenschaft hatte endlich das Konzept eingeholt

problems involved in the description of reality
Probleme, die an der Beschreibung der Realität verbunden sind

that philosophers had been warning of for
dass Philosophen vor gewarnt hatten

centuries. Bohr is quoted as saying: "There is no
Jahrhunderte. Bohr wird mit den Worten zitiert: "Es gibt keine

quantum world. There is only an abstract quantum
Quantenwelt. Es gibt nur ein abstraktes Quantum

physical description. It is wrong to think that
Physische Beschreibung. Es ist falsch zu denken, dass

the task of physics is to find out how nature is.
Die Aufgabe der Physik ist es, herauszufinden, wie die Natur ist.

Physics concerns what we can say about nature."
Die Physik betrifft das, was wir über die Natur sagen können. "

This vaguely positivist statement was echoed by
Diese vage positivistische Aussage wurde von wiederholt von

Heisenberg:. "[W]e have to remember that what we
Heisenberg:. "[Wir müssen uns daran erinnern, was wir

observe is not nature in itself but nature
beobachten ist nicht die Natur an sich, sondern die Natur

exposed to our method of questioning."
unserer Befragungsmethode ausgesetzt. "

Their broadly anti-realist "Copenhagen
Ihr weitläufig anti-realistischer "Kopenhagen"

interpretation" "denying" that the wave function
Interpretation "" leugnen ", dass die Wellenfunktion fungiert

represents the real physical state of a quantum
repräsentiert den wirklichen physischen Zustand eines Quanten

system quickly became the dominant way of
Das System wurde schnell zur dominierenden Art von

thinking about quantum mechanics. More recent
Denken Sie an die Quantenmechanik. Neuer

variations of such anti-realist interpretations
Variationen solcher antirrealistischer Interpretationen

suggest that the wave function is simply a way of
Schlagen Sie vor, dass die Wellenfunktion einfach eine Möglichkeit ist

"coding" our experience, or our subjective
"codieren" unsere Erfahrung oder unsere subjektive

beliefs derived from our experience of the
Überzeugungen abgeleitet aus unserer Erfahrung der

physics, allowing us to use what we've learned in
Physik, damit wir das verwenden können, was wir gelernt haben

the past to predict the future.
die Vergangenheit, die Zukunft vorherzusagen.

An enhancement to this display might be to show the target before the source, where one wanted to create an inter-linea for more than personal use. And one of the two languages might be set in a smaller font or colored in some way to reduce distraction and dependence on one language rather than the other.
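That enhancement can be sketched as follows, assuming the source and target have already been chunked into aligned lists (the two sample target lines are taken verbatim from the display above); the target_first flag is a hypothetical parameter for the target-before-source ordering, not a feature of the existing page:

```python
def interleave(source_lines, target_lines, target_first=False):
    """Pair aligned source/target chunks and interleave them,
    optionally showing the target line before the source line."""
    out = []
    for src, tgt in zip(source_lines, target_lines):
        first, second = (tgt, src) if target_first else (src, tgt)
        out.append(first)
        out.append(second)
        out.append("")          # blank line between chunk pairs
    return out[:-1]             # drop the trailing blank line

src = ["In essence, Bohr and Heisenberg argued that",
       "science had finally caught up with the conceptual"]
tgt = ["Im Wesentlichen argumentierten Bohr und Heisenberg das",
       "Die Wissenschaft hatte endlich das Konzept eingeholt"]
print("\n".join(interleave(src, tgt, target_first=True)))
```

In an HTML rendering, the "first" and "second" lines would simply carry different CSS classes for the smaller font or color treatment.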

Tuesday, September 16, 2025

Some interesting links

 [This post is open-ended. As links become known and are assessed relevant to the discussions here, they will be added. Got something to suggest?] 

https://eurolinguiste.com/interlinear-books-review-a-dual-language-book/

https://interlinearbooks.com/

https://www.alexanderarguelles.com/academy/

https://textanalysisonline.com/nltk-sentence-segmentation 

And here are some more with short notes.

https://readlang.com/
nifty online teaching/learning using interlinear concepts and web page technologies

https://interlinearbooks.com/tools/interlinearizer/
multi-step process on- and off-line, labor intensive

https://paratext.org
multi-step process within a software program using training corpora

https://github.com/gbroques/interlinearizer/blob/main/README.md
may be useful in building interlinear, aligned translations

https://www.lucalampariello.com/create-interlinear-texts/
Imtranslator enthusiast, recommended

https://interlinearbooks.com/tools/interlinearizer/splitter/
text splitter, useful

https://github.com/lukastk/interlinearize
relies on Calibre conversions, looks promising as local/server-side solution for longer works 

Hyrskykari, Aulikki; Majaranta, Päivi; Räihä, Kari-Jouko (2005). From gaze control to attentive interfaces. Abstract: Interactive applications that make use of eye tracking have traditionally been based on command-and-control. Applications that make more subtle use of eye gaze have recently become increasingly popular in the domain of attentive interfaces that adapt their behaviour based on the visual attention of the user. We provide a review of the main systems and application domains where this genre of interfaces has been used.

Wednesday, August 27, 2025

OpenAI API key

OpenAI

[Check the OpenAI website for the latest . . . this is an old post.]


Here's a step-by-step guide to obtain an OpenAI API key and considerations for using it in a web application:


---

### **Steps to Get an OpenAI API Key**

#### **1. Create an OpenAI Account**
1. Go to [OpenAI's website](https://openai.com/).
2. Click **Sign Up** to create an account, or **Log In** if you already have one.

#### **2. Access the API Section**
1. Once logged in, navigate to the OpenAI **API Dashboard** at [https://platform.openai.com/](https://platform.openai.com/).
2. Click on your profile picture in the top-right corner and select **API Keys** from the dropdown menu.

#### **3. Generate an API Key**
1. On the API Keys page, click the **Create new secret key** button.
2. A new API key will be generated. Copy the key immediately, as it will only be displayed once.
   - Save the key securely (e.g., in a password manager).

#### **4. Set Up Billing**
1. Go to the **Billing** section in your account settings.
2. Add a valid payment method to enable API usage.
   - Note: OpenAI offers a free trial for new users with a limited amount of credits. Check the specific details on their pricing page.

---

### **Considerations for Using the API Key**

#### **1. Costs**
- **Pay-as-you-go model**: You will be charged based on usage.
- Refer to OpenAI’s [pricing page](https://openai.com/pricing) to understand the costs per token for different models (e.g., GPT-4, GPT-3.5).
- Monitor usage carefully to avoid unexpected charges by setting usage limits in the **Usage** section of your dashboard.

#### **2. Rate Limits**
- OpenAI imposes rate limits on API usage. Check your account for specific limits and adjust your use accordingly.

#### **3. Compliance**
- Review OpenAI's **Usage Policies** to ensure you comply with their guidelines, particularly when dealing with translations that may involve sensitive or copyrighted content.

---

### **Implementation in the Web App**

Users input their API key to perform translation requests. In doing so:

1. **Input Field for API Key**
   - A secure input field lets you enter your API key.
   - Users' API keys are not saved on the server.

2. **Translation Request**
   - Translations and displays are not guaranteed one hundred percent accurate.
   - Proofread results. Report any problems . . . so we (OpenAI and the web app provider) can improve.
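To make the flow concrete, here is a minimal sketch (in Python rather than the web app's client-side JavaScript) of how such a client might assemble a translation request against OpenAI's chat-completions endpoint. The model name and prompt wording are illustrative assumptions, not the app's actual implementation; note that the user's key appears only in the request headers and is never written anywhere:

```python
import json

def build_translation_request(api_key, text, target_lang="German",
                              model="gpt-4o-mini"):
    """Assemble the URL, headers, and JSON body for an OpenAI
    chat-completions call. The key lives only in the headers."""
    url = "https://api.openai.com/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [
            {"role": "system",
             "content": f"Translate the user's text into {target_lang}. "
                        "Return only the translation."},
            {"role": "user", "content": text},
        ],
    }
    return url, headers, json.dumps(body)

url, headers, payload = build_translation_request(
    "sk-...", "There is no quantum world.")
```

The tuple would then be handed to whatever HTTP client the app uses; sending it, handling errors, and displaying the result are omitted here.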

Monday, August 25, 2025

Client-side web apps--try 'em

 FOR TESTING

Available as they develop.

https://jkm.free.nf/inter-linea

May be deactivated without notice.

Feedback appreciated.

final.copy@yahoo.com



Sunday, August 24, 2025

DIY sentence-by-sentence translation

You have your source text and you have the target translation. However, they are separate, as one would see them side-by-side from, let's say, Google Translate. How do you get a sentence-by-sentence display for closer reading and understanding of your source and its translation?

You can use this tool right here, or from whence it comes, jkm.free.nf/inter-linea/sidebyside.html.

A kind of DIY quick fix kindly made available free.
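A rough sketch of the pairing such a tool might perform, assuming a naive regex sentence splitter (real tools, e.g. the NLTK segmenter linked in an earlier post, handle abbreviations better) and assuming both texts yield the same number of sentences:

```python
import re

def split_sentences(text):
    """Naive sentence splitter: break after ., !, or ? followed by
    whitespace. Abbreviations like "Dr." will fool it."""
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text.strip()) if s]

def pair_sentences(source, translation):
    """Zip source and target sentences into (source, target) pairs.
    Assumes both texts split into the same number of sentences."""
    return list(zip(split_sentences(source), split_sentences(translation)))

pairs = pair_sentences(
    "There is no quantum world. There is only an abstract description.",
    "Es gibt keine Quantenwelt. Es gibt nur eine abstrakte Beschreibung.",
)
for src, tgt in pairs:
    print(src)
    print(tgt)
    print()
```

When the sentence counts differ, the zip silently drops the tail, which is exactly the kind of misalignment that proofreading the output should catch.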

 

Interlinear Text Display

Enter your source text and translation above, then click "Generate Interlinear" to see the sentence-by-sentence display.

Saturday, August 23, 2025

Concept view, line-by-line alt-trans

This is a generic template to visualize or build an actual web (html) app. You can try it, but it won't produce a translation nor a display. A working version may be coming soon, here or elsewhere.

 

Interlinear Translator (Prototype)

Interlinear Translator (Client-Side Demo)

Word/char count?


For reference--English

Based on the average word length of 5 to 6 characters (including spaces and punctuation), 300,000 characters is approximately 50,000 to 60,000 words.
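The arithmetic, as a quick check (dividing the character count by the assumed average word length):

```python
def estimate_words(char_count, avg_word_len_range=(5, 6)):
    """Rough word-count range from a character count, given an assumed
    average word length (spaces and punctuation included)."""
    lo, hi = avg_word_len_range
    return char_count // hi, char_count // lo  # (low estimate, high estimate)

print(estimate_words(300_000))  # → (50000, 60000)
```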