
Suggestion: clarify “left to right” wording for decoder-only models (RTL languages) #1153

@aztecx

Description


Hi, first of all thank you for this course as I am finding it very helpful.

In Chapter 1 (How 🤗 Transformers solve tasks), there is a sentence that reads:

"Decoder-only models (like GPT, Llama): These models process text from left to right and are particularly good at text generation tasks."

I understand that “left to right” here refers to the sequence order inside the model (predicting each token from the previous tokens in the sequence).

However, for learners whose first language uses a right-to-left script (Arabic, Persian, Hebrew, etc.), or for people very new to ML, this wording can be confusing: it can sound like it refers to the visual direction of the text on the screen (left versus right) rather than to token order in the model.

Would it make sense to rephrase this slightly to make it clearer for beginners and multilingual learners, or to add a small note that “left to right” means “in sequence order, from earlier tokens to later tokens,” independent of how the language is written visually? A small sketch of what I mean follows below.
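To illustrate the suggested wording, here is a minimal sketch (assuming the `transformers` library and the public `gpt2` checkpoint, used only as an example tokenizer): the token indices of an RTL sentence still run from earlier to later, and an autoregressive decoder predicts each position from the positions before it, independent of the script's visual direction.

```python
# Minimal sketch: "left to right" means token-index order, not screen direction.
# Assumes the `transformers` library and the public "gpt2" checkpoint.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "مرحبا بالعالم"  # "Hello, world" in Arabic, a right-to-left script
ids = tokenizer(text)["input_ids"]

# A decoder-only model conditions position i on positions 0..i-1,
# regardless of how the script is rendered on screen.
for i, token_id in enumerate(ids):
    # repr() avoids bidirectional-text rendering confusion in the terminal
    print(i, repr(tokenizer.decode([token_id])))
```

The indices printed here go 0, 1, 2, ... even though the text is displayed right to left, which is exactly the distinction the proposed note would make explicit.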

This is a small wording detail, but I think clarifying it could help avoid confusion for learners whose native languages are written right to left, or who are reading this as their first exposure to these concepts.

Thanks again for the great course!
For context, my native language is Arabic and this was slightly confusing at first glance.
