Does the Current Implementation Break Full Attention When block_length < gen_length? #9

@ashun989

Description

Thank you for your excellent work. I would like to confirm my understanding of the current design:

As I understand it, the current library seems suitable only for dLLMs that natively support block attention. For dLLMs that use full attention (e.g., LLaDA), if block_length < gen_length, then while generating the current block the model cannot access information from the subsequent (still-masked) blocks of the generation span. In that case, doesn't the attention mechanism no longer satisfy the requirements of full attention?
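To make the concern concrete, here is a minimal sketch of the difference in what the model can attend to at each decoding step. The names, shapes, and mask id below are purely illustrative assumptions, not this repo's API:

```python
import torch

# Hypothetical illustration: compare the tokens visible to the model when
# decoding block b under (a) full attention over the entire generation span,
# as LLaDA-style dLLMs assume, versus (b) block-wise decoding in which the
# sequence is truncated after the current block.
prompt_len, gen_len, block_len = 16, 128, 32  # assumed sizes, block_len < gen_len
mask_id = 0  # placeholder id for still-masked positions

# Full-attention view: prompt plus ALL generation positions (future blocks
# are present as masked tokens and can still be attended to).
x_full = torch.full((1, prompt_len + gen_len), mask_id)

for b in range(gen_len // block_len):
    cur_end = prompt_len + (b + 1) * block_len
    # Block-wise view: everything after the current block is simply absent.
    x_block = x_full[:, :cur_end]
    print(f"block {b}: full-attention sees {x_full.shape[1]} tokens, "
          f"block-wise decoding sees {x_block.shape[1]} tokens")
```

If this reading is correct, the later blocks are not merely masked out but entirely missing from the attention context, which is what prompts my question above.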
