
How to use FP8 for LLM finetuning? #2600

@kailashg26

Description


What are the steps to use FP8 from Transformer Engine (TE) in TorchTune, instead of BF16, for full finetuning and LoRA?
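For context on what such an integration involves: TE's FP8 path typically swaps `nn.Linear` layers for `te.Linear` and runs forward passes under `te.fp8_autocast` with a delayed-scaling recipe, which requires Hopper-class (or newer) GPUs. Since that cannot run without TE and the right hardware, below is a CPU-only NumPy sketch of the core numerical idea behind FP8 scaling: pick a per-tensor scale so the tensor's amax fills the FP8 E4M3 dynamic range, quantize, then dequantize. The function name `fp8_quantize_dequantize` is hypothetical, and integer rounding is a coarse stand-in for the real E4M3 value grid; this is an illustration, not TE's implementation.

```python
import numpy as np

E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3

def fp8_quantize_dequantize(x):
    """Simulate the per-tensor scaling idea used for FP8 training:
    scale so the tensor's amax maps to the FP8 dynamic range,
    quantize, then dequantize back to higher precision."""
    amax = float(np.abs(x).max())
    scale = E4M3_MAX / amax if amax > 0 else 1.0
    # Rounding to an integer grid is a coarse stand-in for the real
    # 4-bit-exponent / 3-bit-mantissa value grid (illustration only).
    q = np.clip(np.round(x * scale), -E4M3_MAX, E4M3_MAX)
    return q / scale, scale

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4)).astype(np.float32)
y, scale = fp8_quantize_dequantize(x)
max_err = float(np.abs(x - y).max())
```

With a well-chosen scale the round-trip error stays small relative to the tensor's magnitude, which is why FP8 with per-tensor scaling can approach BF16 quality while halving activation/weight memory for the layers it covers.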
