
[Improvement] accelerate T5 model conversion and fix bloom model on multi-process #447

Merged 3 commits into NVIDIA:main on Feb 13, 2023

Conversation

@lanking520 (Contributor) commented on Feb 11, 2023

For example, Flan-T5-xxl (49 GB) conversion time drops from 11 min to 4 min on a 64-core CPU with 4 concurrent processes.

If you run the Bloom conversion script with:

-i bigscience/bloomz-3b -o /tmp/ft_model3/ -tp 1 -p 4 -dt fp32

you will reproduce the bus error. This PR includes a fix to address that.
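A minimal sketch of both ideas, under stated assumptions (the function bodies below are placeholders, not the repository's actual conversion logic): fan the per-weight work out over a `multiprocessing.Pool`, and hand workers plain numpy arrays rather than `torch.nn.Parameter` objects, since torch pickles tensors through shared memory, which is one plausible trigger for the bus error fixed here.

```python
# Hedged sketch: names and structure are illustrative, not the PR's code.
from multiprocessing import Pool

import numpy as np
import torch


def convert_and_save_parameter(param_name: str, param: np.ndarray,
                               saved_dir: str) -> None:
    """Convert a single weight and write it to disk (placeholder logic)."""
    np.save(f"{saved_dir}/{param_name.replace('/', '.')}.npy", param)


def convert_checkpoint(model: torch.nn.Module, saved_dir: str,
                       processes: int = 4) -> None:
    # Detach to numpy in the parent process: numpy arrays pickle as plain
    # bytes, whereas torch tensors are shipped through shared memory, a
    # plausible source of the bus error this PR addresses.
    args = [(name, param.detach().cpu().numpy(), saved_dir)
            for name, param in model.named_parameters()]
    with Pool(processes=processes) as pool:
        pool.starmap(convert_and_save_parameter, args)
```

With 4 workers, this kind of fan-out is consistent with the 11 min to 4 min reduction reported above; serialization and disk I/O keep the speedup below a perfect 4x.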

@lanking520 (Contributor, Author) commented:

@byshiue

@lanking520 changed the title from "[Improvement] accelerate T5 model conversion on large models" to "[Improvement] accelerate T5 model conversion and fix bloom model on multi-process" on Feb 11, 2023

```diff
-def convert_and_save_parameter(config: PretrainedConfig,
-                               name: str,
-                               param: torch.nn.Parameter,
+def convert_and_save_parameter(param_name: str,
```
@byshiue (Collaborator) commented:

You changed the API, but didn't update line 333.

@lanking520 (Contributor, Author) commented:

I changed that in the following commit. But maybe we can just remove the if/else statement and use starmap_async; a single process still works under that code path.

@lanking520 (Contributor, Author) commented:

I would assume everyone, even on a laptop, now has at least 4 cores, so the 1-core case is not common.
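To illustrate the suggestion from the thread above: `Pool.starmap_async` runs through the same code path regardless of pool size, so no separate single-process branch is needed. A minimal, self-contained sketch (`do_work` is a hypothetical stand-in for the conversion function):

```python
from multiprocessing import Pool


def do_work(name: str, value: int) -> str:
    return f"{name}={value}"


def run(tasks, processes: int = 4):
    # No if/else on process count: a Pool of size 1 simply runs the
    # tasks serially through the same code path.
    with Pool(processes=processes) as pool:
        async_result = pool.starmap_async(do_work, tasks)
        return async_result.get()


if __name__ == "__main__":
    print(run([("a", 1), ("b", 2)], processes=1))  # one process works too
    print(run([("a", 1), ("b", 2)], processes=4))
```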

@byshiue merged commit 9b6d718 into NVIDIA:main on Feb 13, 2023