
fix --use-cpu-initialization error when expert is not tensor-parallel #413

Open
wants to merge 1 commit into base: main
Conversation


taozhiwei commented Jul 3, 2024

Using --use-cpu-initialization fails when the non-expert layers are tensor-parallel but the experts are not.
In that case per_partition_size is equal to master_weight.shape[partition_dim], so my_weight_list has length 0 on every rank except rank 0, and torch.cat cannot be used; we should assign the weight instead.
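For context, here is a minimal sketch of the failure mode and the assign-based fix, assuming Megatron-style CPU initialization where the master weight is split along partition_dim across the tensor-parallel ranks. The function name and signature below (initialize_affine_weight_cpu_sketch) are illustrative and not the exact code changed in this PR.

```python
import torch

def initialize_affine_weight_cpu_sketch(weight, master_weight,
                                         per_partition_size, partition_dim,
                                         rank, world_size, stride=1):
    """Sketch: split a CPU master weight across tensor-parallel ranks.

    When the expert is not tensor-parallel, per_partition_size equals
    master_weight.shape[partition_dim], so torch.split returns a single
    chunk and every rank except rank 0 ends up with an empty
    my_weight_list, which makes torch.cat(..., out=weight) fail.
    """
    per_partition_per_stride_size = per_partition_size // stride
    weight_list = torch.split(master_weight,
                              per_partition_per_stride_size,
                              dim=partition_dim)
    my_weight_list = weight_list[rank::world_size]

    with torch.no_grad():
        if per_partition_size == master_weight.shape[partition_dim]:
            # Expert is not tensor-parallel: every rank holds the full
            # weight, so assign the single master chunk instead of
            # concatenating a possibly empty list.
            weight.data.copy_(weight_list[0])
        else:
            # Expert is tensor-parallel: each rank concatenates its strided
            # shards into its local partition, as before.
            torch.cat(my_weight_list, dim=partition_dim, out=weight)
```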
Please help review @GuanhuaWang @tjruwase, thanks.

Signed-off-by: taozhiwei <taozhiweigis@163.com>
GuanhuaWang (Member)

Hi @taozhiwei ,

Just curious: what is the case where the experts are not TP but the other layers are TP, given that experts usually have more aggregated weights than the other parts?

taozhiwei (Author) commented Jul 12, 2024

> Hi @taozhiwei ,
>
> Just curious: what is the case where the experts are not TP but the other layers are TP, given that experts usually have more aggregated weights than the other parts?

When the parameter --enable-expert-tensor-parallelism is not set, the experts are not TP. For example, ds_pretrain_gpt_125M_MoE64.sh does not set this parameter, so adding --use-cpu-initialization directly reports an error. When I need to compare whether the convergence curves are completely consistent, I add --use-cpu-initialization. @GuanhuaWang

GuanhuaWang (Member) commented Jul 12, 2024

> > Hi @taozhiwei ,
> >
> > Just curious: what is the case where the experts are not TP but the other layers are TP, given that experts usually have more aggregated weights than the other parts?
>
> When the parameter --enable-expert-tensor-parallelism is not set, the experts are not TP. For example, ds_pretrain_gpt_125M_MoE64.sh does not set this parameter, so adding --use-cpu-initialization directly reports an error. When I need to compare whether the convergence curves are completely consistent, I add --use-cpu-initialization. @GuanhuaWang

Hi @taozhiwei , I think I should rephrase my question, since I am not asking about configurations: what are the application scenarios for the experts not using TP while the rest uses TP (i.e. expert not TP but non-expert TP)? To me there is no such application, given that experts are usually much larger than the non-expert layers, so if TP is enabled it will always be applied to the experts first.

To me, for TP-enabled cases, there are only two:

  1. expert TP, non-expert not TP
  2. both expert and non-expert TP
