
blockformer #1504

Open · LeonWlw wants to merge 7 commits into main
Conversation

@LeonWlw commented Oct 19, 2022

This PR implements Blockformer in WeNet.
(Original paper: https://arxiv.org/abs/2207.11697)

  • Implementation details (a minimal sketch of the SE-layer ensemble follows the list)
    • add an SE layer to ensemble the Conformer encoder block outputs
    • add an SE layer to ensemble the Transformer decoder block outputs
    • use relative positional encoding in the decoder
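
For readers who want the shape of the SE-layer ensemble, here is a minimal sketch over stacked block outputs, in the spirit of the paper; the class name, reduction factor, and tensor layout are illustrative assumptions, not the PR's actual code:

import torch
import torch.nn as nn

class SEBlockEnsemble(nn.Module):
    # Weights the N block outputs with squeeze-and-excitation and sums them.
    # Input x: (batch, num_blocks, T, D); output: (batch, T, D).
    def __init__(self, num_blocks: int, reduction: int = 2):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)  # squeeze T and D to 1x1
        self.fc = nn.Sequential(
            nn.Linear(num_blocks, num_blocks // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(num_blocks // reduction, num_blocks),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, _, _ = x.size()
        w = self.fc(self.avg_pool(x).view(b, n))  # per-block weights, (batch, N)
        return (x * w.view(b, n, 1, 1)).sum(dim=1)  # weighted sum of block outputs

On the encoder side, each Conformer block's output would be collected and stacked along dim 1 before this module is applied; the decoder side would do the same with its Transformer blocks.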

In the main branch, extracting features with torchaudio gives slightly worse results than the paper reports. I will push a branch that uses Kaldi features for the AIShell recipe, which can reproduce the paper's results.
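
For reference, a Kaldi-compatible fbank can be computed through torchaudio.compliance.kaldi; this is a sketch with placeholder values, not the recipe's actual settings:

import torchaudio
import torchaudio.compliance.kaldi as kaldi

waveform, sample_rate = torchaudio.load("utt.wav")  # placeholder path
waveform = waveform * (1 << 15)  # scale to the 16-bit range the Kaldi frontend expects
feats = kaldi.fbank(waveform,
                    num_mel_bins=80,     # placeholder parameters,
                    frame_length=25.0,   # not the recipe's settings
                    frame_shift=10.0,
                    dither=0.0,
                    sample_frequency=sample_rate)  # -> (num_frames, num_mel_bins)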

@robin1001 (Collaborator)

I think it's better if we add the experiment results on AIShell-1 and LibriSpeech, to show that we can get consistent and solid gains by using the model.


def forward(self, x: torch.Tensor) -> torch.Tensor:
    # x: (batch, num_blocks, T, D); pool T and D down to one value per block
    b, c, _, _ = x.size()
    y = self.avg_pool(x).view(b, c)
Contributor:

Shouldn't the avg_pool over the T and D dims take pad_mask into account?


Thanks for the reminder; we will add pad_mask to the code and retrain.
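
For context, a pad_mask-aware average pooling along these lines might look as follows; this is a sketch under the assumption that x is (batch, channels, T, D) and pad_mask is (batch, T) with True at valid frames, not the code that ended up in the PR:

import torch

def masked_avg_pool(x: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
    # Average over T and D, counting only non-padded frames.
    # x: (batch, channels, T, D); pad_mask: (batch, T), True = valid frame.
    mask = pad_mask[:, None, :, None].to(x.dtype)  # (batch, 1, T, 1)
    total = (x * mask).sum(dim=(2, 3))             # (batch, channels)
    count = mask.sum(dim=(2, 3)) * x.size(3)       # valid frames * D, (batch, 1)
    return total / count.clamp(min=1.0)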

@LeonWlw (Author) commented Oct 19, 2022

> I think it's better if we add the experiment results on AIShell-1 and LibriSpeech, to show that we can get consistent and solid gains by using the model.

@robin1001 Results on AIShell have been added.

@903859154

Hi, I ran Blockformer on a 3080 and it only used 30-40% of the GPU. I increased the batch size and num_workers, but that didn't help. What should I do to make better use of the GPU?
