
How to speed up the loss computation in DTSH #31

Open
ssqiao opened this issue Aug 2, 2022 · 1 comment
Comments

@ssqiao
ssqiao commented Aug 2, 2022

Hi, swuxyj. Nice work for the community.
I noticed that the training loss of DTSH contains a for loop, which is somewhat time-consuming. Is there any chance to speed up this op? It seems the for loop could be parallelized.

@swuxyj
Owner

swuxyj commented Aug 29, 2022

> Hi, swuxyj. Nice work for the community. I noticed that the training loss of DTSH contains a for loop, which is somewhat time-consuming. Is there any chance to speed up this op? It seems the for loop could be parallelized.

You can refer to other people's implementations; the current version is already the best way I can think of.
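
For reference, below is a minimal sketch of how the per-row loop in a DTSH-style triplet log-likelihood loss could be vectorized, assuming DeepHash-pytorch-style inputs (`u` = network outputs of shape `[B, bits]`, `y` = float one-hot labels of shape `[B, C]`). The hyperparameter names `alpha` and `lam` are placeholders, and this is not necessarily the repo's exact formulation: it averages over all valid (anchor, positive, negative) triplets rather than taking a per-row mean, and it materializes a `[B, B, B]` tensor, so it trades memory for parallelism and is not automatically faster for large batches.

```python
import torch

def dtsh_loss_vectorized(u, y, alpha=5.0, lam=0.1):
    """Vectorized sketch of a DTSH-style triplet loss (hypothetical helper).

    u: [B, bits] real-valued network outputs.
    y: [B, C] one-hot label matrix (float).
    """
    inner = u @ u.t()                      # [B, B] pairwise inner products theta_ij
    s = (y @ y.t() > 0)                    # [B, B] similarity matrix (same label -> True)

    # Triplet mask: anchor i, positive j (s[i, j] = True), negative k (s[i, k] = False).
    pos = s.unsqueeze(2)                   # [B, B, 1]
    neg = (~s).unsqueeze(1)                # [B, 1, B]
    mask = (pos & neg).float()             # [B, B, B], 1 where the triplet is valid

    # theta_ij - theta_ik - alpha for every triplet, clamped for numerical stability.
    triple = (inner.unsqueeze(2) - inner.unsqueeze(1) - alpha).clamp(min=-100, max=50)

    # Negative log-likelihood term, averaged over valid triplets only.
    per_triplet = -(triple - torch.log1p(torch.exp(triple)))
    n_valid = mask.sum().clamp(min=1.0)
    loss1 = (per_triplet * mask).sum() / n_valid

    # Quantization regularizer pushing outputs toward {-1, +1}.
    loss2 = lam * (u - u.sign()).pow(2).mean()
    return loss1 + loss2
```

Whether this beats the loop depends on batch size: the mask-based version keeps all work on the GPU in a few kernels, but the `[B, B, B]` intermediate grows cubically, so for large batches the existing row-by-row loop may still be preferable.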
