Benchmarking KD #105

Open · 16 tasks · 0 comments
Labels: good first issue · help wanted · Priority: High

avishreekh (Collaborator) commented on May 7, 2021

We need to benchmark the following algorithms on three datasets (MNIST, CIFAR10, CIFAR100), so that we can be confident our implementations are reasonably accurate across datasets.

We also need to ensure that distillation works with a variety of student networks. @Het-Shah has suggested that we report results with ResNet18, MobileNet v2 and ShuffleNet v2 as student networks, and ResNet50 as the teacher network for all distillations. A minimal sketch of one such benchmark run is included after the task list below.

  • VanillaKD
  • TAKD
  • Noisy Teacher
  • Attention
  • BANN
  • Bert2lstm
  • RCO
  • Messy Collab
  • Soft Random
  • CSKD
  • DML
  • Self-training
  • Virtual Teacher
  • RKD Loss
  • KA/ProbShift
  • KA/LabelSmoothReg

If you wish to work on any of the above algorithms, just mention them in the discussion below.
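For reference, here is a minimal sketch of what a single benchmark run could look like, assuming KD-Lib's VanillaKD interface (a distiller built from teacher/student models, train/test loaders and optimizers, with train_teacher / train_student / evaluate methods). The CIFAR10 loaders, normalization statistics, epoch counts and learning rates below are placeholders, not the final benchmarking configuration; the other algorithms in the list would substitute their own distiller class while keeping the same models and loaders so results stay comparable.

```python
import torch.optim as optim
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

from KD_Lib.KD import VanillaKD  # assumed import path for the VanillaKD distiller

# CIFAR10 loaders (MNIST and CIFAR100 would follow the same pattern)
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
train_loader = DataLoader(
    datasets.CIFAR10("./data", train=True, download=True, transform=transform),
    batch_size=128, shuffle=True)
test_loader = DataLoader(
    datasets.CIFAR10("./data", train=False, download=True, transform=transform),
    batch_size=128)

# ResNet50 teacher with one of the suggested students (ResNet18 here);
# MobileNet v2 / ShuffleNet v2 students can be swapped in the same way.
teacher = models.resnet50(num_classes=10)
student = models.resnet18(num_classes=10)

teacher_optimizer = optim.SGD(teacher.parameters(), lr=0.01, momentum=0.9)
student_optimizer = optim.SGD(student.parameters(), lr=0.01, momentum=0.9)

# Distill and report teacher/student accuracy for the benchmark table.
distiller = VanillaKD(teacher, student, train_loader, test_loader,
                      teacher_optimizer, student_optimizer)
distiller.train_teacher(epochs=10)   # epoch counts are placeholders
distiller.train_student(epochs=10)
distiller.evaluate()
```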
