
Supervised Contrastive Loss #518

Open
wants to merge 10 commits into staging

Conversation

waleeattia

Reference issue

#426

Type of change

Implementing supervised contrastive loss
Adding plotting script to compare accuracies and transfer efficiencies

What does this implement/fix?

The supervised contrastive loss explicitly trains the progressive learning network's transformer by penalizing samples of different classes whose representations lie close to one another. The new script enables two DNN algorithms to be compared by plotting the difference between their accuracies and transfer efficiencies. With supervised contrastive loss, accuracy improves by 6 percent over the PL network trained with categorical cross-entropy.
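For reference, the loss being added here can be sketched in a few lines. This is a generic NumPy illustration of the SupCon formulation (Khosla et al., 2020), not the code in this PR; the `temperature=0.1` default and the function name are arbitrary choices for the sketch.

```python
import numpy as np

def _logsumexp(x, axis):
    # numerically stable log-sum-exp; -inf entries contribute zero weight
    m = np.max(x, axis=axis, keepdims=True)
    return m + np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True))

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive (SupCon) loss over a batch of embeddings.

    Samples sharing a label are pulled together in the representation;
    all other samples in the batch act as negatives.
    """
    labels = np.asarray(labels)
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = (z @ z.T) / temperature             # pairwise cosine similarities
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    sim = np.where(self_mask, -np.inf, sim)   # exclude each anchor from its own denominator
    log_prob = sim - _logsumexp(sim, axis=1)  # log-softmax over all other samples
    positives = (labels[:, None] == labels[None, :]) & ~self_mask
    pos_counts = positives.sum(axis=1)
    # mean log-probability of each anchor's positives, averaged over anchors
    per_anchor = -np.where(positives, log_prob, 0.0).sum(axis=1) / np.maximum(pos_counts, 1)
    return per_anchor[pos_counts > 0].mean()
```

Anchors with no positive in the batch are skipped, matching the original formulation; the loss is small when same-class embeddings coincide and grows as different-class embeddings move closer together.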

Additional information

NDD 2021


codecov bot commented Dec 9, 2021

Codecov Report

Merging #518 (43b05f7) into staging (634d4d1) will not change coverage.
The diff coverage is n/a.


@@           Coverage Diff            @@
##           staging     #518   +/-   ##
========================================
  Coverage    90.09%   90.09%           
========================================
  Files            7        7           
  Lines          404      404           
========================================
  Hits           364      364           
  Misses          40       40           


@PSSF23 PSSF23 requested a review from jdey4 December 9, 2021 03:50
@jdey4 (Collaborator) commented Dec 10, 2021

@rflperry Does this PR help your query about contrastive loss?

@rflperry (Member)

Yeah, this seems to match my results here: the transfer ability goes down. I find that interesting, and I'm still a bit intrigued by the reason why. Is it really worth adding if it's always worse? I forget why I got several different results with different labels.

@rflperry (Member) commented Dec 11, 2021

My takeaways/summary:

  • Since the decider is k-Nearest Neighbors, we want the learned (penultimate) representation to place samples of the same class close together.
  • Contrastive loss learns representations in which same-class samples are close together, and this is validated by the higher accuracy we see from our kNN classifier. Softmax worked, but wasn't explicitly tuned to learn what we wanted (see the embedding results for various losses here). In a way, the best loss would be a function of the network and decider together.
  • One slightly odd thing is that the difference in accuracy is non-monotonic (i.e. it goes down, then up). Maybe this is just a result of not running enough simulations?
  • Despite the accuracy going up, the transfer efficiencies are slightly worse. I'm a bit fuzzy on the details of the transfer efficiency metric, but potentially the learned embeddings are not good for OOD performance (I believe this has been observed with various learned-embedding algorithms, such as t-SNE).
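The first point above can be illustrated with a toy sketch. This is not the ProgLearn pipeline, just a NumPy example showing that a kNN decider is nearly perfect when the embedding places same-class samples in tight, well-separated clusters; the cluster means, spread, and `k=5` are arbitrary assumptions.

```python
import numpy as np

def knn_predict(train_X, train_y, test_X, k=5):
    # plain Euclidean k-nearest-neighbour majority vote,
    # standing in for the progressive-learning decider
    dists = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    nearest = np.argsort(dists, axis=1)[:, :k]
    votes = train_y[nearest]
    return np.array([np.bincount(v).argmax() for v in votes])

rng = np.random.default_rng(0)
# a "good" embedding: the two classes form tight clusters far apart,
# so nearly every sample's neighbours share its label
X = np.vstack([rng.normal([0, 0], 0.3, (50, 2)),
               rng.normal([3, 3], 0.3, (50, 2))])
y = np.repeat([0, 1], 50)
accuracy = (knn_predict(X, y, X) == y).mean()
```

If the two clusters were instead made to overlap, the same decider's accuracy would drop, which is exactly why a loss that tightens same-class embeddings helps here.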

@PSSF23 (Member) left a comment


@waleeattia Put figure in correct folder & save in pdf format.

@waleeattia (Author)

@PSSF23 fixed!

@waleeattia waleeattia requested a review from PSSF23 December 13, 2021 17:14
@PSSF23 (Member) left a comment


Remove commented code & unnecessary prints. After these LGTM.

@waleeattia (Author)

@PSSF23 Perfect, just made those changes. Thank you!

@waleeattia waleeattia requested a review from PSSF23 December 20, 2021 03:32
@PSSF23 (Member) left a comment


Still some commented code remaining in benchmarks/cifar_exp/plot_compare_two_algos.py @waleeattia

@waleeattia (Author)

@PSSF23 Sorry I missed that; it should be good now.

@waleeattia waleeattia requested a review from PSSF23 December 20, 2021 03:51