Conversation
Codecov Report ✅ All modified and coverable lines are covered by tests.

@@           Coverage Diff            @@
##           staging     #518   +/-   ##
========================================
  Coverage    90.09%   90.09%
========================================
  Files            7        7
  Lines          404      404
========================================
  Hits           364      364
  Misses          40       40

View full report in Codecov by Sentry.
@rflperry Does this PR help your query about contrastive loss?
Yeah, it seems to match my results here: the transfer ability goes down, which I find interesting, though I'm still a bit intrigued by the reason why. Is it really worth adding if it's just always worse? I forget why I had multiple different results with different labels.
My takeaways/summary:
PSSF23 left a comment:
@waleeattia Put the figure in the correct folder & save it in PDF format.
@PSSF23 fixed!
PSSF23 left a comment:
Remove the commented code & unnecessary prints. After these changes, LGTM.
@PSSF23 Perfect, just made those changes. Thank you!
Still some commented code remaining in benchmarks/cifar_exp/plot_compare_two_algos.py @waleeattia
@PSSF23 Sorry I missed that; it should be good now.
Reference issue
#426
Type of change
Implementing supervised contrastive loss
Adding a plotting script to compare accuracies and transfer efficiencies
What does this implement/fix?
Supervised contrastive loss explicitly trains the progressive learning network's transformer by penalizing samples of different classes that lie close to one another in the learned representation. The new script enables two DNN algorithms to be compared by plotting the difference between their accuracies and transfer efficiencies. The accuracy of the supervised contrastive loss version improves by 6 percent compared to the PL network trained with categorical cross-entropy.
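For reference, below is a minimal sketch of a supervised contrastive loss in the style of Khosla et al. (2020), written with standard TensorFlow ops. The function name, the temperature default, and the assumption that the transformer's embeddings are L2-normalized are illustrative choices, not taken from this PR's code.

```python
import tensorflow as tf


def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Sketch of a supervised contrastive loss (Khosla et al., 2020).

    features: (batch, dim) embeddings produced by the transformer.
    labels:   (batch,) integer class labels.
    """
    features = tf.math.l2_normalize(features, axis=1)
    labels = tf.reshape(labels, (-1, 1))
    batch_size = tf.shape(features)[0]

    # Pairwise cosine similarities scaled by the temperature.
    logits = tf.matmul(features, features, transpose_b=True) / temperature
    # Subtract the per-row max for numerical stability (cancels in the softmax).
    logits -= tf.reduce_max(logits, axis=1, keepdims=True)

    # Positive pairs: same label, excluding each sample's comparison with itself.
    same_class = tf.cast(tf.equal(labels, tf.transpose(labels)), tf.float32)
    not_self = 1.0 - tf.eye(batch_size)
    positives = same_class * not_self

    # Log-softmax over every other sample in the batch. Different-class samples
    # that sit close to the anchor inflate the denominator, which is how they
    # end up being penalized.
    exp_logits = tf.exp(logits) * not_self
    log_prob = logits - tf.math.log(
        tf.reduce_sum(exp_logits, axis=1, keepdims=True)
    )

    # Average the log-probability over each anchor's positives.
    pos_count = tf.maximum(tf.reduce_sum(positives, axis=1), 1.0)
    mean_log_prob_pos = tf.reduce_sum(positives * log_prob, axis=1) / pos_count

    return -tf.reduce_mean(mean_log_prob_pos)
```

In a Keras training loop this would typically be minimized on the transformer's embedding output in place of categorical cross-entropy, with the downstream classifier then trained on the learned representation.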
Additional information
NDD 2021