Propose accuracy functions #2181
base: master
Conversation
Codecov Report
Attention: Patch coverage is
Additional details and impacted files:

@@            Coverage Diff             @@
##           master    #2181      +/-   ##
==========================================
- Coverage   86.02%   83.15%    -2.88%
==========================================
  Files          19       20       +1
  Lines        1460     1460
==========================================
- Hits         1256     1214      -42
- Misses        204      246      +42

☔ View full report in Codecov by Sentry.
I'm not sure we want to add accuracy metrics in Flux, but if we do they should take the signature
I think we should add them somewhere officially supported, since we've seen a lot of people end up rolling their own less-than-optimal versions. The question is where? Do we have the capacity to revive https://github.com/JuliaML/MLMetrics.jl?
We should think more about the right signatures. Matching the categorical functions in
If the first class is expanded to take labels instead (like #2141) then
If all loss functions accept the model as the 1st argument (#2090) then perhaps you want
Matching the input of
Those seem the obvious reference points if this lives within Flux. If it lives in OneHotArrays, then perhaps the
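For context, here is a rough sketch of the three signature styles being compared above. The names `accuracy1`, `accuracy2`, `accuracy3` and the exact argument orders are illustrative assumptions, not anything decided in this thread:

```julia
using Statistics: mean
using Flux: onecold  # onecold maps one-hot / score columns back to labels

# Style 1 -- match the Flux.Losses signature: compare raw model output ŷ
# with a one-hot (or probability) target matrix y.
accuracy1(ŷ::AbstractMatrix, y::AbstractMatrix) = mean(onecold(ŷ) .== onecold(y))

# Style 2 -- targets given as plain labels (cf. #2141): pass the label set so
# the argmax of each score column can be translated back into a label.
accuracy2(ŷ::AbstractMatrix, y::AbstractVector, labels) =
    mean(onecold(ŷ, labels) .== y)

# Style 3 -- model as 1st argument (cf. #2090): the metric runs the model itself.
accuracy3(model, x, y) = accuracy1(model(x), y)
```

Style 1 keeps metrics interchangeable with loss functions, while style 3 pairs naturally with a training loop that already holds the model and the data.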
I use
If https://github.com/JuliaML/MLMetrics.jl is enough, applying it in docs / tutorials is also a good choice, which can also help new users reduce the work of testing models.
PR Checklist
About #2171 (accuracy function)
I simply define 3 accuracy functions for multi-class and multi-label problems.
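The PR diff itself is not reproduced here, so the following is only a minimal sketch of what three such functions could look like; the names `accuracy` and `multilabel_accuracy`, the `(ŷ, y)` argument order, and the `thresh` keyword are assumptions rather than the PR's actual code:

```julia
using Statistics: mean
using Flux: onecold

# 1. Multi-class accuracy, targets given as integer class indices:
#    each column of ŷ is a score vector over classes.
accuracy(ŷ::AbstractMatrix, y::AbstractVector{<:Integer}) = mean(onecold(ŷ) .== y)

# 2. Multi-class accuracy, targets given as one-hot (or probability) columns.
accuracy(ŷ::AbstractMatrix, y::AbstractMatrix) = mean(onecold(ŷ) .== onecold(y))

# 3. Multi-label accuracy: ŷ holds independent per-label scores and y is a
#    0/1 matrix; threshold each score and compare element-wise.
multilabel_accuracy(ŷ::AbstractMatrix, y::AbstractMatrix; thresh = 0.5) =
    mean((ŷ .> thresh) .== (y .> 0.5))
```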