Vision model overview metrics #1688
Conversation
Codecov Report
@@ Coverage Diff @@
## main #1688 +/- ##
=======================================
Coverage 88.91% 88.91%
=======================================
Files 105 105
Lines 5559 5559
=======================================
Hits 4943 4943
Misses 616 616
@jamesbchao it looks like the tabular data UI tests are failing: this PR appears to affect the tabular data case for Model Overview metrics, which is causing those tests to fail.
libs/interpret-vision/src/lib/VisionExplanationDashboard/Controls/CohortToolBar.tsx
Calculates Accuracy, Precision, Recall, and F1 scores for vision data in the Model Overview tab. Implements macro-averaged Precision, Recall, and F1 scores for multiclass classification scenarios. Compatible with other metadata, but depends on the Vision dashboard cohorts and Model assessment vision changes.
Update: micro metrics
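The macro/micro averaging described above can be sketched as follows. This is an illustrative standalone implementation, not the dashboard's actual code; `computeMetrics` and its shape are assumptions. Macro averaging computes per-class precision/recall/F1 and takes their unweighted mean, while micro averaging pools true/false positives and negatives across classes first (for single-label multiclass data, micro precision, recall, and F1 all equal accuracy).

```typescript
interface Metrics {
  accuracy: number;
  precision: number;
  recall: number;
  f1: number;
}

// Illustrative sketch of macro- vs micro-averaged multiclass metrics.
// yTrue and yPred are parallel arrays of class labels.
function computeMetrics(
  yTrue: number[],
  yPred: number[],
  average: "macro" | "micro"
): Metrics {
  const classes = Array.from(new Set([...yTrue, ...yPred]));
  let tp = 0, fp = 0, fn = 0;
  let precSum = 0, recSum = 0, f1Sum = 0;
  for (const c of classes) {
    const tpC = yTrue.filter((t, i) => t === c && yPred[i] === c).length;
    const fpC = yPred.filter((p, i) => p === c && yTrue[i] !== c).length;
    const fnC = yTrue.filter((t, i) => t === c && yPred[i] !== c).length;
    tp += tpC; fp += fpC; fn += fnC;
    const precC = tpC + fpC > 0 ? tpC / (tpC + fpC) : 0;
    const recC = tpC + fnC > 0 ? tpC / (tpC + fnC) : 0;
    precSum += precC;
    recSum += recC;
    // Macro F1 averages the per-class F1 scores (scikit-learn convention).
    f1Sum += precC + recC > 0 ? (2 * precC * recC) / (precC + recC) : 0;
  }
  const accuracy =
    yTrue.filter((t, i) => t === yPred[i]).length / yTrue.length;
  if (average === "macro") {
    return {
      accuracy,
      precision: precSum / classes.length,
      recall: recSum / classes.length,
      f1: f1Sum / classes.length,
    };
  }
  // Micro: pool counts across classes before dividing.
  const precision = tp / (tp + fp);
  const recall = tp / (tp + fn);
  const f1 =
    precision + recall > 0
      ? (2 * precision * recall) / (precision + recall)
      : 0;
  return { accuracy, precision, recall, f1 };
}
```

For example, with `yTrue = [0, 0, 1, 2]` and `yPred = [0, 1, 1, 2]`, micro precision, recall, and F1 all come out to 0.75 (equal to accuracy), whereas the macro averages weight the three classes equally regardless of their support.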