An experiment to visualize how well your Teachable Machines model generalizes to other data
- demo: is it a Shiba Inu or a Maine Coon cat?
- demo: cat or dog?
- glitch: https://glitch.com/edit/#!/agreeable-pisces
- github: https://github.com/kevinrobinson/agreeable-pisces
Can we make a way for people to try out the image classification models they make in Teachable Machines and visualize how they perform on real world data?
How do we model for young people that evaluating accuracy and involving others in fairness questions is a core part of making things with machine learning?
This project loads a trained model, then gets new data from disk or by searching online. It embeds Facets Dive to visualize the results.
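As a rough sketch of the loading and prediction step, assuming a model exported in TensorFlow.js format and the current `@teachablemachine/image` library (the URLs and function names here are illustrative, not necessarily what this project uses):

```ts
import * as tmImage from '@teachablemachine/image';

// Hypothetical URLs for an exported Teachable Machines model (model.json + metadata.json).
const MODEL_URL = 'https://example.com/model/model.json';
const METADATA_URL = 'https://example.com/model/metadata.json';

// Load the trained classifier, then run it on an image pulled from disk or from an online search.
async function classifyImage(img: HTMLImageElement) {
  const model = await tmImage.load(MODEL_URL, METADATA_URL);
  // predict() returns one {className, probability} pair per class the model was trained on.
  const predictions = await model.predict(img);
  return predictions;
}
```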
If we could add more attributes (eg, by uploading a CSV), or if Teachable Machines supported bundling other labeled attributes, we could visualize those here as well. Some attributes could also be computed automatically, in ways related to what Teachable Machines itself is doing (eg, run plain MobileNet on each image and add those labels).
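A minimal sketch of that MobileNet idea, assuming the `@tensorflow-models/mobilenet` package (the helper name and record shape are illustrative, not the project's actual code):

```ts
import * as mobilenet from '@tensorflow-models/mobilenet';

// Tag each image with plain MobileNet's top ImageNet label, so it can be faceted on in Facets Dive.
async function addMobileNetLabels(images: HTMLImageElement[]) {
  const net = await mobilenet.load();
  return Promise.all(images.map(async (img) => {
    // classify() returns the top matches as [{className, probability}, ...].
    const [top] = await net.classify(img);
    return { img, mobilenetLabel: top.className, mobilenetConfidence: top.probability };
  }));
}
```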
This would enable more accessible tools for things like subgroup analysis, and would let more non-technical people visualize and understand the social aspects of fairness questions. We could also add counterfactuals or other parts of tools like the What-If Tool (paper), especially for generating counterfactual or adversarial images, or maybe pull out automated "prototypes" or "criticisms" (eg, Kim et al).
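For instance, a first pass at subgroup analysis could just group predictions by an attribute and compare accuracy; a sketch under an assumed record shape (not the project's actual data model):

```ts
// The record shape here is an assumption for illustration only.
interface LabeledPrediction {
  predictedClass: string;
  trueClass: string;
  subgroup: string; // eg, a MobileNet label or a column from an uploaded CSV
}

// Accuracy per subgroup, so gaps between groups are easy to spot and discuss.
function accuracyBySubgroup(records: LabeledPrediction[]): Map<string, number> {
  const byGroup = new Map<string, { correct: number; total: number }>();
  for (const r of records) {
    const stats = byGroup.get(r.subgroup) ?? { correct: 0, total: 0 };
    stats.total += 1;
    if (r.predictedClass === r.trueClass) stats.correct += 1;
    byGroup.set(r.subgroup, stats);
  }
  const result = new Map<string, number>();
  byGroup.forEach((s, group) => result.set(group, s.correct / s.total));
  return result;
}
```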
The core idea is to make these tools accessible to young people learning about AI, as a way to demonstrate that this is how the work gets done; you can't build AI without it.
(the way Facets Dive is included here only works on Chrome)
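Embedding Facets Dive generally comes down to setting the `data` property on the `<facets-dive>` element once the component is loaded; a minimal sketch, with made-up rows shaped like the classification results:

```ts
// Assumes the facets-dive web component (from PAIR's Facets) is already loaded on the page.
const dive = document.querySelector('facets-dive') as any;

// Each row becomes a point in the visualization; its fields become facetable attributes.
dive.data = [
  { image: 'shiba_001.jpg', predictedClass: 'shiba inu', confidence: 0.92 },
  { image: 'mainecoon_004.jpg', predictedClass: 'maine coon', confidence: 0.81 },
];
```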
Shiba Inus & Maine Coon cats, using the Oxford Pets dataset from Kaggle
- 2019072393705shibamaincoon10
- 20190723110514shibamainecoon200
- 20190724113746messinotmessi