---
layout: page
title: Interactive Scalable Interfaces for Machine Learning Interpretability
permalink: talk/
---

## Abstract

Data-driven machine learning now solves some of the world's hardest problems by learning from data. Unfortunately, what is learned is often unknown both to the people who train models and to the people those models impact. My research addresses this challenge by enabling machine learning interpretability at scale and for everyone, through designing and developing interactive interfaces that help people confidently understand data-driven systems.

(1) Operationalizing interpretability: My Gamut and TeleGam systems operationalize interpretability through design probes that investigate the emerging practice of interpretability. Gamut has been deployed at Microsoft Research and demoed for executive leadership.

(2) Scaling up interpretability: My Summit system scales interpretability to large-scale neural networks and datasets, for example ImageNet with 1.3M+ images, by summarizing and visualizing what features a deep learning model has learned and how those features interact to make predictions.

(3) Communicating interpretability: Through the new medium of interactive articles, my work accelerates research dissemination and broadens access to education about modern AI technologies. With the Parametric Press, a new interactive publishing platform I co-launched, my work has helped more than 250,000 people learn about machine learning's capabilities and modern applications.

My interdisciplinary research contributes to human-computer interaction, machine learning, and, more importantly, their intersection. By collaborating closely with researchers, designers, and practitioners, my research makes an impact in academia, industry, government, and society through open-source interactive systems, scalable algorithms, and widely accessible communication artifacts.

## Slides

- PDF: [Dropbox, low quality (50MB)][talk-low-db]
- PDF: [Dropbox, high quality (200MB)][talk-high-db]
- Movie export with animations + demo videos:

<iframe width="560" height="315" src="https://www.youtube.com/embed/UfJoqQGXIGc" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>

## Materials

- Research Statement (PDF): [fredhohman.com/research-statement.pdf][statement]
- CV (Web): [fredhohman.com/cv][cv]
- CV (PDF): [fredhohman.com/cv.pdf][cv-pdf]

## Bio

Fred Hohman is a PhD candidate at Georgia Tech's College of Computing.

His research focuses on enabling machine learning interpretability at scale and for everyone by designing and developing interactive interfaces that help people confidently understand data-driven systems. Besides building tools, he creates data visualizations and writes interactive articles that communicate complex ideas simply.

He has collaborated with designers, developers, and scientists at Apple, Microsoft Research, NASA JPL Human Interfaces, and Pacific Northwest National Lab. He won a NASA Space Technology Research Fellowship, a Microsoft AI for Earth Award for using AI to improve sustainability, and the President's Fellowship for top incoming PhD students. He has also won an ACM CHI 2019 Best Paper Award; a KDD 2018 Audience Appreciation Award, runner-up; an IEEE VIS VISxAI Best Paper, honorable mention; and a SIGMOD 2017 Best Demo, honorable mention. His work has appeared in the popular press, including the Stack Overflow Blog, Fast Company, and Data Stories. He co-organizes the Workshop on Visualization for AI Explainability (VISxAI) at IEEE VIS. He double majored in mathematics and physics.

[talk-low]: {{ site.url }}/talk-low-quality.pdf
[talk-high]: {{ site.url }}/talk-high-quality.pdf
[talk-low-db]: https://www.dropbox.com/s/q67wq3eedr88yey/talk-low-quality.pdf?dl=0
[talk-high-db]: https://www.dropbox.com/s/crmfc8gusn9rtt4/talk-high-quality.pdf?dl=0
[cv]: https://fredhohman.com/cv
[cv-pdf]: https://fredhohman.com/cv.pdf
[statement]: {{ site.url }}/research-statement.pdf