Hosted by Sean Welleck, Assistant Professor at CMU
Episode 14 | Been Kim
Interactive and Interpretable Machine Learning Models for Human Machine Collaboration
Papers and Links:
Been's Homepage
Thesis
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Sanity Checks for Saliency Maps
The (Un)reliability of Saliency Methods
Listen:
The Thesis Review