Improving AI models’ ability to explain their predictions | MIT News

In high-stakes settings such as medical diagnostics, users often want to know what prompted a computer vision model to make a particular prediction, so they can decide whether to trust its output. Concept bottleneck models are one way…
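A concept bottleneck model makes its final prediction only from a small set of human-interpretable concept scores, so a user can inspect which concepts drove the output. The following is a minimal illustrative sketch, not the researchers' implementation: the weights, dimensions, and feature vector are all made up for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Stage 1: image features -> concept scores (weights are illustrative,
# not trained; a real model would learn them from concept labels).
W_concepts = rng.normal(size=(4, 8))   # 8 input features -> 4 concepts
# Stage 2: concept scores -> class logits, using the concepts ONLY.
W_label = rng.normal(size=(3, 4))      # 4 concepts -> 3 classes

def predict(x):
    concepts = sigmoid(W_concepts @ x)  # the interpretable "bottleneck"
    logits = W_label @ concepts         # prediction depends only on concepts
    return concepts, logits

x = rng.normal(size=8)                  # stand-in for extracted image features
concepts, logits = predict(x)
print("concept activations:", np.round(concepts, 2))
print("predicted class:", int(np.argmax(logits)))
```

Because the final layer sees only the concept activations, a user (or a clinician, in the diagnostic setting the article describes) can check whether the concepts the model relied on make sense before trusting the prediction.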