Contextual Semantic Interpretability

D. Marcos Gonzalez*, Ruth Fong, S. Lobry, Rémi Flamary, Nicolas Courty, D. Tuia

*Corresponding author for this work

Research output: Contribution to conference › Conference paper › Academic › peer-review


Convolutional neural networks (CNNs) are known to learn an image representation that captures concepts relevant to the task, but they do so in an implicit way that hampers model interpretability. However, one could argue that such a representation is hidden in the neurons and can be made explicit by teaching the model to recognize semantically interpretable attributes that are present in the scene. We call such an intermediate layer a semantic bottleneck. Once the attributes are learned, they can be re-combined to reach the final decision and provide both an accurate prediction and an explicit reasoning behind the CNN decision. In this paper, we look into semantic bottlenecks that capture context: we want attributes to be in groups of a few meaningful elements and to participate jointly in the final decision. We use a two-layer semantic bottleneck that gathers attributes into interpretable, sparse groups, allowing them to contribute differently to the final output depending on the context. We test our contextual semantic interpretable bottleneck (CSIB) on the task of landscape scenicness estimation and train the semantic interpretable bottleneck using an auxiliary database (SUN Attributes). Our model yields predictions as accurate as a non-interpretable baseline when applied to a real-world test set of Flickr images, all while providing clear and interpretable explanations for each prediction.
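The two-layer bottleneck described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the layer sizes, the hard-thresholding used to mimic group sparsity, and all variable names are hypothetical, chosen only to show the data flow from CNN features to attribute scores, to sparse group activations, to a final score whose per-group contributions are explicit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 512-d CNN feature vector, 102 attributes
# (the SUN Attributes database defines 102), and 5 groups.
n_features, n_attributes, n_groups = 512, 102, 5

# Layer 1 of the bottleneck: predict attribute presence from CNN features.
W_attr = rng.standard_normal((n_features, n_attributes)) * 0.01
features = rng.standard_normal(n_features)           # stand-in CNN features
attributes = 1 / (1 + np.exp(-(features @ W_attr)))  # sigmoid scores in [0, 1]

# Layer 2: sparse grouping — each group reads only a few attributes.
# Here sparsity is faked by zeroing small weights; the paper learns it.
W_group = rng.standard_normal((n_attributes, n_groups))
W_group[np.abs(W_group) < 1.0] = 0.0
group_activations = attributes @ W_group

# Final prediction: a linear combination of the group activations.
w_out = rng.standard_normal(n_groups)
scenicness = float(group_activations @ w_out)

# Interpretability: the score decomposes exactly into per-group
# contributions, so each group's role in the decision is explicit.
contributions = group_activations * w_out
```

The point of the decomposition is that `contributions` sums exactly to the predicted score, so an explanation can report which sparse attribute groups pushed the prediction up or down for a given image.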
Original language: English
Publication status: Published - 2020
Event: 15th Asian Conference on Computer Vision - Virtual, Kyoto, Japan
Duration: 30 Nov 2020 - 4 Dec 2020


Conference: 15th Asian Conference on Computer Vision
Abbreviated title: ACCV

