Deep multi-task learning for a geographically-regularized semantic segmentation of aerial images

Michele Volpi*, Devis Tuia

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

40 Citations (Scopus)


When approaching the semantic segmentation of overhead imagery in the decimeter spatial resolution range, successful strategies usually combine powerful methods to learn the visual appearance of the semantic classes (e.g. convolutional neural networks) with strategies for spatial regularization (e.g. graphical models such as conditional random fields). In this paper, we propose a method to learn evidence in the form of semantic class likelihoods, semantic boundaries across classes and shallow-to-deep visual features, each one modeled by a multi-task convolutional neural network architecture. We combine this bottom-up information with top-down spatial regularization encoded by a conditional random field model optimizing the label space across a hierarchy of segments, with constraints related to structural, spatial and data-dependent pairwise relationships between regions. Our results show that such a strategy provides better regularization than a series of strong baselines reflecting state-of-the-art technologies. The proposed strategy offers a flexible and principled framework to include several sources of visual and structural information, while allowing for different degrees of spatial regularization accounting for priors about the expected output structures.
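As a rough illustration of the top-down regularization idea (not the authors' implementation), the energy minimized by a CRF over segments can be sketched as unary costs from a classifier plus a Potts penalty on adjacent segments with different labels. All names, costs and weights below are illustrative assumptions:

```python
def crf_energy(labels, unaries, edges, pairwise_weight=1.0):
    """Total energy = unary costs + Potts penalty on disagreeing neighbors.

    labels:   list of class indices, one per segment
    unaries:  unaries[i][c] = cost of assigning class c to segment i
              (e.g. a negative log-likelihood from a CNN)
    edges:    list of (i, j) pairs of spatially adjacent segments
    """
    unary = sum(unaries[i][labels[i]] for i in range(len(labels)))
    pairwise = sum(pairwise_weight for (i, j) in edges if labels[i] != labels[j])
    return unary + pairwise

# Toy example: 3 segments, 2 classes; segments 0-1 and 1-2 are adjacent.
unaries = [[0.1, 2.0],   # segment 0 strongly prefers class 0
           [1.0, 1.1],   # segment 1 is ambiguous
           [2.0, 0.1]]   # segment 2 strongly prefers class 1
edges = [(0, 1), (1, 2)]

# Pairwise regularization makes the labeling that resolves the ambiguous
# segment consistently with a confident neighbor cheaper overall.
e_a = crf_energy([0, 0, 1], unaries, edges)  # one disagreeing edge (1-2)
e_b = crf_energy([0, 1, 1], unaries, edges)  # one disagreeing edge (0-1)
```

In the paper this idea is extended across a hierarchy of segments with structural, spatial and data-dependent pairwise terms, rather than a flat Potts model as sketched here.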
Original language: English
Pages (from-to): 48-60
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Publication status: Published - Oct 2018


