A deep learning framework for matching of SAR and optical imagery

Lloyd Haydn Hughes, Diego Marcos, Sylvain Lobry, Devis Tuia, Michael Schmitt

Research output: Contribution to journal · Article · Academic · Peer-reviewed

Abstract

SAR and optical imagery provide highly complementary information about observed scenes. A combined use of these two modalities is thus desirable in many data fusion scenarios. However, any data fusion task requires measurements to be accurately aligned. While both data sources are usually provided in a georeferenced manner, the geo-localization of optical images is often inaccurate due to the propagation of angular measurement errors. Many methods for matching homologous image regions exist for both SAR and optical imagery; however, these methods are unsuitable for SAR-optical image matching due to significant geometric and radiometric differences between the two modalities. In this paper, we present a three-step framework for sparse image matching of SAR and optical imagery, whereby each step is encoded by a deep neural network. We first predict regions in each image which are deemed most suitable for matching. A correspondence heatmap is then generated through a multi-scale, feature-space cross-correlation operator. Finally, outliers are removed by classifying the correspondence surface as a positive or negative match. Our experiments show that the proposed approach provides a substantial improvement over previous methods for SAR-optical image matching and can be used to register even large-scale scenes. This opens up the possibility of using both types of data jointly, for example for the improvement of the geo-localization of optical satellite imagery or multi-sensor stereogrammetry.
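The core of the second step, a correspondence heatmap obtained by cross-correlating learned features, can be illustrated with a minimal sketch. This is not the authors' implementation: the single-scale normalized cross-correlation below, the function name `correlation_heatmap`, and the toy feature maps are all illustrative assumptions, standing in for the multi-scale, network-learned features used in the paper.

```python
import numpy as np

def correlation_heatmap(search_feats, template_feats):
    """Slide a template feature patch over a larger search feature map and
    return a normalized cross-correlation heatmap (values in [-1, 1]).

    search_feats:   (H, W, C) feature map of the search region.
    template_feats: (h, w, C) feature patch to localize, h <= H, w <= W.
    """
    H, W, C = search_feats.shape
    h, w, _ = template_feats.shape
    t = template_feats - template_feats.mean()   # zero-mean template
    t_norm = np.linalg.norm(t)
    heat = np.zeros((H - h + 1, W - w + 1))
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            s = search_feats[i:i + h, j:j + w, :]
            s = s - s.mean()                     # zero-mean window
            denom = np.linalg.norm(s) * t_norm
            heat[i, j] = (s * t).sum() / denom if denom > 0 else 0.0
    return heat

# Toy example: plant the template at a known offset and recover it.
rng = np.random.default_rng(0)
search = rng.standard_normal((16, 16, 8))
template = search[5:9, 3:7, :].copy()            # true offset (5, 3)
heat = correlation_heatmap(search, template)
peak = np.unravel_index(np.argmax(heat), heat.shape)
# peak recovers the planted offset (5, 3); heat[peak] is 1.0 there
```

In the paper this correlation is computed in the feature space of a deep network and at multiple scales, which is what makes it robust to the radiometric differences between SAR and optical data; the arithmetic of the heatmap itself, however, is the same sliding normalized dot product shown above.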
Original language: English
Pages (from-to): 166-179
Journal: ISPRS Journal of Photogrammetry and Remote Sensing
Volume: 169
DOIs
Publication status: Published - 1 Nov 2020

