Land-use characterisation using Google Street View pictures and OpenStreetMap

S. Srivastava, Sylvain Lobry, D. Tuia, John Vargas Munoz

Research output: Contribution to conference › Conference paper


This paper presents a study on the use of freely available, geo-referenced pictures from Google Street View to model and predict land use at the urban-object scale. This task is traditionally performed manually via photo-interpretation, which is very time consuming. We propose a machine learning approach based on deep learning that models land use directly from Google Street View pictures and OpenStreetMap annotations. Because both data sources are widely available, the proposed approach scales to cities around the globe and makes frequent updates of the map possible. As base information, we use features extracted from single pictures around the object of interest; these features come from pre-trained convolutional neural networks. We then train several classifiers (linear and RBF support vector machines, and a multi-layer perceptron) and compare their performances. We report on a study over the city of Paris, France, where we observed that pictures taken both inside and outside the urban objects capture distinct but complementary features.
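The pipeline described in the abstract (pre-trained CNN features per picture, followed by classical classifiers trained against OpenStreetMap land-use labels) can be sketched as follows. This is a minimal illustration, not the authors' exact setup: the random feature matrix and labels are placeholders for real CNN activations and OSM annotations, and the scikit-learn models are an assumed implementation choice.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder for CNN features of Street View pictures (e.g. activations
# from a pre-trained network); in the real pipeline these would come from
# a forward pass of each picture through the CNN.
n_pictures, feat_dim, n_classes = 300, 128, 4
X = rng.normal(size=(n_pictures, feat_dim))
# Placeholder for land-use labels derived from OpenStreetMap annotations.
y = rng.integers(0, n_classes, size=n_pictures)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# The three classifier families compared in the paper.
classifiers = {
    "linear SVM": SVC(kernel="linear"),
    "RBF SVM": SVC(kernel="rbf"),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
}

results = {}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    results[name] = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: {results[name]:.3f}")
```

On real CNN features the relative ranking of the three classifiers is what the study compares; with random placeholder data the accuracies are only near chance.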
Original language: English
Number of pages: 5
Publication status: Published - 2018
Event: 21st AGILE Conference on Geographic Information Science (2018) - Lund University, Lund, Sweden
Duration: 12 Jun 2018 - 15 Jun 2018



