This paper presents a study on the use of freely available, geo-referenced pictures from Google Street View to model and predict land use at the scale of urban objects. This task is traditionally performed manually via photo-interpretation, which is very time consuming. We propose a machine learning approach based on deep learning that models land use directly from Google Street View pictures and OpenStreetMap annotations. Because both data sources are widely available, the proposed approach scales to cities around the globe and allows frequent updates of the map. As base information, we use features extracted from single pictures around the object of interest; these features are produced by pre-trained convolutional neural networks. We then train several classifiers (linear and RBF support vector machines, a multilayer perceptron) and compare their performance. We report on a study over the city of Paris, France, where we observed that pictures taken from both inside and outside the urban objects capture distinct but complementary features.
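The classification stage described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the CNN descriptors are simulated here with synthetic data (the paper extracts them from pre-trained networks), and all dataset sizes, dimensions, and hyperparameters are assumptions chosen for the sketch.

```python
# Hypothetical sketch: compare the three classifier families named in the
# abstract (linear SVM, RBF SVM, multilayer perceptron) on image features.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, LinearSVC

# Stand-in for CNN features of pictures around each urban object:
# 500 objects, 128-dimensional descriptors, 4 land-use classes (all assumed).
X, y = make_classification(n_samples=500, n_features=128, n_informative=32,
                           n_classes=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

classifiers = {
    "linear SVM": make_pipeline(StandardScaler(),
                                LinearSVC(max_iter=5000, random_state=0)),
    "RBF SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "MLP": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(64,),
                                       max_iter=500, random_state=0)),
}
for name, clf in classifiers.items():
    clf.fit(X_train, y_train)
    print(f"{name}: test accuracy {clf.score(X_test, y_test):.2f}")
```

In practice the feature vectors would come from an intermediate layer of a pre-trained CNN applied to the Street View pictures, with one or several pictures per urban object.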
|Number of pages||5|
|Publication status||Published - 2018|
|Event||21st AGILE Conference on Geographic Information Science (2018), Lund University, Lund, Sweden, 12 Jun 2018 – 15 Jun 2018|