Land-use characterisation using Google Street View pictures and OpenStreetMap

S. Srivastava, Sylvain Lobry, D. Tuia, John Vargas Munoz

Research output: Contribution to conference › Paper › Academic

Abstract

This paper presents a study on the use of freely available, geo-referenced pictures from Google Street View to model and predict land use at the scale of individual urban objects. This task is traditionally done manually, via photointerpretation, which is very time consuming. We propose a machine learning approach based on deep learning that models land use directly from the pictures available from Google Street View and from OpenStreetMap annotations. Because both data sources are widely available, the proposed approach scales to cities around the globe and opens the possibility of frequent map updates. As base information, we use features extracted from single pictures around the object of interest by pre-trained convolutional neural networks. We then train various classifiers (linear and RBF support vector machines, a multi-layer perceptron) and compare their performances. We report on a study over the city of Paris, France, where we observed that pictures taken from inside and outside the urban objects capture distinct but complementary features.
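The classification stage described in the abstract — fixed-length CNN features fed to several classifiers whose accuracies are compared — can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic 512-dimensional vectors stand in for pre-trained CNN features of street-level pictures, and the four classes stand in for OpenStreetMap land-use labels; all hyperparameter values are placeholder assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC, SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Stand-in for CNN features of Street View pictures, labelled with
# OpenStreetMap land-use classes (synthetic data for illustration).
X, y = make_classification(n_samples=600, n_features=512, n_informative=40,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Standardise features, as is common before SVM/MLP training.
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# The three classifier families compared in the paper.
classifiers = {
    "linear SVM": LinearSVC(C=1.0, max_iter=5000),
    "RBF SVM": SVC(kernel="rbf", C=10.0, gamma="scale"),
    "MLP": MLPClassifier(hidden_layer_sizes=(256,), max_iter=500, random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(f"{name}: test accuracy = {acc:.3f}")
```

In a real run, the feature matrix would instead come from forwarding each picture through a pre-trained network (e.g., an ImageNet CNN) and keeping the penultimate-layer activations.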

Conference

Conference: 21st AGILE Conference on Geographic Information Science (2018)
Country: Sweden
City: Lund
Period: 12/06/18 - 15/06/18

Fingerprint

Land use
Photointerpretation
Information use
Multilayer neural networks
Support vector machines
Learning systems
Classifiers
Availability
Neural networks
Deep learning

Cite this

Srivastava, S., Lobry, S., Tuia, D., & Vargas Munoz, J. (2018). Land-use characterisation using Google Street View pictures and OpenStreetMap. Paper presented at 21st AGILE Conference on Geographic Information Science (2018), Lund, Sweden.