A human-assisted approach for a mobile robot to learn 3D object models using active vision

Matthijs Zwinderman*, Paul E. Rybski, Gert Kootstra

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › Academic › peer-review

5 Citations (Scopus)

Abstract

In this paper we present an algorithm that allows a human to naturally and easily teach a mobile robot to recognize objects in its environment. The human selects an object by pointing at it with a laser pointer. The robot detects the laser reflections with its cameras and uses them to generate an initial 2D segmentation of the object. The 3D positions of SURF feature points are then extracted from the designated region using stereo vision. As the robot moves around the object, it obtains new views from which additional feature points are extracted; these features are filtered using active vision. The complete object representation consists of feature points registered with 3D pose data. We describe the method and demonstrate that it works well through experiments on real-world data collected with our robot, using an extensive dataset of 21 objects differing in size, shape, and texture.
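The pipeline sketched in the abstract (laser-spot detection, then SURF extraction with stereo triangulation) can be illustrated in code. The following Python/OpenCV sketch is our own illustration of those two steps, not the authors' implementation: the function names, the red-channel threshold heuristic, and the Hessian threshold of 400 are assumptions, and it presumes a rectified stereo pair so that matched keypoints lie on (nearly) the same image row. SURF is available via cv2.xfeatures2d in opencv-contrib builds with the nonfree modules enabled.

import cv2

def detect_laser_spot(image_bgr, min_brightness=240):
    # Hypothetical stand-in for the laser-dot detector: find the centroid of
    # saturated red pixels. Assumes a red laser pointer; the abstract does not
    # specify the actual detection method.
    red = image_bgr[:, :, 2]
    _, mask = cv2.threshold(red, min_brightness, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None  # no bright spot found
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def surf_points_3d(left_bgr, right_bgr, focal_px, baseline_m, hessian=400):
    # Extract SURF keypoints in the left image and recover their 3D positions
    # by matching against the right image and triangulating from disparity.
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)

    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    kp_l, des_l = surf.detectAndCompute(left, None)
    kp_r, des_r = surf.detectAndCompute(right, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(des_l, des_r)

    cx, cy = left.shape[1] / 2.0, left.shape[0] / 2.0
    points = []
    for m in matches:
        xl, yl = kp_l[m.queryIdx].pt
        xr, yr = kp_r[m.trainIdx].pt
        if abs(yl - yr) > 2.0:  # approximate epipolar check on a rectified pair
            continue
        disparity = xl - xr
        if disparity <= 0:      # zero or negative disparity: discard
            continue
        z = focal_px * baseline_m / disparity  # depth from stereo geometry
        x = (xl - cx) * z / focal_px
        y = (yl - cy) * z / focal_px
        points.append(((x, y, z), des_l[m.queryIdx]))
    return points

In the paper's setting, the laser-designated segmentation would bound where keypoints are accepted, and the resulting 3D points would be registered with the robot's pose as it circles the object; that active-vision filtering and registration step is omitted from this sketch.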

Original language: English
Title of host publication: 19th International Symposium in Robot and Human Interactive Communication, RO-MAN 2010
Publisher: IEEE
Pages: 397-403
Number of pages: 7
ISBN (Print): 9781424479917
DOIs
Publication status: Published - 13 Dec 2010
Externally published: Yes
Event: 19th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2010 - Viareggio, Italy
Duration: 12 Sept 2010 - 15 Sept 2010

Publication series

Name: Proceedings - IEEE International Workshop on Robot and Human Interactive Communication

Conference

Conference: 19th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2010
Country/Territory: Italy
City: Viareggio
Period: 12/09/10 - 15/09/10
