Learning Visual Forward Models to Compensate for Self-Induced Image Motion.

A. Ghadirzadeh, G.W. Kootstra, A. Maki, M. Björkman

Research output: Chapter in Book/Report/Conference proceeding › Conference paper › Academic

4 Citations (Scopus)

Abstract

Predicting the sensory consequences of an agent's own actions is considered an important skill for intelligent behavior. In terms of vision, so-called visual forward models can be applied to learn such predictions. This is no trivial task given the high dimensionality of the sensory data and the complexity of the action space. In this work, we propose to learn the visual consequences of changes in pan and tilt of a robotic head using a visual forward model based on Gaussian processes and SURF correspondences. This is done without any assumptions on the kinematics of the system or requirements on calibration. The proposed method is compared to an earlier approach using accumulator-based correspondences and Radial Basis Function networks. We also show the feasibility of the proposed method for detection of independent motion using a moving camera system. By comparing the predicted and actually captured images, image motion due to the robot's own actions can be distinguished from motion caused by moving external objects. Results show the proposed method to be preferable to the earlier method in terms of both prediction error and the ability to detect independent motion.
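As a rough illustration of the approach summarised in the abstract, the sketch below trains a Gaussian-process regressor (scikit-learn) to map a feature position and a pan/tilt change to a predicted image displacement, then flags correspondences whose observed displacement deviates from that prediction as independent motion. The synthetic training data, kernel settings, and the 3-pixel threshold are assumptions made for demonstration, not details taken from the paper.

```python
# Minimal sketch of a GP visual forward model for pan/tilt-induced image motion.
# Not the authors' implementation; data, kernel and threshold are illustrative.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic stand-in for SURF correspondences collected while the head pans and
# tilts over a static scene (real training data would come from the robot).
rng = np.random.default_rng(0)
N = 500
pos = rng.uniform(0, 640, size=(N, 2))             # feature positions (pixels)
dpt = rng.uniform(-5, 5, size=(N, 2))              # (d_pan, d_tilt) commands (degrees)
X_train = np.hstack([pos, dpt])                    # model input: (x, y, d_pan, d_tilt)
Y_train = 20.0 * dpt + rng.normal(0, 0.5, (N, 2))  # assumed ~20 px image shift per degree

kernel = RBF(length_scale=[100.0, 100.0, 2.0, 2.0]) + WhiteKernel(noise_level=0.25)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, Y_train)

def independent_motion_mask(points, observed_disp, d_pan, d_tilt, thresh_px=3.0):
    """Flag correspondences whose actual displacement differs from the
    displacement predicted for the robot's own pan/tilt motion."""
    cmd = np.tile([d_pan, d_tilt], (len(points), 1))
    predicted_disp = gp.predict(np.hstack([points, cmd]))    # self-induced motion
    residual = np.linalg.norm(observed_disp - predicted_disp, axis=1)
    return residual > thresh_px                              # True = independent motion

# Example: one feature follows the predicted camera-induced shift, one does not.
pts = np.array([[320.0, 240.0], [100.0, 100.0]])
obs = np.array([[40.2, -20.1], [5.0, 3.0]])                  # displacements for d_pan=2, d_tilt=-1
print(independent_motion_mask(pts, obs, d_pan=2.0, d_tilt=-1.0))  # -> [False  True]
```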
Original language: English
Title of host publication: Proceedings of the International Symposium on Robot and Human Interactive Communication (RO-MAN 2014)
Pages: 1110-1115
Publication status: Published - 2014
Event: International Symposium on Robot and Human Interactive Communication (RO-MAN 2014), Edinburgh, UK
Duration: 25 Aug 2014 - 29 Aug 2014

Conference

Conference: International Symposium on Robot and Human Interactive Communication (RO-MAN 2014), Edinburgh, UK
Period: 25/08/14 - 29/08/14
