Predicting the sensory consequences of an agent's own actions is considered an important skill for intelligent behavior. In terms of vision, so-called visual forward models can be applied to learn such predictions. This is no trivial task given the high dimensionality of sensory data and complex action spaces. In this work, we propose to learn the visual consequences of changes in pan and tilt of a robotic head using a visual forward model based on Gaussian processes and SURF correspondences. This is done without any assumptions on the kinematics of the system or requirements on calibration. The proposed method is compared to an earlier work using accumulator-based correspondences and radial basis function networks. We also show the feasibility of the proposed method for detection of independent motion using a moving camera system. By comparing the predicted and actual captured images, image motion due to the robot's own actions and motion caused by moving external objects can be distinguished. Results show the proposed method to be preferable to the earlier method in terms of both prediction errors and ability to detect independent motion.
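The core idea can be illustrated with a minimal sketch: a Gaussian-process regressor maps a head action (pan/tilt change) to the expected image displacement, and independent motion is flagged when the observed displacement deviates too far from the prediction. This is a toy illustration, not the paper's implementation — the class name `GPForwardModel`, the synthetic linear action-to-displacement data (a stand-in for real SURF correspondences), and the residual threshold are all assumptions for demonstration.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    # Squared-exponential (RBF) kernel between the row vectors of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

class GPForwardModel:
    """Toy GP regression: action (d_pan, d_tilt) -> image displacement (dx, dy).

    Hypothetical stand-in for the paper's forward model; real inputs would be
    displacements of matched SURF keypoints, not a known linear map.
    """
    def __init__(self, noise=1e-4):
        self.noise = noise

    def fit(self, actions, displacements):
        self.X = np.asarray(actions, float)
        Y = np.asarray(displacements, float)
        K = rbf_kernel(self.X, self.X) + self.noise * np.eye(len(self.X))
        # Precompute K^{-1} Y for the GP posterior mean.
        self.K_inv_Y = np.linalg.solve(K, Y)

    def predict(self, actions):
        Ks = rbf_kernel(np.asarray(actions, float), self.X)
        return Ks @ self.K_inv_Y  # posterior mean prediction

def independent_motion(pred, obs, thresh=3.0):
    # Flag independent motion when the observed displacement deviates
    # from the self-motion prediction by more than `thresh` pixels.
    return np.linalg.norm(np.asarray(pred) - np.asarray(obs)) > thresh

# Synthetic training data: displacement roughly linear in pan/tilt.
rng = np.random.default_rng(0)
actions = rng.uniform(-1, 1, (50, 2))
disp = actions @ np.array([[30.0, 0.0], [0.0, 20.0]]) \
       + 0.1 * rng.standard_normal((50, 2))

gp = GPForwardModel()
gp.fit(actions, disp)

pred = gp.predict([[0.5, -0.2]])[0]       # expected self-induced motion
observed_static = np.array([15.0, -4.0])  # consistent with ego-motion only
observed_moving = np.array([25.0, 8.0])   # deviates: an external object moved
```

In the actual system the comparison is made per SURF correspondence across the image, so independently moving regions can be localized rather than detected globally.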
|Title of host publication||Proceedings of the International Symposium on Robot and Human Interactive Communication (RO-MAN 2014)|
|Publication status||Published - 2014|
|Event||International Symposium on Robot and Human Interactive Communication (RO-MAN 2014), Edinburgh, UK - |
Duration: 25 Aug 2014 → 29 Aug 2014
Ghadirzadeh, A., Kootstra, G. W., Maki, A., & Björkman, M. (2014). Learning Visual Forward Models to Compensate for Self-Induced Image Motion. In Proceedings of the International Symposium on Robot and Human Interactive Communication (RO-MAN 2014) (pp. 1110-1115).