Episode detection in videos captured using a head-mounted camera

Aneesh Chauhan, Sameer Singh*, Dave Grosvenor

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

2 Citations (Scopus)

Abstract

With the advent of wearable computing, personal imaging, photojournalism and personal video diaries, the need for automated archiving of the videos captured by such devices has become pressing. The principal device used to capture human-environment interaction in these applications is a wearable camera, usually a head-mounted camera. The videos obtained from such a camera are raw, unedited records of the wearer's (the camera user's) visual interaction with the surroundings. The focus of our research is to develop post-processing techniques that can automatically abstract such videos based on episode detection. An episode is defined as a part of the video captured when the user was interested in an external event and paid attention in order to record it. Our research is based on the assumption that head movements exhibit distinguishable patterns during an episode and that these patterns can be exploited to differentiate between episodes and non-episodes. Here we present a novel algorithm that exploits head and body behaviour to detect episodes. The algorithm's performance is measured by comparing the ground truth (user-declared episodes) with the detected episodes. The experiments show the high degree of success achieved by our proposed method on several hours of head-mounted video captured in varying locations.
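The evaluation described in the abstract, comparing user-declared episodes against detected ones, can be framed as a temporal interval-matching problem. The following minimal sketch illustrates one way to score detections against ground truth; the overlap threshold, the greedy matching rule, and all names here are our illustrative assumptions, not the paper's actual evaluation protocol.

```python
def overlap(a, b):
    # Length of temporal overlap between two (start, end) intervals (e.g. seconds).
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def evaluate_episodes(ground_truth, detected, min_overlap=0.5):
    """Greedily match detected episodes to user-declared ones.

    A detection counts as correct if it overlaps an unmatched ground-truth
    episode by at least `min_overlap` of that episode's duration.
    Returns (precision, recall). Illustrative sketch only.
    """
    matched_gt = set()
    true_pos = 0
    for d in detected:
        for i, g in enumerate(ground_truth):
            if i in matched_gt:
                continue
            if overlap(d, g) >= min_overlap * (g[1] - g[0]):
                matched_gt.add(i)
                true_pos += 1
                break
    precision = true_pos / len(detected) if detected else 0.0
    recall = len(matched_gt) / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Example: one correct detection, one false alarm, one missed episode.
gt = [(10.0, 25.0), (40.0, 60.0)]
det = [(12.0, 24.0), (70.0, 80.0)]
print(evaluate_episodes(gt, det))  # → (0.5, 0.5)
```

A stricter protocol might require symmetric overlap (relative to both the detection's and the episode's duration) or optimal rather than greedy matching; the structure of the comparison stays the same.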

Original language: English
Pages (from-to): 176-189
Number of pages: 14
Journal: Pattern Analysis and Applications
Volume: 7
Issue number: 2
DOIs
Publication status: Published - 19 Jun 2004
Externally published: Yes

Keywords

  • Dominant motion
  • Episode detection
  • Head-mounted video

