  • Research Article

Joint Audio-Visual Tracking Using Particle Filters

Abstract

It is often advantageous to track objects in a scene using multimodal information when such information is available. We use audio as a modality complementary to video: compared with vision, audio can provide faster localization over a wider field of view. We present a particle-filter-based tracking framework that performs multimodal sensor fusion to track people in a videoconferencing environment using multiple cameras and multiple microphone arrays. One advantage of the proposed tracker is its ability to seamlessly handle the temporary absence of some measurements (e.g., camera occlusion or silence). Another is the possibility of self-calibrating the joint system to compensate for imprecise knowledge of array or camera parameters, by treating those parameters as containing an unknown statistical component that is estimated within the particle-filter framework during tracking. We implement the algorithm in the context of a videoconferencing and meeting-recording system. The system also performs high-level semantic analysis of the scene by maintaining participant tracks, recognizing turn-taking events, and recording an annotated transcript of the meeting. Experimental results are presented. The system operates in real time and is shown to be robust and reliable.
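The core mechanism described in the abstract, a particle filter that fuses audio and video likelihoods and tolerates temporarily missing measurements, can be illustrated with a minimal sketch. The Python snippet below is not the paper's implementation: the 2-D state model, Gaussian likelihood widths, and all function and variable names are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of a joint audio-visual particle filter, assuming a 2-D
# position state, a Gaussian video likelihood around a detected image
# location, and a Gaussian audio likelihood around a source-location
# estimate. Names and parameters are illustrative, not from the paper.

rng = np.random.default_rng(0)

def predict(particles, motion_std=0.05):
    """Propagate particles with a simple random-walk motion model."""
    return particles + rng.normal(0.0, motion_std, particles.shape)

def gaussian_likelihood(particles, measurement, std):
    """Likelihood of each particle given a 2-D measurement."""
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    return np.exp(-0.5 * d2 / std**2)

def update(particles, weights, video_meas=None, audio_meas=None):
    """Fuse whichever modalities are available; a missing measurement is
    simply skipped, mirroring the tracker's tolerance of occlusion or silence."""
    if video_meas is not None:
        weights *= gaussian_likelihood(particles, video_meas, std=0.03)
    if audio_meas is not None:
        weights *= gaussian_likelihood(particles, audio_meas, std=0.10)
    weights = weights + 1e-12          # guard against all-zero weights
    return weights / weights.sum()

def resample(particles, weights):
    """Systematic resampling to concentrate particles on likely states."""
    positions = (rng.random() + np.arange(len(weights))) / len(weights)
    idx = np.searchsorted(np.cumsum(weights), positions)
    idx = np.minimum(idx, len(weights) - 1)
    return particles[idx], np.full(len(weights), 1.0 / len(weights))

# One filtering step: predict, weight by the measurements that arrived, resample.
particles = rng.uniform(0.0, 1.0, size=(500, 2))
weights = np.full(500, 1.0 / 500)
particles = predict(particles)
weights = update(particles, weights,
                 video_meas=np.array([0.60, 0.40]),
                 audio_meas=np.array([0.62, 0.38]))
particles, weights = resample(particles, weights)
estimate = np.average(particles, axis=0, weights=weights)
```

In the same spirit, the self-calibration described in the abstract could be sketched by appending the uncertain array or camera parameters to the particle state so that they are refined jointly with the target position during tracking.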

Author information

Corresponding author

Correspondence to Dmitry N. Zotkin.

About this article

Cite this article

Zotkin, D.N., Duraiswami, R. & Davis, L.S. Joint Audio-Visual Tracking Using Particle Filters. EURASIP J. Adv. Signal Process. 2002, 162620 (2002). https://doi.org/10.1155/S1110865702206058

  • DOI: https://doi.org/10.1155/S1110865702206058

Keywords