  • Research Article
  • Open access

Mobile Robot Visual Navigation Using Multiple Features

Abstract

We propose a method to segment the ground plane from a mobile robot's visual field of view and then measure the height of nonground plane features above the mobile robot's ground plane. Thus a mobile robot can determine what it can drive over, what it can drive under, and what it needs to manoeuvre around. In addition to obstacle avoidance, this data could also be used for localisation and map building. All of this is possible from an uncalibrated camera (raw pixel coordinates only), but is restricted to (near) pure translation motion of the camera. The main contributions are (i) a novel reciprocal-polar (RP) image rectification, (ii) ground plane segmentation by sinusoidal model fitting in RP-space, (iii) a novel projective construction for measuring affine height, and (iv) an algorithm that can make use of a variety of visual features and therefore operate in a wide variety of visual environments.
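As a rough illustration of the segmentation step, the following sketch maps matched pixel coordinates into reciprocal-polar space (angle θ and inverse radius 1/r about a chosen polar centre) and fits a sinusoidal model to the inter-image radial shift, labelling matches that agree with the fit as ground-plane points. This is not the authors' implementation: the model form a·cos θ + b·sin θ + c, the small RANSAC loop, the tolerances, and all function names are assumptions made for the sake of a self-contained example.

```python
import numpy as np

def to_reciprocal_polar(pts, center):
    """Map (N, 2) pixel coordinates to reciprocal-polar space: (theta, 1/r).

    Points must not coincide with the polar centre (r > 0).
    """
    d = pts - center
    r = np.hypot(d[:, 0], d[:, 1])
    return np.arctan2(d[:, 1], d[:, 0]), 1.0 / r

def segment_ground(pts1, pts2, center, tol=1e-4, iters=200, seed=0):
    """Label matched features as ground plane (True) or non-ground (False).

    Fits an assumed sinusoidal model of the inter-image shift in
    reciprocal-polar space, a*cos(theta) + b*sin(theta) + c, using a
    minimal RANSAC loop so non-ground matches do not bias the fit.
    Returns (labels, coefficients).
    """
    theta, ir1 = to_reciprocal_polar(pts1, center)
    _, ir2 = to_reciprocal_polar(pts2, center)
    shift = ir2 - ir1                       # radial shift in 1/r
    A = np.column_stack([np.cos(theta), np.sin(theta), np.ones_like(theta)])
    rng = np.random.default_rng(seed)
    best = np.zeros(len(theta), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(theta), 3, replace=False)
        try:
            coef = np.linalg.solve(A[idx], shift[idx])  # exact 3-point fit
        except np.linalg.LinAlgError:
            continue                        # degenerate sample, try again
        inliers = np.abs(A @ coef - shift) < tol
        if inliers.sum() > best.sum():
            best = inliers
    # refine on the consensus set by least squares
    coef, *_ = np.linalg.lstsq(A[best], shift[best], rcond=None)
    return np.abs(A @ coef - shift) < tol, coef
```

Under (near) pure translation, matches off the ground plane violate the sinusoidal shift model and fall outside the consensus set, which is what makes the residual threshold usable as a ground/non-ground classifier in this sketch.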

Author information

Corresponding author

Correspondence to Nick Pears.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

About this article

Cite this article

Pears, N., Liang, B. & Chen, Z. Mobile Robot Visual Navigation Using Multiple Features. EURASIP J. Adv. Signal Process. 2005, 191358 (2005). https://doi.org/10.1155/ASP.2005.2250

