Nonlinear Distance Estimation from a Single Camera

Vision is arguably the most widely used sense for position and velocity estimation in animals, and visual sensors are increasingly used in robotic systems as well. Many animals use stereopsis and object recognition to estimate absolute distance. For a tiny insect such as a fruit fly or honeybee, however, these methods fall short. Instead, an insect must rely on optic flow, which provides a measure of the ratio of velocity to distance, but not of either quantity independently. Nevertheless, flies and other insects are adept at landing on a variety of substrates, a behavior that inherently requires some form of distance estimation to trigger distance-appropriate motor actions such as deceleration or leg extension.
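To make the ambiguity concrete, here is a toy numerical sketch assuming the standard looming model, in which the flow rate observed during a direct approach equals the signed ratio of velocity to distance:

```python
# Toy illustration: in the standard looming model, the optic-flow rate
# observed during approach is the ratio of closing speed to distance.
def optic_flow_rate(velocity, distance):
    """Optic-flow rate (1/s) for closing speed velocity (m/s) at distance (m)."""
    return -velocity / distance

# Ten times faster at ten times the distance yields an identical flow rate,
# so flow alone cannot separate velocity from distance.
print(optic_flow_rate(0.1, 1.0))   # -0.1
print(optic_flow_rate(1.0, 10.0))  # -0.1
```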

Previous studies have shown that landing behaviors are indeed under visual control, raising the question: how does an insect estimate distance using optic flow alone? In this paper we propose a solution using a nonlinear control-theoretic approach. Our algorithm takes advantage of the visually controlled landing trajectories that have been observed in flies and honeybees. Finally, we implement our algorithm, which we term dynamic peering, using a camera mounted on a linear stage to demonstrate its real-world feasibility. Read more in Bioinspiration & Biomimetics.
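As rough intuition for how active control can break the velocity/distance ambiguity, the sketch below is a minimal gradient-type observer. It is illustrative only and simpler than the estimator derived in the paper: because the commanded acceleration is known, the distance and velocity estimates can be propagated through the dynamics and then corrected until the predicted flow ratio matches the measurement.

```python
# Illustrative gradient-type observer (NOT the paper's estimator): the plant
# is d' = v, v' = u with the commanded acceleration u known, and the only
# measurement is the optic-flow rate w = v / d. Knowledge of the control
# effort is what makes the individual states, rather than just their ratio,
# recoverable along a sufficiently exciting trajectory.
def observer_step(d_hat, v_hat, u, w_meas, dt, k=5.0):
    """Advance the distance/velocity estimates (d_hat, v_hat) by one step dt."""
    w_hat = v_hat / d_hat              # predicted optic-flow rate
    e = w_meas - w_hat                 # innovation (measured minus predicted)
    # Propagate the known dynamics, then step each estimate down the
    # gradient of the squared innovation with respect to that estimate.
    d_hat += (v_hat - k * e * v_hat / d_hat**2) * dt
    v_hat += (u + k * e / d_hat) * dt
    return d_hat, v_hat
```

The gain k here is arbitrary; the correction terms only do useful work when the trajectory excites the dynamics, which a constant-optic-flow landing approach of the kind described above can provide.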

Movie (left): Real-time performance of the dynamic peering estimation algorithm. The video shows the data from figures 4 and 5 as an animation. Bottom left: Camera image sequence showing the visual target and region of interest (red box). Bottom right: Optic flow as a function of camera pixel position, calculated from the current and previous frames with OpenCV's Lucas-Kanade algorithm. For the purposes of control, we calculated a linear fit of the data (red line) over the region of interest. Top row: Dynamic peering performance (red) compared with the ground-truth values (blue) for the position, velocity, and optic flow estimates, as well as the applied control effort. After an initial period in which the robot accelerates to the steady-state optic flow rate of -0.1 1/s, the estimates lock onto the actual values.
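For reference, the following is a hedged sketch of the per-frame measurement step the caption describes: track features between consecutive frames with OpenCV's pyramidal Lucas-Kanade tracker, then fit a line to flow versus pixel position over the region of interest. The feature-detection settings and the flow_rate and roi names are illustrative assumptions, not the configuration used in the paper.

```python
import cv2
import numpy as np

def flow_rate(prev_gray, curr_gray, roi, dt):
    """Estimate the optic-flow rate (1/s) over roi = (x0, x1) from two grayscale frames."""
    # Detect corner features to track in the previous frame.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=5)
    # Pyramidal Lucas-Kanade tracking of those features into the current frame.
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    ok = status.ravel() == 1
    x = pts[ok, 0, 0]                      # pixel x-positions of tracked features
    dx = new_pts[ok, 0, 0] - x             # per-frame horizontal displacements
    mask = (x >= roi[0]) & (x <= roi[1])   # keep only features inside the ROI
    # Linear fit of displacement vs. position: the slope is dimensionless per
    # frame, so dividing by the frame interval gives a flow rate in 1/s.
    slope, _ = np.polyfit(x[mask], dx[mask], 1)
    return slope / dt
```

Calling flow_rate on successive frames at the camera's frame interval would yield the w_meas input consumed by the observer sketch above.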