This paper discusses the integration of inertial measurements with measurements from a three-dimensional (3D)
imaging sensor for position and attitude determination of unmanned aerial vehicles (UAVs) and autonomous ground
vehicles (AGVs) in urban or indoor environments. To enable operation of UAVs and AGVs at any time in any
environment, a Precision Navigation, Attitude, and Time (PNAT) capability is required that is robust and not solely
dependent on the Global Positioning System (GPS). In urban and indoor environments, a GPS position capability may
be unavailable not only due to shadowing, significant signal attenuation, or multipath, but also due to intentional denial
or deception. Although deep integration of GPS and Inertial Measurement Unit (IMU) data may prove to be a viable
solution, an alternative method is discussed in this paper.
The alternative solution is based on 3D imaging sensor technologies such as Flash Ladar (Laser Radar). Flash Ladar
technology consists of a modulated laser emitter coupled with a focal plane array detector and the required optics. Like
a conventional camera, this sensor creates an "image" of the environment; however, rather than producing a 2D image
in which each pixel has an associated intensity value, the flash Ladar generates an image in which each pixel has an
associated range and intensity value. Integration of flash Ladar with the attitude from the IMU allows creation of a 3D
scene. Current low-cost Flash
Ladar technology is capable of greater than 100 x 100 pixel resolution with 5 mm depth resolution at a 30 Hz frame
rate.
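As an illustration of how a single frame is combined with the IMU attitude to build navigation-frame geometry, the following sketch converts a per-pixel range image into a point cloud. The projection model, frame conventions, field of view, and function names are illustrative assumptions and are not taken from the sensor specification or from the paper's implementation.

    import numpy as np

    def range_image_to_nav_points(range_img, fov_deg, C_nb, p_nav):
        """Convert a flash Ladar range image to a point cloud in the navigation frame.

        range_img : (rows, cols) array of per-pixel ranges [m]
        fov_deg   : full field of view of the focal plane array [deg] (assumed square)
        C_nb      : 3x3 body-to-navigation direction cosine matrix from the IMU attitude
        p_nav     : 3-vector platform position in the navigation frame [m]
        """
        rows, cols = range_img.shape
        # Per-pixel line-of-sight angles across the focal plane array (illustrative model).
        az = np.deg2rad(np.linspace(-fov_deg / 2, fov_deg / 2, cols))   # horizontal
        el = np.deg2rad(np.linspace(-fov_deg / 2, fov_deg / 2, rows))   # vertical
        el_grid, az_grid = np.meshgrid(el, az, indexing="ij")

        # Unit line-of-sight vectors in the sensor/body frame: x forward, y right, z down.
        los = np.stack([np.cos(el_grid) * np.cos(az_grid),
                        np.cos(el_grid) * np.sin(az_grid),
                        -np.sin(el_grid)], axis=-1)

        # Scale by the measured range to obtain points in the body frame.
        pts_body = range_img[..., None] * los                      # (rows, cols, 3)

        # Rotate into the navigation frame using the IMU attitude and translate by position.
        return pts_body.reshape(-1, 3) @ C_nb.T + p_nav

    # Illustrative usage: a 100 x 100 frame with all ranges at 5 m, level attitude,
    # and a 40-degree field of view (the field of view value is an assumption).
    pts = range_image_to_nav_points(np.full((100, 100), 5.0), fov_deg=40.0,
                                    C_nb=np.eye(3), p_nav=np.zeros(3))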
The proposed algorithm first converts the 3D imaging sensor measurements to a point cloud of the environment; next,
significant environmental features such as planar features (walls), line features, or point features (corners) are extracted and
associated from one 3D imaging sensor frame to the next. Finally, characteristics of these features such as the normal or
direction vectors are used to compute the platform position and attitude changes. These "delta" positions and attitudes
are then used to calibrate the IMU. Note that the IMU is required not only to form the point cloud of the environment
expressed in the navigation frame, but also to perform association of the features from one flash Ladar frame to the
next.
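To make the last step concrete, the following is a minimal sketch of one way the attitude and position changes could be recovered from associated planar features: a least-squares (Wahba/Procrustes-type) rotation fit to the plane normals, followed by a linear solve for the translation from the change in plane offsets. This is an illustrative formulation only; the paper's actual estimators, frame conventions, and association step are not reproduced here, and the feature normals and offsets are assumed to be already extracted and matched between frames.

    import numpy as np

    def delta_attitude_from_normals(normals_prev, normals_curr):
        """Estimate the rotation R such that n_curr ~ R @ n_prev for each associated
        planar-feature normal (least-squares fit via SVD). Inputs are (N, 3) arrays
        of unit normals from consecutive frames."""
        B = normals_curr.T @ normals_prev            # 3x3 attitude profile matrix
        U, _, Vt = np.linalg.svd(B)
        # Enforce a proper rotation (determinant +1).
        D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
        return U @ D @ Vt

    def delta_position_from_planes(normals, d_prev, d_curr):
        """Given associated planes n.x = d in consecutive frames (normals expressed in
        a common frame after the attitude change has been removed), solve the
        least-squares translation from n_i . dp = d_prev_i - d_curr_i, i.e. the
        measured plane offset decreases as the platform moves toward the plane."""
        dp, *_ = np.linalg.lstsq(normals, d_prev - d_curr, rcond=None)
        return dp

At least three non-parallel planes are needed for the translation to be observable in this sketch; fewer planes leave the solution underdetermined along the unconstrained directions.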
This paper will discuss the performance of the proposed 3D imaging sensor feature extraction, position change
estimator, and attitude change estimator using both simulator data and data collected from a moving platform in an
indoor environment. The former consists of data from a simulated IMU and flash Ladar installed on an aerial vehicle for
various trajectories through an urban environment. The latter consists of measurements from a CSEM Swissranger 3D
imaging sensor and a MicroStrain low-cost IMU. Data was collected on a manually operated aerial vehicle inside the
Ohio University School of Electrical Engineering and Computer Science building.