Through their ability to safely collect video and imagery from remote and potentially dangerous locations, UAVs have already transformed the battlespace. The effectiveness of this information can be greatly enhanced through synthetic vision. Given knowledge of the extrinsic and intrinsic parameters of the camera, synthetic vision superimposes spatially-registered computer graphics over the video feed from the UAV. This technique can be used to show many types of data such as landmarks, air corridors, and the locations of friendly and enemy forces. However, the effectiveness of a synthetic vision system strongly depends on the accuracy of the registration: if the graphics are poorly aligned with the real world, they can be confusing, annoying, and even misleading.
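The registration step described above amounts to projecting known 3-D world coordinates into the image using the camera's extrinsic and intrinsic parameters. The sketch below illustrates the standard pinhole projection that underlies such overlays; the specific matrices and the `project_point` helper are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def project_point(X_world, K, R, t):
    """Project a 3-D world point into pixel coordinates using a
    pinhole camera model: intrinsics K, extrinsics (R, t).
    Illustrative sketch, not the paper's actual pipeline."""
    X_cam = R @ X_world + t          # world frame -> camera frame
    u, v, w = K @ X_cam              # camera frame -> homogeneous pixels
    return np.array([u / w, v / w])  # perspective divide

# Example: camera at the origin looking down +Z, focal length 800 px,
# principal point at (640, 360).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)
landmark = np.array([10.0, 5.0, 100.0])  # metres, in front of the camera
print(project_point(landmark, K, R, t))  # -> [720. 400.]
```

Any error in the assumed `K`, `R`, or `t` shifts the projected pixel away from the object's true on-screen position, which is exactly the registration error the abstract is concerned with.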
In this paper, we describe an adaptive approach to synthetic vision that modifies the way in which information is displayed depending upon the registration error. We describe an integrated software architecture that has two main components. The first component automatically calculates registration error based on information about the uncertainty in the camera parameters. The second component uses this information to modify, aggregate, and label annotations to make their interpretation as clear as possible. We demonstrate the use of this approach on some sample datasets.
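The two components described above can be sketched as a small pipeline: estimate the on-screen registration error from the camera-parameter uncertainty, then select a display style for each annotation accordingly. The error model, thresholds, and style names below are hypothetical placeholders chosen for illustration; the paper's actual error calculation and adaptation rules are not given in this abstract.

```python
import math

def registration_error_px(angular_sigma_rad, focal_px):
    """Approximate 1-sigma pixel registration error induced by
    camera-orientation uncertainty (small-angle approximation).
    Assumed error model, for illustration only."""
    return focal_px * math.tan(angular_sigma_rad)

def choose_annotation_style(error_px, object_radius_px):
    """Pick a display style by comparing the registration error with
    the annotated object's on-screen size. Thresholds are arbitrary."""
    if error_px < 0.25 * object_radius_px:
        return "outline"          # error small: draw a tight outline
    elif error_px < object_radius_px:
        return "labelled_circle"  # moderate: enlarge and label the region
    else:
        return "aggregate"        # large: merge into a coarse area label

# 5 mrad of orientation uncertainty with an 800 px focal length
err = registration_error_px(0.005, 800.0)   # ~4 px
print(choose_annotation_style(err, 40.0))   # -> outline
print(choose_annotation_style(60.0, 40.0))  # -> aggregate
```

The key idea this sketch captures is that the annotation renderer degrades gracefully: tight, precise graphics when the error is small, and progressively coarser, labelled, or aggregated annotations as the error grows.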