This paper presents a system to determine the photogrammetric parameters of a camera: the lens distortion, the focal length and the camera's six degree of freedom (6 DOF) pose are calculated. The system caters for cameras of different sensitivity spectra and fields of view without any mechanical modifications. The distortion characterization, a variant of Brown's classic plumb-line method, determines many radial and tangential distortion coefficients (typically 5 radial and 3 tangential) and finds the optimal principal point. These parameters are determined stably and demonstrably produce superior results to low-order models, despite popular and prevalent misconceptions to the contrary. The system produces coefficients to model both the distorted-to-undistorted pixel coordinate transformation (e.g. for target designation) and the inverse transformation (e.g. for image stitching and fusion), allowing deterministic rates far exceeding real time. The focal length is determined so as to minimise the error in absolute photogrammetric positional measurement for both multi-camera and monocular (e.g. helmet tracker) systems. The system determines the 6 DOF pose of the camera in a chosen coordinate system. It can also determine the 6 DOF offset of the camera relative to its mechanical mount, which allows a faulty camera to be replaced without recalibrating the entire system (such as an aircraft cockpit installation). Results from two simple applications of the calibration results are presented: stitching and fusion of the images from a dual-band visual/LWIR camera array, and a simple laboratory optical helmet tracker.
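Brown's model, referred to above, evaluates a radial polynomial in r² plus a tangential (de-centering) term. A minimal sketch in normalised image coordinates follows; the coefficient values used in testing are purely illustrative, not calibration results:

```python
def brown_distort(x, y, k, p1=0.0, p2=0.0):
    """Map a normalised point (x, y) through Brown's distortion model,
    with radial coefficients k[0..n-1] (powers of r^2) and tangential
    (de-centering) coefficients p1, p2."""
    r2 = x * x + y * y
    radial = 1.0
    r_pow = 1.0
    for ki in k:
        r_pow *= r2
        radial += ki * r_pow
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd
```

With 5 radial coefficients, `k` simply has 5 entries; the loop accumulates k1·r² + k2·r⁴ + … without any change to the code.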
This work extends earlier work on the real-time photogrammetric stitching of staring arrays of high resolution
videos on commercial off-the-shelf hardware. The blending is both further optimised for Graphics Processing
Unit (GPU) implementation and extended from one to two dimensions to allow for multiple layers or arbitrary
arrangements of cameras. The incorporation of stabilisation inputs allows the stitching algorithm to provide space
stabilised panoramas. The final contribution is to decrease the sensitivity to depth of the stitching procedure,
especially for wide aperture baselines. Finally, timing tests and some resultant stitched panoramas are presented
and discussed.
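The blending step of such a stitching pipeline can be illustrated schematically (this is not the optimised GPU kernel described above) by a simple feathered blend across the overlap region of two scanlines:

```python
def feather_blend(left, right, overlap):
    """Blend two scanlines sharing `overlap` pixels: weights ramp
    linearly across the overlap so the seam between the two source
    images is smoothed rather than abrupt."""
    out = left[:-overlap] if overlap else list(left)
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)  # 0 -> favour left, 1 -> favour right
        out.append((1 - w) * left[len(left) - overlap + i] + w * right[i])
    out.extend(right[overlap:])
    return out
```

On a GPU the same per-pixel weighting is computed independently for every output pixel, which is what makes the operation embarrassingly parallel.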
This work aims to quantify and improve the sensitivity of monocular passive ranging techniques in sequences that
are degraded by the quasi-periodic inter-frame motion and localised blurring associated with atmospheric
scintillation. Ranges from tens of metres up to 1 km in uncontrolled outdoor conditions were investigated. This
work has application in surveillance, border control, artillery, and the autonomous navigation of ground vehicles.
A field trial with long focal length, large aperture cameras was held. The targets were placed at known distances
and a scintillometer was used to measure the turbulence between the cameras and each target. A variety of
depth from defocus and depth from focus algorithms were applied to the video sequences to determine range
to the targets. Increasing levels of atmospheric turbulence were observed, ranging from none to mild,
moderate and severe. Algorithms for the mitigation of atmospheric turbulence were applied to the
recorded video sequences. The passive ranging techniques were then applied to the turbulence-mitigated video
sequences and the results compared to those of the raw sequences. Finally, findings regarding the best-performing
combination of ranging and mitigation techniques to use under different scintillation conditions are presented.
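As an illustrative sketch (not one of the specific algorithms evaluated in the work), a depth-from-focus approach scores each frame of a focus sweep with a sharpness measure and reports the focus setting that maximises it. The Laplacian-energy measure and the synthetic frames in the test are assumptions for illustration:

```python
def laplacian_energy(img):
    """Sharpness score: sum of squared 4-neighbour Laplacian responses
    over the interior pixels. `img` is a list of rows of grey values;
    sharper (better focused) images score higher."""
    h, w = len(img), len(img[0])
    energy = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] + img[y][x - 1]
                   + img[y][x + 1] - 4 * img[y][x])
            energy += lap * lap
    return energy

def depth_from_focus(stack, focus_positions):
    """Return the focus position whose frame maximises the sharpness score."""
    scores = [laplacian_energy(frame) for frame in stack]
    return focus_positions[scores.index(max(scores))]
```

Scintillation hurts exactly this kind of measure: localised blurring and motion inject spurious high-frequency energy, which is why turbulence mitigation before ranging is worth evaluating.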
Inverse lens distortion modelling allows one to find the pixel in a distorted image which corresponds to a known
point in object space, such as may be produced by RADAR. This paper extends recent work using neural networks
as a compromise between processing complexity, memory usage and accuracy. The already encouraging results
are further enhanced by considering different neuron activation functions, architectures, scaling methodologies
and training techniques.
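A minimal sketch of the kind of network involved: a single-hidden-layer perceptron mapping an undistorted pixel position to its distorted counterpart. The architecture, activation, scaling scheme and weights below are illustrative placeholders, not the configurations evaluated in the paper:

```python
import math

def mlp_inverse_distortion(xu, yu, W1, b1, W2, b2, scale=1.0):
    """Forward pass of a tiny MLP approximating inverse distortion.
    Inputs are scaled to roughly [-1, 1] (scale ~ half the image width),
    passed through a tanh hidden layer, then linearly combined and
    scaled back to pixel units."""
    inputs = (xu / scale, yu / scale)
    hidden = [math.tanh(sum(w * v for w, v in zip(row, inputs)) + b)
              for row, b in zip(W1, b1)]
    outputs = [sum(w * h for w, h in zip(row, hidden)) + b
               for row, b in zip(W2, b2)]
    return outputs[0] * scale, outputs[1] * scale
```

Evaluating such a network is a fixed, small number of multiply-adds per pixel, which is the memory/accuracy/speed compromise the abstract refers to: no look-up table, no per-pixel iteration.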
Tracking targets in a panoramic image is in many senses the inverse problem of tracking targets with a narrow
field of view camera on a pan-tilt pedestal. In a narrow field of view camera tracking a moving target, the object is
constant and the background is changing. A panoramic camera is able to model the entire scene, or background,
and the areas it cannot model well are the potential targets, which typically subtend far fewer pixels in the
panoramic view than in a narrow field of view. The outputs of an outward-staring array of calibrated
machine vision cameras are stitched into a single omnidirectional panorama and used to observe False Bay near
Simon's Town, South Africa. A ground truth data-set was created by geo-aligning the camera array and placing
a differential global positioning system receiver on a small target boat, thus allowing its position in the array's field
of view to be determined. Common tracking techniques including level-sets, Kalman filters and particle filters
were implemented to run on the central processing unit of the tracking computer. Image enhancement techniques
including multi-scale tone mapping, interpolated local histogram equalisation and several sharpening techniques
were implemented on the graphics processing unit. An objective measurement of each tracking algorithm's
robustness in the presence of sea-glint, low contrast visibility and sea clutter (such as white caps) is performed
on the raw recorded video data. These results are then compared to those obtained with the enhanced video
data.
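Of the tracking techniques listed, the Kalman filter is the most compact to sketch. Below is a generic constant-velocity filter on a scalar position track; `dt`, `q` and `r` are illustrative values, not those used in the work:

```python
def kalman_track(measurements, dt=1.0, q=1e-3, r=0.5):
    """Constant-velocity Kalman filter over scalar position measurements.
    State is [position, velocity]; returns the filtered positions.
    q is the process-noise intensity and r the measurement variance."""
    x = [measurements[0], 0.0]            # initial state
    P = [[1.0, 0.0], [0.0, 1.0]]          # initial covariance
    out = []
    for z in measurements:
        # Predict: x <- F x, P <- F P F' + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with measurement z of the position (H = [1, 0])
        s = P[0][0] + r                   # innovation variance
        k = (P[0][0] / s, P[1][0] / s)    # Kalman gain
        y = z - x[0]                      # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
        out.append(x[0])
    return out
```

In the maritime setting the measurement variance `r` effectively encodes how much sea clutter and glint corrupt each detection, which is where the enhancement techniques interact with the tracker.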
This work aims to evaluate the improvement in the performance of tracking small maritime targets due to
real-time enhancement of the video streams from high-zoom cameras on a pan-tilt pedestal. Due to atmospheric
conditions these images can frequently have poor contrast or exposure of the target, particularly if it is far away and thus small in
the camera's field of view. A 300mm focal length lens and machine vision camera were mounted on a pan-tilt
unit and used to observe False Bay near Simon's Town, South Africa. A ground truth data-set was created
by performing a least squares geo-alignment of the camera system and placing a differential global positioning
system receiver on a target boat, thus allowing the boat's position in the camera's field of view to be determined.
Common tracking techniques including level-sets, Kalman filters and particle filters were implemented to run on
the central processing unit of the tracking computer. Image enhancement techniques including multi-scale tone
mapping, interpolated local histogram equalisation and several sharpening techniques were implemented on the
graphics processing unit. This allowed the 1.3 megapixel, 20 frames per second video stream to be processed
in real-time. A quantified measurement of each tracking algorithm's robustness in the presence of sea-glint, low
contrast visibility and sea clutter (such as white caps) is performed on the raw recorded video data. These
results are then compared to those obtained using data enhanced with the algorithms described.
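Of the enhancement techniques listed, plain histogram equalisation is the simplest to sketch; the interpolated local variant tiles the image and blends per-tile mappings. A minimal global version for 8-bit grey images is shown below (this is a generic textbook formulation, not the paper's GPU implementation):

```python
def equalise_histogram(img, levels=256):
    """Global histogram equalisation of an 8-bit grey image given as a
    list of rows. Remaps grey levels so their cumulative distribution
    becomes approximately uniform, stretching low-contrast imagery."""
    hist = [0] * levels
    for row in img:
        for v in row:
            hist[v] += 1
    total = sum(hist)
    # Cumulative distribution, then scale to the full grey range.
    cdf, run = [], 0
    for count in hist:
        run += count
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)
    lut = [round((c - cdf_min) / max(total - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [[lut[v] for v in row] for row in img]
```

Because the whole operation reduces to one look-up per pixel after the table is built, it maps naturally onto a GPU, which is how the real-time rates above become achievable.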
Most current lens distortion models use only a few terms of Brown's model, which assumes that the radial distortion
is dependent only on the distance from the distortion centre, and that an additive tangential distortion can be
used to correct lens de-centering. This paper shows that the characterization of lens distortion can be improved
by over 79% compared to prevailing methods. This is achieved by using modern numerical optimization techniques
such as the Leapfrog algorithm, and sensitivity-normalized parameter scaling to reliably and repeatably
determine more terms for Brown's model. An additional novel feature introduced in this paper is to allow the
distortion to vary not only with polar distance but with the angle too. Two models for radially asymmetrical
distortion (i.e. distortion that is dependent on both polar angle and distance) are discussed, implemented and
contrasted to results obtained when no asymmetry is modelled. A sample of 32 cameras exhibiting extreme
barrel distortion (due to their 6.0mm focal lengths) is used to show that these new techniques can straighten
lines to within 7 hundredths of a pixel RMS over the entire image.
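One simple way to make radial distortion angle-dependent is to modulate the radial polynomial with low-order Fourier terms in the polar angle. The form below is an illustrative assumption; the paper's two asymmetric models may differ:

```python
import math

def asymmetric_radial_distort(x, y, k, a1=0.0, b1=0.0):
    """Radial distortion whose magnitude varies with polar angle: the
    Brown radial polynomial in r^2 is modulated by
    1 + a1*cos(theta) + b1*sin(theta). a1 = b1 = 0 recovers the
    radially symmetric model."""
    r2 = x * x + y * y
    theta = math.atan2(y, x)
    radial = 0.0
    r_pow = 1.0
    for ki in k:
        r_pow *= r2
        radial += ki * r_pow
    radial *= 1.0 + a1 * math.cos(theta) + b1 * math.sin(theta)
    return x * (1.0 + radial), y * (1.0 + radial)
```

Only two extra parameters (`a1`, `b1`) are added per model, so the optimiser's dimensionality grows modestly while the model gains the freedom to fit azimuthally varying distortion.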
KEYWORDS: Distortion, Cameras, Data modeling, Optimization (mathematics), Modeling, Image compression, 3D metrology, 3D modeling, Motion models, Systems modeling
Inverse distortion is used to create an undistorted image from a distorted image. For each pixel in the undistorted
image, it is necessary to determine which pixel in the distorted image should be used. However, the process of
characterizing a lens using a model such as that of Brown, yields a non-invertible mapping from the distorted
domain to the undistorted domain. There are three current approaches to solving this: an approximation of the
inverse distortion is derived from a low-order version of Brown's model; an initial guess for the distorted position is
iteratively refined until it yields the desired undistorted pixel position; or a look-up table is generated to store the
mapping. Each approach requires one to sacrifice either accuracy, memory usage or processing time. This paper
shows that it is possible to have real-time, low memory, accurate inverse distortion correction. A novel method
based on the re-use of left-over distortion characterization data is combined with modern numerical optimization
techniques to fit a high-order version of Brown's model to characterize the inverse distortion. Experimental
results show that, for thirty-two 5mm lenses exhibiting extreme barrel distortion, inverse distortion can be
improved 25-fold, to 0.013 pixels RMS over the image.
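Of the three existing approaches listed above, the iterative one is the easiest to sketch: starting from the undistorted position, repeatedly correct the guess until forward distortion maps it onto the desired point. The purely radial model and its coefficient below are illustrative assumptions:

```python
def radial_distort(x, y, k):
    """Forward Brown radial distortion (distorted -> undistorted here),
    with coefficients k applied to successive powers of r^2."""
    r2 = x * x + y * y
    factor, r_pow = 1.0, 1.0
    for ki in k:
        r_pow *= r2
        factor += ki * r_pow
    return x * factor, y * factor

def invert_distortion(xu, yu, k, iterations=20):
    """Find the distorted point whose forward distortion is (xu, yu),
    by fixed-point iteration: rescale the guess by the ratio between
    the target and the forward-mapped guess."""
    xd, yd = xu, yu
    for _ in range(iterations):
        fx, fy = radial_distort(xd, yd, k)
        xd *= xu / fx if fx else 1.0
        yd *= yu / fy if fy else 1.0
    return xd, yd
```

This is accurate but costs tens of forward evaluations per pixel; fitting a high-order Brown model to the inverse mapping, as the paper does, replaces the loop with a single polynomial evaluation.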