High-power fiber lasers can be incoherently combined to form the basis for high-energy laser applications. Incoherent combining of fiber lasers has a number of advantages over other beam-combining methods. However, the far-field beam quality of an incoherently combined laser array can still be significantly degraded by atmospheric optical turbulence. In this article, a general scaling law for the propagation of an incoherently combined laser array through the atmosphere is developed using theoretical analysis and the common stochastic wave-optics technique, focusing mainly on the effects of diffraction and atmospheric optical turbulence. The scaling law developed in the present work differs from standard scaling laws in its definition of irradiance. We show that the far-field irradiance and beam dispersion of any incoherently combined laser array, regardless of the near-field beamlet geometry, can be obtained in terms of four basic parameters: laser power, field effective area, pupil field factor, and the Fried parameter. The results show that the formula is simple yet accurately predicts the far-field peak irradiance and beam dispersion under varying levels of atmospheric turbulence, regardless of the near-field beamlet geometry.
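As a numerical illustration of how such a four-parameter scaling law can be evaluated, the sketch below computes a far-field peak irradiance from the laser power, field effective area, pupil field factor, and Fried parameter. The functional form used here (a diffraction-limited term degraded by a (D_eff/r0)^(5/3) turbulence factor) is an assumption for illustration only, not the formula derived in the article.

```python
import numpy as np

def far_field_peak_irradiance(power_w, area_eff_m2, pupil_field_factor,
                              r0_m, wavelength_m, range_m):
    """Hypothetical far-field peak-irradiance scaling for an incoherently
    combined array. The functional form is an illustrative assumption
    (diffraction-limited peak irradiance reduced by a Fried-parameter
    turbulence term); it is NOT the formula derived in the article."""
    # Order-of-magnitude diffraction-limited peak irradiance for an
    # aperture of effective area A at range L: I_0 ~ P * A / (lambda * L)^2
    i_diff = power_w * area_eff_m2 * pupil_field_factor / (wavelength_m * range_m) ** 2
    # Turbulence degradation: compare an effective aperture D_eff = sqrt(A)
    # with the Fried parameter r0 via the common (D/r0)^(5/3) scaling.
    d_eff = np.sqrt(area_eff_m2)
    return i_diff / (1.0 + (d_eff / r0_m) ** (5.0 / 3.0))

# Example: 10 kW array, 0.01 m^2 effective area, 1.07 um, 5 km path, r0 = 5 cm.
print(far_field_peak_irradiance(1e4, 1e-2, 0.8, 0.05, 1.07e-6, 5e3))
```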
Stereo vision is generally used to obtain 3D information in traditional three-dimensional measurement: at least two cameras are calibrated in advance, and intersection (triangulation) is then performed to obtain the three-dimensional coordinates. Obtaining 3D information therefore requires at least two cameras (or two views), because a single camera can only provide 2D information. In addition, only the 3D position at the instant an image is captured can be recovered. When measuring the 3D miss distance of a weapon in high-velocity motion, such as a missile, it is difficult to capture the image at the instant the weapon touches the target because of the limited camera frame rate (fps). Hence, only the position just before impact can be obtained, which introduces error into the miss-distance estimate. In this paper, a fast miss-distance estimation method is proposed that uses the weapon's shadow and a single view (i.e., a single camera). The method exploits the fact that the intersection of the axes of the weapon and its shadow in the image is the actual image projection of the impact point at the moment the weapon touches the target. The proposed method does not need to capture the image at the instant of impact and hence does not require a high frame rate, which broadens the choice of cameras. Experimental results indicate that, compared with traditional stereo vision, the proposed method performs better in accuracy, numerical stability, and computational speed for miss-distance estimation.
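A minimal sketch of the geometric core of this method, assuming each axis has already been fitted and is available as two image points; the function names and the homogeneous-coordinate formulation are our own illustration, not the paper's implementation.

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (cross product)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def axis_intersection(weapon_axis_pts, shadow_axis_pts):
    """Intersect the weapon axis with its shadow axis in the image.
    Each argument is a pair of (x, y) points sampled along the axis.
    Per the method above, the returned image point is the projection
    of the impact point at the moment the weapon touches the target."""
    l1 = line_through(*weapon_axis_pts)
    l2 = line_through(*shadow_axis_pts)
    x = np.cross(l1, l2)                 # homogeneous intersection point
    if abs(x[2]) < 1e-12:                # parallel axes: no finite point
        raise ValueError("axes are (nearly) parallel")
    return x[:2] / x[2]

# Example with synthetic axes crossing at (120, 80):
print(axis_intersection(((100, 40), (140, 120)), ((60, 80), (200, 80))))
```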
KEYWORDS: 3D acquisition, Calibration, Cameras, 3D metrology, 3D image processing, Photogrammetry, Imaging systems, 3D vision, Covariance matrices, Stereoscopy
Photogrammetry with stereo vision is widely used in computer vision and SLAM (simultaneous localization and mapping); its key steps are calibration and intersection measurement. Calibration obtains the intrinsic and extrinsic parameters, including the principal point, focal length, and pose. Intersection measurement obtains the 3D information after calibration, including position, velocity, and rotation. In some cases, such as visual monitoring cameras (VMCs), photogrammetry uses a large field of view and is characterized by a long camera-to-target distance and a wide measuring range, which increases the difficulty of calibration and makes it impossible to place 3D control points arbitrarily. Moreover, the distance from the target area to the 3D control-point area strongly affects the accuracy of intersection measurement. In this paper, we propose a new method for placing 3D control points that covers both planar and non-planar scenes and can distinguish between the two, so that the planar and non-planar calibration methods can each be applied in the appropriate case. In addition, we analyze the layout of 3D control points to obtain the relation between the measuring accuracy and the distance from the target area to the control-point area. Experimental results on synthetic data and real images show that the longer this distance, the greater the measuring error, and that to improve accuracy the 3D control points should be strictly planar or strictly non-planar, not quasi-planar.
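The abstract does not state the criterion used to distinguish planar from non-planar scenes; the sketch below uses a common PCA/SVD planarity test as an assumed stand-in.

```python
import numpy as np

def is_planar(points_3d, rel_tol=1e-3):
    """Classify a 3D control-point set as planar or non-planar.
    A common PCA test (assumed here; the abstract does not give the
    paper's actual criterion): the set is planar when the smallest
    singular value of the centered points is negligible relative to
    the largest, i.e. the points have no significant extent along
    the third principal axis."""
    pts = np.asarray(points_3d, dtype=float)
    centered = pts - pts.mean(axis=0)
    s = np.linalg.svd(centered, compute_uv=False)  # s[0] >= s[1] >= s[2]
    return s[2] / s[0] < rel_tol

# Planar set (z = 0) versus a genuinely non-planar set:
plane = np.column_stack([np.random.rand(20), np.random.rand(20), np.zeros(20)])
cube = np.random.rand(20, 3)
print(is_planar(plane), is_planar(cube))   # True False
```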
To speed up the extrinsic calibration of cameras, ensure that the two control-point sets have the same size, and increase the matching rate between camera images after projective transformation, a fast control-point extraction method is established and a point-set matching method based on Delaunay triangulation is proposed. Experimental results show that the extraction method extracts all control points correctly, with neither spurious nor missed detections. In addition, we generate simulation images under projective transformation according to the camera imaging model; the simulations show that the point-set matching method adapts to the projective transformation and improves the matching rate within a limited angular range. Finally, experiments show that the relative error of extrinsic calibration based on our method is below 0.3% compared with manual extrinsic calibration.
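The abstract does not detail the matching rule, so the sketch below is an assumed simplification: it builds Delaunay triangulations of both point sets with SciPy and greedily matches vertices by a simple structural descriptor (vertex degree), which, unlike raw coordinates, is relatively stable under moderate projective warps.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_degrees(points_2d):
    """Degree of each vertex in the Delaunay triangulation: a simple
    structural descriptor (an illustrative stand-in; the paper's
    actual descriptor is not specified in the abstract)."""
    tri = Delaunay(points_2d)
    deg = np.zeros(len(points_2d), dtype=int)
    seen = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((int(simplex[i]), int(simplex[(i + 1) % 3])))
            if (a, b) not in seen:       # count each triangulation edge once
                seen.add((a, b))
                deg[a] += 1
                deg[b] += 1
    return deg

def match_by_structure(src_pts, dst_pts):
    """Greedy matching of two equally sized point sets by comparing
    Delaunay vertex degrees. Descriptor ties are possible; resolving
    them is left to the full method, which this sketch simplifies."""
    ds, dd = delaunay_degrees(src_pts), delaunay_degrees(dst_pts)
    return [int(np.argmin(np.abs(dd - d))) for d in ds]
```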
Changes in the length of certain equipment or structures have a marked impact on their performance, so measuring length is of great significance. To address the small measuring range of the traditional binocular-vision method, a method for measuring the length of a long line segment based on the epipolar constraint is established. Compared with the traditional binocular-vision method, this method requires only a small common field of view, offers a large measuring range, and can compute the coordinates of virtual imaging points. In addition, from the three-dimensional coordinates of the two camera optical centers and the two endpoints of the line, we construct a matrix and derive the relationship between the virtual-imaging position error and the rank of that matrix. Simulation results show that the relative measurement error is less than 0.2%.
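As background for the epipolar-constraint measurement, the sketch below shows the standard linear (DLT) two-view triangulation used to recover a 3D endpoint from its two image projections; the camera matrices and points are synthetic placeholders, and the paper's virtual-imaging extension for endpoints outside the common field of view is not reproduced.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) two-view triangulation: recover the 3D point whose
    projections through 3x4 camera matrices P1, P2 are the image
    points x1, x2. Each image point contributes two rows of the
    homogeneous system A X = 0, solved by SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Synthetic example: two cameras with a unit baseline viewing (1, 2, 10).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # shifted camera
X = np.array([1.0, 2.0, 10.0, 1.0])
x1 = (P1 @ X)[:2] / (P1 @ X)[2]
x2 = (P2 @ X)[:2] / (P2 @ X)[2]
print(triangulate_dlt(P1, P2, x1, x2))   # ~ [1, 2, 10]
```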
The pointing precision of the attitude stabilization platform is important for an airship during flight; it is one of the key factors ensuring that the antenna isolates disturbances from the airship and tracks the target with a stable attitude. Therefore, in this paper, a method for measuring the pointing precision of the attitude stabilization platform in real time, based on image processing, is established to test the platform's performance. In addition, a real-time target extraction method, combining the Harris corner algorithm with local Otsu threshold segmentation, is established for real-time automatic measurement. This method copes with the problems of long shooting distance, the small imaged area of the target, and poor lighting. Experimental results show that our method measures the pointing precision of the attitude stabilization platform in real time and that the measured values agree well with the design values.
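A sketch of the two-stage extraction named above, using OpenCV: an Otsu threshold computed locally within a region of interest, followed by Harris corner detection on the segmented patch. The ROI handling and all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def extract_target_corners(gray, roi, harris_k=0.04, quality=0.01):
    """Two-stage target extraction as named in the abstract: local Otsu
    threshold segmentation inside a region of interest, then Harris
    corner detection on the segmented patch. ROI choice and parameter
    values are illustrative assumptions, not the paper's settings."""
    x, y, w, h = roi
    patch = gray[y:y + h, x:x + w]
    # Local Otsu: the threshold is computed from the ROI statistics only,
    # which tolerates uneven lighting across the full frame.
    _, mask = cv2.threshold(patch, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Harris corner response on the segmented patch.
    response = cv2.cornerHarris(np.float32(mask), blockSize=2,
                                ksize=3, k=harris_k)
    ys, xs = np.where(response > quality * response.max())
    # Return corner coordinates in full-image pixel coordinates.
    return np.column_stack([xs + x, ys + y])
```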
In range testing, to measure the target attitude along the terminal trajectory, we developed a fast Hough transform that extracts the axis of an axially symmetric target in real time within a high-speed image processing system. Theoretical analysis shows that this scheme is more accurate than the standard Hough transform, and that the error decreases as the absolute value of the line's slope approaches one. Experiments show that this scheme detects lines faster than the standard Hough transform, and that its speed advantage grows with the noise level. In addition, the experimental error distribution agrees with the theoretical error distribution.
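The fast variant itself is not specified in the abstract; for reference, the sketch below implements the standard (rho, theta) Hough line transform that serves as the baseline being accelerated.

```python
import numpy as np

def hough_lines(edge_mask, n_theta=180, n_rho=400):
    """Standard (rho, theta) Hough transform over a binary edge mask:
    the baseline that the paper's fast scheme accelerates (the fast
    scheme itself is not given in the abstract). Returns the vote
    accumulator and the axes needed to locate its peak."""
    h, w = edge_mask.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.hypot(h, w)
    rhos = np.linspace(-rho_max, rho_max, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edge_mask)
    for theta_idx, t in enumerate(thetas):
        # rho = x cos(theta) + y sin(theta) for every edge pixel
        r = xs * np.cos(t) + ys * np.sin(t)
        idx = np.round((r + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        np.add.at(acc, (idx, theta_idx), 1)
    return acc, rhos, thetas

# The accumulator peak gives the dominant line (e.g. the target axis):
# i, j = np.unravel_index(acc.argmax(), acc.shape); rho, theta = rhos[i], thetas[j]
```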
Because the shape of a stratospheric airship is uncertain, and this uncertainty poses a safety problem, surface reconstruction and surface deformation monitoring of the airship were conducted based on laser scanning technology, and a √3-subdivision scheme based on Shepard interpolation was developed. Our subdivision scheme was then compared with the original √3-subdivision scheme; the results show that it reduces surface shrinkage and the number of narrow triangles, while preserving sharp features. Thus, surface reconstruction and surface deformation monitoring of the airship can be performed precisely with our subdivision scheme.
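A minimal sketch of Shepard (inverse-distance-weighted) interpolation, the interpolant named above; in a √3-subdivision step it would place the vertex inserted at each face instead of the plain centroid used by the original scheme. The neighborhood choice and power parameter are illustrative assumptions.

```python
import numpy as np

def shepard_interpolate(query_xy, sample_xy, sample_z, power=2.0):
    """Shepard (inverse-distance-weighted) interpolation: the value at
    the query location is a weighted average of sample values, with
    weights 1 / distance**power. In the subdivision scheme above an
    interpolant of this kind would place the vertex inserted at each
    face (neighborhood size and power are illustrative assumptions)."""
    d = np.linalg.norm(np.asarray(sample_xy) - np.asarray(query_xy), axis=1)
    if np.any(d < 1e-12):                 # query coincides with a sample
        return sample_z[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * np.asarray(sample_z)) / np.sum(w)

# In sqrt(3)-subdivision the original scheme inserts each new vertex at
# the face centroid; interpolating its height instead reduces shrinkage.
tri_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
tri_z = np.array([0.0, 1.0, 2.0])
centroid = tri_xy.mean(axis=0)
print(shepard_interpolate(centroid, tri_xy, tri_z))
```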