The U.S. Army Research, Development, and Engineering Command (RDECOM) Communications-Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (NVESD) has performed early work to develop a Distributed Aperture System (DAS). The DAS aims to improve the situational awareness of armored fighting vehicle crews under closed-hatch conditions. The concept is based on a plurality of sensors configured to create a day-and-night dome of surveillance, coupled with head-up displays slaved to the operator's head to give a "glass turret" feel. State-of-the-art image processing produces multiple seamless hemispherical views simultaneously available to the vehicle commander, crew members, and dismounting infantry. On-the-move automatic cueing of multiple moving/pop-up low-silhouette threats is also performed, with the possibility of saving, revisiting, and sharing past events. As a first step in this development program, a contract was awarded to United Defense to further develop the Eagle Vision™ system. The second-generation prototype features two camera heads, each comprising four high-resolution (2048x1536) color sensors and each covering a field of view of 270°h x 150°v. High-bandwidth digital links interface the camera heads with a field-programmable gate array (FPGA) based custom processor developed by Sarnoff Corporation. The processor computes the hemispherical stitch and warp functions required for real-time, low-latency, immersive viewing (360°h x 120°v, 30° down) and generates up to six simultaneous extended graphics array (XGA) video outputs for independent display on either a helmet-mounted display (with associated head-tracking device) or a flat-panel display (with joystick). The prototype is currently in its last stage of development and will be integrated on a vehicle for user evaluation and testing.
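The head-slaved viewing described above amounts to selecting a viewport from a stitched hemispherical panorama according to the operator's head orientation. The following is a minimal sketch of that mapping, assuming an equirectangular panorama covering the prototype's stated 360°h x 120°v span with 30° look-down; the function name, panorama dimensions, and linear mapping are illustrative assumptions, not the actual FPGA warp, which also corrects lens distortion and blends overlapping sensors.

```python
def viewport_center(pan_w: int, pan_h: int, az_deg: float, el_deg: float,
                    h_span: float = 360.0, v_span: float = 120.0,
                    v_top: float = 90.0) -> tuple[float, float]:
    """Map a head-tracker azimuth/elevation (degrees) to pixel coordinates
    in an equirectangular panorama covering h_span x v_span degrees.
    Elevation runs from v_top down to v_top - v_span (here +90 to -30,
    matching the 30-degree look-down coverage).  Illustrative linear
    mapping only."""
    x = (az_deg % 360.0) / h_span * pan_w
    y = (v_top - el_deg) / v_span * pan_h
    return x, y

# Example: looking directly aft (az 180) and 30 deg up, in a hypothetical
# 4096 x 1365 panorama:
x, y = viewport_center(4096, 1365, 180.0, 30.0)
```

An XGA output window would then be cropped (and re-warped for the display optics) around this center point, one window per simultaneous viewer.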
Near-term improvements include the replacement of the color camera heads with a pixel-level fused combination of uncooled long wave infrared (LWIR) and low light level intensified imagery. It is believed that the DAS will significantly increase situational awareness by providing the users with a day and night, wide area coverage, immersive visualization capability.
HARLID™ is a digital approach to achieving angular sensitivity in a laser warning system. In this version of the HARLID™ module, a number of improvements are described that correct certain problems and limitations of earlier devices. The detector is a two-detector assembly, consisting of matched silicon and InGaAs arrays assembled in a sandwich configuration, to achieve spectral sensitivity between 500 and 1700 nm. Systematic angular readout errors observed in previous work have been avoided with the use of a new light guide in which the optical channels are air instead of glass. Improved response time at the short-wavelength end of the spectral range has been achieved with the use of thinner active regions in the elements of the silicon array, and a redesigned digital aperture mask significantly improves accuracy and reduces optical vignetting effects. The design and performance characteristics of a 6-bit HARLID™ are presented.
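The digital readout principle can be illustrated with a small sketch. Assuming the 6-bit aperture mask encodes the angle of arrival as a Gray-code word across 64 equal angular bins spanning a 96-degree field of view (assumed values for illustration; the actual mask geometry is not specified here), decoding a detector word into an angle could look like:

```python
def gray_to_binary(g: int) -> int:
    """Convert a Gray-code word to its plain binary index."""
    b = g
    while g:
        g >>= 1
        b ^= g
    return b

def harlid_angle(code: int, bits: int = 6, fov_deg: float = 96.0) -> float:
    """Map a Gray-coded detector word to an angle of arrival in degrees.

    Assumes the mask divides fov_deg (centered on the detector normal)
    into 2**bits equal bins -- 1.5 degrees per bin for 6 bits over
    96 degrees.  Illustrative geometry only."""
    n_bins = 1 << bits
    idx = gray_to_binary(code)
    bin_width = fov_deg / n_bins
    return -fov_deg / 2.0 + (idx + 0.5) * bin_width
```

Gray coding is a natural choice for such a mask because adjacent angular bins differ in exactly one bit, so a beam straddling two bins produces at most a one-bin readout error rather than a gross one.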
KEYWORDS: Sensors, Line width roughness, Curium, Head, Defense and security, Laser countermeasures, Signal detection, Prototyping, Human-machine interfaces, Process control
The Defence Research Establishment Valcartier initiated R&D work in 1998 to investigate and demonstrate key technologies required for a future Defensive Aid Suite to protect Light Armoured Vehicles. A basic Defensive Aid Suite demonstrator (Phase I) was built and integrated into the LAV vetronics by Litton Systems Canada and its consortium. The Defensive Aid Suite consisted of a two-band HARLID™-based laser detection head, a processor capable of controlling and deploying countermeasures, and a DAS touch-screen display, all integrated into a Light Armoured Vehicle. The crew was able to select the operation mode for direct fire or smoke deployment by pushing one of the pairs of buttons available at the bottom of the display. This system was successfully demonstrated in October 1999 during an international trial. This article gives an overview of the results obtained in the field as well as some of the lessons learned. It also describes laboratory and field measurements made on the Laser Warning Receiver unit itself. The results of the DAS tactical use and of the human factors evaluation illustrate its performance within typical laser threat scenarios. This work will serve as the basis for the recommendation of a future DAS demonstrator (Phase II) integrating more sensors and countermeasures.
This article describes work that has been undertaken at the Defence Research Establishment Valcartier (DREV) to integrate a number of electro-optic sensors into a modular, mission-configurable threat detection sensor suite (TDSS) demonstrator. The sensor suite is based on a series of plug-and-play detection heads networked together in the same fashion as a computer network. The architecture allows the detection capabilities to be optimized according to mission requirements. The TDSS demonstrator was developed to study different sensor configurations in order to establish the requirements for improving the protection of military platforms. It is a good example of how networking can help adapt military systems to specific requirements. The paper gives an up-to-date description of the TDSS demonstrator. To our knowledge, this is the first time that this approach has been used in the field of military detection sensors.
KEYWORDS: Sensors, Head, Signal detection, Silicon, Analog electronics, Waveguides, Detector arrays, Line width roughness, Data acquisition, Photomasks
This article describes work that has been undertaken to develop a suite demonstrator for the evaluation of the digital High Angular Resolution Laser Irradiance Detector (HARLID). The HARLID is a module designed to estimate the angle of arrival of a laser beam. It responds to radiation ranging from 400 to 1700 nm and covers a 96-degree field of view (FOV) with a resolution of +/- 1 degree in a plane. The main goal of the suite demonstrator is to characterize new generations of HARLIDs and to provide a means of evaluating new configurations in order to establish the requirements for the protection of military platforms with laser warning receivers (LWR). The suite demonstrator is a good example of how an Ethernet network can be used to share information between various sensors over a specialized network. Moreover, it allows a plurality of configurations to be built and studied without regard to the separation and structure between the detection units. To our knowledge, it is the first time that such an approach has been used in the field of LWRs.
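Sharing detections between networked heads requires agreeing on a wire format for events. The following is a hypothetical sketch (the demonstrator's actual protocol is not described here): a fixed-layout datagram carrying a sensor identifier, the estimated angle of arrival, and a timestamp, which any node on the sensor LAN could receive and fuse.

```python
import struct

# Hypothetical datagram layout for a laser-detection event, in network
# byte order: sensor id (uint8), angle of arrival in degrees (float64),
# timestamp in milliseconds (uint64).
EVENT_FMT = "!BdQ"

def pack_event(sensor_id: int, angle_deg: float, t_ms: int) -> bytes:
    """Serialize one detection event for transmission (e.g. UDP broadcast)."""
    return struct.pack(EVENT_FMT, sensor_id, angle_deg, t_ms)

def unpack_event(payload: bytes) -> tuple[int, float, int]:
    """Recover (sensor_id, angle_deg, t_ms) from a received datagram."""
    return struct.unpack(EVENT_FMT, payload)
```

Broadcasting such packets over standard Ethernet is what decouples the number and placement of detection heads from the rest of the suite, as described above.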
A new concept of surveillance system called the Wide Area Coverage Infrared Surveillance System (WACISS), based on human vision, was developed, and a first laboratory prototype was demonstrated recently. A second, more operational prototype, named the Infrared Eye, is being built and will be tested in cooperation with the NRCC Flight Research Laboratory. The Infrared Eye will use the new pixel-less quantum well infrared photodetector sensors coupled to light emitting diodes (QWIP/LED), currently being developed at the NRCC Institute for Microstructural Science under DREV sponsorship. The multiple advantages of the pixel-less QWIP/LED over conventional sensors will considerably simplify the design of the system. Like the WACISS, the Infrared Eye will integrate two cameras: the first, with a wide field of view (WFOV), will be used for detection, while the second camera, with a narrower field of view and higher resolution for identification, will be mobile within the WFOV and slaved to the operator's line of sight by means of an eye-tracking system. The images from both cameras will be fused and shown simultaneously on a standard high-resolution CRT display unit interfaced with the eye-tracking unit. The basic concepts pertaining to the project and the design constraints of this second prototype are presented.
Current infrared imaging systems used for surveillance and search-and-rescue operations possess two fields of view that may be alternately selected by the operator: a wide field on the order of 20 degrees is used for the search and detection of targets, and a narrower field of a few degrees is selected for recognition tasks. However, the degraded sensitivity and resolution of the wider field prevent it from fulfilling its function adequately. A new concept based on focal plane array detector technology is intended to correct this drawback and to improve future infrared surveillance systems for search-and-rescue operations. Simulating the properties of the human eye, the concept allows simultaneous surveillance and image acquisition in two fields of view. A wide peripheral field of view (60 degrees) with increased sensitivity but lower resolution is dedicated to search and detection. A narrower field (6 degrees), which can be steered within the wider field, allows the recognition of detected objects with an improved resolution obtained by the use of microscanning techniques. The high resolution required for the simultaneous display of both fields of view has led to the development of a new type of display, based on optical projection and superposition, better adapted to the human eye and hence optimizing the human interface. The constraints on the opto-mechanical and electronic design imposed by the mobility of the narrower field within the larger one, the microscanning mechanism, and the calibration requirements of the focal plane array are discussed, and the selected solutions are presented. The limitations of the system in its present state of development are described, and plans for future improvements are elaborated.
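The resolution trade between the two fields follows directly from the stated field-of-view figures. As a rough worked example, assuming both fields are imaged onto the same array with 256 pixels across (an assumed array width for illustration; the abstract does not specify one), the per-pixel instantaneous field of view compares as follows:

```python
import math

def ifov_mrad(fov_deg: float, pixels: int) -> float:
    """Instantaneous field of view per pixel, in milliradians,
    for a field of fov_deg degrees spread across `pixels` detectors."""
    return math.radians(fov_deg) / pixels * 1e3

wide = ifov_mrad(60.0, 256)    # ~4.09 mrad/pixel for the 60-deg search field
narrow = ifov_mrad(6.0, 256)   # ~0.41 mrad/pixel: 10x finer in the 6-deg field
# A 2x2 microscan of the narrow field would halve the effective IFOV again.
```

The factor-of-ten gap is what makes the steerable narrow field necessary for recognition while the wide field handles detection.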
Microscanning is a technique that allows the resolution of a given staring-array imager to be doubled. It consists of taking multiple images of the same scene while displacing the image over the detector plane each time by a distance equal to a fraction of the detector pitch. The technique is limited by the time required to shift the image from one point to another and by the precision of the movements. This article describes work that was done under contract for the Defense Research Establishment Valcartier as part of the Wide Area Coverage Infrared Search System (WACISS) project to develop a fast microscanning imaging device. The system includes three main sections: the microscanning head, the controller, and the power amplifier. The microscanning head is made of a lens and a two-axis microtranslation table driven by two piezoelectric translators. The controller drives a high-voltage power amplifier, which in turn drives the translators. The controller allows four operation modes: fixed position, 2 x 2, 3 x 3, and 4 x 4 microscan. It works in open loop as well as in closed loop for precise displacements. The system will be integrated into the WACISS project and will serve as an aid for the identification of detected objects.
This paper describes the results of experiments that were conducted in order to characterize the types of noise limiting the performance of an amber InSb charge injection device focal plane array (3-5 microns) of 256 by 256 pixels. This is part of the work done at the Defense Research Establishment Valcartier to develop a wide-area-coverage infrared surveillance system. The emphasis is put on the analysis of the postcorrection spatial noise that reduces the array sensitivity to weak point-source targets. This residual noise limits the improvement provided by an increased array integration time. Furthermore, the results show that a temporal low frequency noise component has a more severe effect than detector nonlinearities. However, this problem can be partly resolved with a periodic offset compensation obtained by reference image subtraction. The reference image is acquired when the blade of a flat black chopper wheel completely blocks the aperture of the camera. The chopper wheel is synchronized on the acquisition process. Results show that this compensation method can efficiently reduce the low frequency noise level and enhance point-source target detection.
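The periodic offset compensation described above can be sketched in a few lines: a reference frame acquired while the chopper blade blocks the aperture carries the same slowly drifting per-pixel offset as the scene frame, so subtracting it cancels the low-frequency component. The simulation below uses made-up noise magnitudes purely for illustration; a real correction would also need gain (two-point) calibration of the array.

```python
import numpy as np

rng = np.random.default_rng(0)

def offset_compensate(frame: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Subtract a chopper-blocked reference frame to remove the slowly
    varying per-pixel offset (low-frequency spatial noise)."""
    return frame - reference

# Simulated 256 x 256 array: a per-pixel drift common to both frames
# (std 5), plus independent temporal noise on the scene frame (std 1).
drift = rng.normal(0.0, 5.0, (256, 256))
scene = 100.0 + drift + rng.normal(0.0, 1.0, (256, 256))
reference = 20.0 + drift                 # chopper blade blocks the scene
corrected = offset_compensate(scene, reference)
# The common drift cancels; the residual spread is set by the temporal
# noise alone, restoring sensitivity to weak point sources.
```

The remaining postcorrection spatial noise in the real system is exactly the residual that this kind of subtraction cannot remove, which is why it limits the benefit of longer integration times.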
Focal plane arrays have allowed tremendous improvements in the robustness and compactness of thermal imagers, reducing both mechanical and optical requirements. However, they will always be limited by the pixel size, the fill factor, and the sampling theorem. Compared to older single-detector scanning systems, focal plane arrays can only reproduce half the frequencies scanning systems do for a given instantaneous field of view. To overcome this limitation, microscanning appears to be a winning approach. Microscanning can be seen as an oversampling process: a series of images representing the same scene are taken while displacing the image over the array each time by a fraction of the detector pitch. The oversampled image is built by interlacing all the pixels from all the images in both directions. It can be shown that microscanning can bring the resolution to the same level as that of a standard scanning system. Furthermore, by characterizing the process, one can compensate for it and bring the resolution to the level of a microdisplacement. This article describes work that has been undertaken at the Defense Research Establishment Valcartier to evaluate the requirements for the microscanning process and to determine the gains that can be obtained by using this technique in a surveillance application.
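The interlacing step described above can be sketched directly. For a 2 x 2 microscan, four frames taken at half-pitch shifts of (0,0), (0,½), (½,0), and (½,½) of the detector pitch are woven together into one image with twice the sampling density in each direction (a minimal sketch; real systems also register and calibrate the frames before interlacing):

```python
import numpy as np

def interlace_2x2(frames: list[np.ndarray]) -> np.ndarray:
    """Build a 2x oversampled image from four equally sized frames taken
    at half-pitch shifts (0,0), (0,1/2), (1/2,0), (1/2,1/2), given in
    that order, by interlacing their pixels in both directions."""
    f00, f01, f10, f11 = frames
    h, w = f00.shape
    out = np.empty((2 * h, 2 * w), dtype=f00.dtype)
    out[0::2, 0::2] = f00   # unshifted samples
    out[0::2, 1::2] = f01   # half-pitch shift in x
    out[1::2, 0::2] = f10   # half-pitch shift in y
    out[1::2, 1::2] = f11   # half-pitch shift in both
    return out
```

The output samples the scene at twice the detector pitch's spatial frequency, which is the sense in which microscanning recovers the frequencies a staring array otherwise loses to the sampling theorem.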