Autonomous Feature And Star Tracking (AFAST), an essential technology for building autonomous spacecraft that explore solar system bodies, is described. The architecture and processing requirements of the systems comprising AFAST are presented for probable mission scenarios. The focus is on celestial-scene interpretation, the implications of that interpretation for AFAST systems, and the status of AFAST technology.
The shapes of asteroids are at best irregular. For certain target opportunities, however, a complete characterization of the shape is often unnecessary for the purpose of mosaicking. For slowly spinning objects, a simple bounding rectangle is sufficient. The eigenvectors of the scatter matrix of an object's boundary points can be used to determine the orientation of the bounding rectangle; physically, these eigenvectors correspond to the directions about which the 2D projection of the object has maximum and minimum moments of inertia. An optimal mosaic size can then be determined from the aspect ratio of the bounding rectangle, and the size of the rectangle can be used to help determine the starting mosaicking time. In a simple asteroid flyby scenario in which the spacecraft travels along a linear trajectory at constant speed, the apparent size of the asteroid can be parameterized in closed form. The parameter estimation can be solved by a least-squares fit using size information derived from images taken while the angular diameter of the asteroid is smaller than the camera's field of view.
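The bounding-rectangle construction can be sketched as follows: the eigenvectors of the 2×2 scatter matrix of the boundary points give the rectangle's axes, and projecting the points onto those axes gives its extents and aspect ratio. This is a minimal illustration of the idea, not the flight implementation:

```python
import numpy as np

def bounding_rectangle(boundary_pts):
    """Orientation and size of the oriented bounding rectangle of a
    2D point set, from the eigenvectors of the scatter matrix.

    The eigenvectors are the axes of maximum and minimum moment of
    inertia of the 2D projection, as described in the abstract.
    """
    pts = np.asarray(boundary_pts, dtype=float)
    centered = pts - pts.mean(axis=0)
    scatter = centered.T @ centered            # 2x2 scatter matrix
    _, eigvecs = np.linalg.eigh(scatter)       # ascending eigenvalues
    # Project the points onto the principal axes; the extents along
    # each axis give the rectangle's dimensions.
    proj = centered @ eigvecs
    size = proj.max(axis=0) - proj.min(axis=0)
    major_axis = eigvecs[:, 1]                 # largest-eigenvalue axis
    aspect_ratio = size[1] / size[0] if size[0] > 0 else np.inf
    return major_axis, size, aspect_ratio
```

The aspect ratio returned here is what would drive the mosaic-size choice described above.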
Developing image acquisition requirements is a major part of the Autonomous Feature and Star Tracking (AFAST) project's work in advancing sensor technologies for planetary exploration. Sensing and detecting the desired planetary features also involves topographic and photometric characterization of celestial bodies. The photometric study can inform the design of the optics, focal plane array, and processing used for feature extraction and tracking. In this paper, we present results from our preliminary study and implementation of photometric modeling on AFAST's SGI-based 3D graphics testbed. Specifically, a computer-generated topographic model of Phobos is used, since the photometry of Phobos has been studied in great detail. The derived photometric function is mapped onto the computer-generated topographic model. Using the photometric model and specific camera parameters, an accurate CCD pixel response to target irradiance can then be calculated. This pixel information can be used by the feature tracking algorithm, and the sensitivity to signal-to-noise ratio can then be determined.
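As a rough illustration of the last step, the conversion from scene radiance to collected photoelectrons per pixel can be sketched with the standard camera-equation approximation. All parameter values below are placeholders, not the actual AFAST camera values:

```python
import math

# Illustrative (assumed) camera parameters -- placeholders only.
F_NUMBER = 2.0           # focal ratio
PIXEL_PITCH = 12e-6      # pixel size [m]
QE = 0.4                 # quantum efficiency [e-/photon]
T_INT = 0.1              # integration time [s]
WAVELENGTH = 550e-9      # reference wavelength [m]
H = 6.626e-34            # Planck constant [J s]
C = 2.998e8              # speed of light [m/s]

def pixel_signal(scene_radiance):
    """Photoelectrons collected per pixel from a scene radiance
    [W m^-2 sr^-1], using the small-angle camera equation."""
    # Irradiance at the focal plane from the camera equation.
    e_fp = math.pi * scene_radiance / (4.0 * F_NUMBER ** 2)
    power = e_fp * PIXEL_PITCH ** 2        # watts falling on one pixel
    photon_energy = H * C / WAVELENGTH     # joules per photon
    photons = power * T_INT / photon_energy
    return QE * photons                    # collected photoelectrons
```

Feeding a photometric-model radiance through such a chain is what yields the pixel response, and hence the signal-to-noise ratio, referred to above.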
A particular aspect of autonomous target tracking for planetary bodies, the detection of `distinct' features appearing in the scene, is addressed. Using gray-level co-occurrence (GLC) matrices, object detection is performed by applying measures (metrics) defined on the GLC matrices to different regions of the image plane. The decision logic for detecting `unique' features can then be implemented using a supervised or unsupervised parametric method on a set of GLC-metric measurements. Detailed algorithms and test results based on Voyager image data are presented.
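A minimal sketch of the GLC-matrix computation, with two common metrics (contrast and energy) defined on it; the paper's actual metric set and decision logic are not reproduced here:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for pixel offset (dx, dy),
    normalized to joint probabilities."""
    img = np.asarray(img)
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """Weighted sum of squared gray-level differences."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

def energy(p):
    """Sum of squared probabilities (uniformity)."""
    return float((p ** 2).sum())
```

Evaluating such metrics over sliding regions of the image plane, then thresholding or classifying the resulting measurement vectors, is the general scheme the abstract describes.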
A feature detection and mapping system designed to support planetary flyby operations is described. The features identified by the system are local maxima of a difference image generated by convolving sensor images with a bank of Gabor filters at different orientations and scales. The feature points found by this method are fairly stable under different viewing conditions when tested on actual planetary images from previous flybys. A feature map constructed from these points is used to model the object's surface and to provide error estimates of the sensor's attitude with respect to the object. Flyby simulations using the feature detection method, with guidance control information derived from the generated map, perform competently on both spherical and elongated objects.
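The detection pipeline described above, Gabor filtering followed by local-maxima selection, can be sketched as follows, using an even, real-valued Gabor kernel and a brute-force convolution; the filter parameters are illustrative, not those of the paper's filter bank:

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real (even) Gabor kernel: Gaussian-windowed cosine grating."""
    r = size // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    env = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    k = env * np.cos(2.0 * np.pi * xr / wavelength)
    return k - k.mean()        # zero-mean: flat regions respond ~0

def filter_response(img, kernels):
    """Max absolute response over the bank (a 'difference image')."""
    h, w = img.shape
    r = kernels[0].shape[0] // 2
    out = np.zeros((h - 2 * r, w - 2 * r))
    for k in kernels:
        resp = np.abs(np.array(
            [[np.sum(img[i:i + 2 * r + 1, j:j + 2 * r + 1] * k)
              for j in range(w - 2 * r)] for i in range(h - 2 * r)]))
        out = np.maximum(out, resp)
    return out

def local_maxima(resp, thresh):
    """Feature points: pixels above thresh that strictly exceed
    their 8 neighbours."""
    pts = []
    for i in range(1, resp.shape[0] - 1):
        for j in range(1, resp.shape[1] - 1):
            patch = resp[i - 1:i + 2, j - 1:j + 2]
            if (resp[i, j] >= thresh and resp[i, j] == patch.max()
                    and np.count_nonzero(patch == patch.max()) == 1):
                pts.append((i, j))
    return pts
```

Tracking stability then comes from re-detecting the same maxima across frames, as the abstract reports for real flyby imagery.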
There is increasing emphasis on onboard autonomy in the design of future spacecraft. Image-based, closed-loop tracking and pointing, developed as part of the Autonomous Feature And Star Tracking project at the Jet Propulsion Laboratory, has emerged as one of the technology areas essential to realizing autonomous spacecraft. In this paper, we present an overview of our ongoing efforts to develop intelligent, onboard processing technology that will make it possible to realize such spacecraft. A mission scenario, a planetary small-body flyby, is used to illustrate the autonomous tracking/pointing technology addressed in the research.
A star tracker for a mission to the planet Pluto is described. Low mass, low power, and high performance are required at the lowest possible cost. These goals are achieved for the FY94 baseline Pluto mission by a modified, commercially available star tracker, the HD-1003 from Hughes Danbury Optical Systems. The Pluto mission, the spacecraft design, the star tracker, and its performance are discussed.
The Clementine mission provided the first-ever complete, systematic surface mapping of the moon from the ultraviolet to the near-infrared region. More than 1.7 million images of the moon, earth, and space were returned from this mission. Two star tracker stellar compasses (star tracker camera plus stellar compass software) were included on the spacecraft, serving the primary function of providing angle updates to the guidance and navigation system. The cameras served a secondary function by providing a wide-field-of-view imaging capability for lunar horizon glow and other dark-side imaging data. This 290 g camera, using a 576 × 384 FPA and a 17 mm entrance pupil, detected and centroided stars as dim as, and dimmer than, visual magnitude 4.5, providing rms pointing accuracy better than 100 µrad in pitch and yaw and 450 µrad in roll. A description of this lightweight, low-power star tracker camera, along with a summary of lessons learned, is presented. Design goals and preliminary on-orbit performance estimates are addressed in terms of meeting the mission's primary objective of flight qualifying the sensors for future Department of Defense flights.
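The centroiding step such a camera performs can be illustrated with the standard intensity-weighted (center-of-mass) estimator applied to a small window around a detected star; this is a generic sketch, not the Clementine flight algorithm:

```python
import numpy as np

def centroid(window, background=0.0):
    """Intensity-weighted sub-pixel centroid (row, col) of a star
    image window, after background subtraction."""
    w = np.asarray(window, dtype=float) - background
    w = np.clip(w, 0.0, None)        # ignore below-background pixels
    total = w.sum()
    ys, xs = np.indices(w.shape)
    return (ys * w).sum() / total, (xs * w).sum() / total
```

Sub-pixel centroids of several stars, matched against a catalog, are what let a compact camera like this reach the ~100 µrad pointing accuracies quoted above.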
The Ball Aerospace Systems Division CT-631 Star Tracker is a miniaturized version of the previous CT-601/621 units; volume, power, and weight have all been reduced. The fundamental algorithms for driving a CCD and processing its output are software driven. This architecture lends itself to other applications, one of which is a star scanner. The CT-632 Star Scanner is a CT-631 Star Tracker modified to operate on a platform spinning at rates at least two orders of magnitude greater than nominal tracker rates. Time delay integration is used to accumulate the star image signal as it moves across the CCD. The image is processed by standard star tracker algorithms to determine star centroids and intensities. The output data, continually updated as stars leave the field of view, consist of X and Y angles and latency times for the sixteen brightest stars, plus diagnostic data. Commands range from full reprogrammability to simple mode commands. Sensitivity ranges from 0 Mi to 3.8 Mi.
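Time delay integration can be illustrated as follows: each frame period the accumulated charge is clocked one row in the direction of scene motion before the new exposure is added, so a star drifting across the CCD builds up signal in a single summed image. This is a simulation of the principle only, not the CT-632 electronics:

```python
import numpy as np

def tdi_readout(frames):
    """Simulate time-delay integration over a sequence of exposures.

    Each step, the accumulated charge is shifted one row (matching
    the scene motion) and the new exposure is added, so a star moving
    one row per frame integrates coherently."""
    acc = np.zeros_like(frames[0], dtype=float)
    for f in frames:
        acc = np.roll(acc, 1, axis=0)   # clock charge one row down
        acc[0, :] = 0.0                 # fresh row enters at the top
        acc += f
    return acc
```

A star whose image crosses N rows at the clocking rate thus accumulates roughly N frames' worth of signal, which is what lets the scanner keep star-tracker sensitivity on a fast-spinning platform.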
Designers of remote sensing spacecraft, including platforms for Mission to Planet Earth and Small Satellites, are driven to provide spacecraft designs that maximize data return and minimize hardware and operating costs. The attitude determination subsystems of these spacecraft must likewise provide maximum capability and versatility at an affordable price. Hughes Danbury Optical Systems has developed the Model HD-1003 Miniature Star Tracker, which combines high accuracy, high reliability, and growth margin for `all-stellar' capability in a compact, modular design that meets these spacecraft needs at a cost competitive with horizon sensors and digital fine sun sensors. Begun in 1991, our HD-1003 development program has now entered the hardware qualification phase. This paper acquaints spacecraft designers with the design and performance capabilities of the HD-1003 tracker. We highlight the tracker's unique features, which include: very small size (191 cubic inches, less lightshade); low weight (less than 8 lb); narrow and wide field-of-view lens options; multi-star tracking (six stars simultaneously); 6 arcsec (rms) accuracy; and growth margin for `all-stellar' attitude reference.
New developments in image sensors and optical materials have opened the door to dramatic mass and power reductions in celestial tracker design. The rapid development of active pixel sensors (APS) has provided a new detector choice offering high on-chip integration of support circuitry at reduced power consumption. Silicon-carbide optics are one of the new developments in low-mass optical components. We describe the celestial tracker needs of an Autonomous Feature and Star Tracking (AFAST) system designed for autonomous spacecraft control. Details of a low-mass celestial tracker based on a low-power APS array and optimized for an AFAST system are discussed.
Recent advances in the application of panoramic imaging technology have resulted in the design of a single sensor attitude determination system, known as the panoramic annular lens attitude determination system (PALADS). The primary component of the attitude sensor is the panoramic annular lens (PAL), which provides a full hemispherical field of view surrounding the spacecraft. This allows for the simultaneous detection of multiple attitude reference sources, such as the earth and the sun or moon. The position of each reference source in the image plane translates into a viewing angle between the sensor's optical axis and the reference source. The data points associated with the reference sources are extracted from the image plane using digital image processing techniques. The X, Y, Z positional data of the spacecraft must be provided as inputs to PALADS. The extraction of two or more reference points from the image plane leads to the successful determination of three-axis spacecraft attitude.
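The translation from image-plane position to viewing angle can be sketched for an idealized, radially symmetric (f-theta) projection; the real PAL mapping would come from a measured calibration, and `pixels_per_radian`, `cx`, and `cy` below are assumed calibration constants, not PALADS values:

```python
import math

def viewing_angle(u, v, cx, cy, pixels_per_radian):
    """Angle [rad] between the optical axis and a reference source
    seen at image-plane pixel (u, v), for an idealized radially
    symmetric (f-theta) projection centered at (cx, cy)."""
    r = math.hypot(u - cx, v - cy)     # radial distance from boresight
    return r / pixels_per_radian

def azimuth(u, v, cx, cy):
    """Azimuth [rad] of the source about the optical axis."""
    return math.atan2(v - cy, u - cx)
```

Extracting (viewing angle, azimuth) pairs for two or more reference sources, then solving for the rotation that maps their known inertial directions onto the observed ones, is the attitude-determination scheme the abstract outlines.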
The Video Guidance Sensor (VGS) has been developed by NASA's Marshall Space Flight Center (MSFC) to provide the capability for a spacecraft to find and track a target vehicle and determine the relative positions and attitudes between the sensor and the target. The sensor uses laser diodes to illuminate the target, a CCD-based camera to sense the target, and a frame-grabber and processor to convert the video information into relative range, azimuth, elevation, roll, pitch, and yaw. The sensor was first built in 1988 and used in successful automated docking experiments using the air-bearing spacecraft simulator in MSFC's Flight Robotics Laboratory. Since then, many changes and improvements have been made, based on the results of testing. In addition to its use for space vehicles, the system has been adapted for commercial application. The current design is being built as a prototype to prepare for flight testing on the Space Shuttle. Some of the changes from the original system were designed to improve its noise rejection; other changes improved its overall operating range, and still others improved its bandwidth. The current VGS is designed to operate from 110 meters down to 0.5 meters and to output the relative position and attitude data at 5 Hz. The system will be able to operate under any orbital lighting conditions, from full solar illumination to complete darkness. The VGS is also designed to be used with more than one target and sensor to allow for redundant configurations. This new prototype should be completed and undergoing open- and closed-loop testing after March 1995.
The solid-state Hemispherical Resonator Gyroscope (HRG) uniquely offers the benefits of small size, extremely long life, and ultrahigh reliability for spacecraft pointing and control applications. The HRG also offers precision performance which can be scaled through control loop and electronics design tradeoffs. Delco Systems Operations has developed the HRG-based Space Inertial Reference Unit (SIRU), which has been designed to maximize the benefits of small size and high reliability and to meet the medium-grade performance requirements of most spacecraft. The precision performance required for many imaging spacecraft has been achieved through selected modifications to the SIRU design. The result is a precision pointing IRU which maintains the inherent small-size, high-reliability characteristics offered by the solid-state HRG. These attributes enable the weight reduction and life extension of precision imaging spacecraft. Modifications to the SIRU design for precision pointing applications have been implemented and characterized. These modifications and test results are described herein.
Star identification algorithms are difficult to develop, require much parameter adjustment, and are hard to evaluate for different sensor configurations or noise environments. We describe a software platform, star_X, developed for the design and testing of star identification algorithms. It provides a user-friendly window environment, standardized evaluation routines, and a flexible sensor specification format to assist researchers in developing and evaluating star identification algorithms. We describe the software package and compare some of the existing star identification algorithms it currently supports.
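One family of algorithms such a platform would support is pair-angle matching, in which observed inter-star angular separations are compared against catalog pair separations. A bare-bones sketch of the core matching step (not one of star_X's actual routines) might look like:

```python
import itertools
import math

def angular_sep(v1, v2):
    """Angle [rad] between two unit vectors."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(v1, v2))))
    return math.acos(dot)

def match_pairs(observed, catalog, tol):
    """Candidate matches of observed star pairs to catalog pairs,
    keyed purely on angular separation within tolerance tol [rad].

    Returns ((obs_i, obs_j), (cat_a, cat_b)) index pairs; a full
    identification scheme would prune these with further checks."""
    matches = []
    for (i, oi), (j, oj) in itertools.combinations(enumerate(observed), 2):
        sep = angular_sep(oi, oj)
        for (a, ca), (b, cb) in itertools.combinations(enumerate(catalog), 2):
            if abs(angular_sep(ca, cb) - sep) < tol:
                matches.append(((i, j), (a, b)))
    return matches
```

Evaluating many such algorithms under varying sensor noise and field-of-view models is exactly the kind of standardized testing the platform is meant to support.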
The integrated Global Positioning System (GPS)/Inertial Navigation System (INS) is a cost-effective way of providing an accurate and reliable navigation system for civil and military aviation. These systems also provide low-cost solutions for mid-course navigation and guidance of medium- and long-range weapon systems. In this paper an error model is developed which can be used for GPS/INS filter mechanization. The model has a linear and a non-linear part; the latter consists of a quadratic function of the system states and may be approximated by a noise term, thereby allowing the use of the well-known Kalman filter (KF) design technique. A KF algorithm suitable for this application is also developed, and computer simulation results for a typical aerospace application are given.
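With the quadratic term absorbed into the process-noise covariance Q, the mechanization reduces to the standard linear Kalman predict/update cycle. A generic sketch (not the paper's specific state definitions):

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    """One predict/update cycle of the linear Kalman filter.

    x, P  -- prior state estimate and covariance
    F, Q  -- state transition and process-noise covariance (with the
             non-linear error-model part approximated inside Q)
    H, R  -- measurement matrix and measurement-noise covariance
    z     -- measurement vector
    """
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

In a GPS/INS mechanization, x would hold the INS error states and z the GPS-minus-INS measurement differences.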
In recent years, parallel distributed processing has provided a new paradigm for algorithms, such as those in missile guidance, which require a high degree of computational efficiency as well as reliability and smaller hardware. A problem of particular interest in the guidance literature is the closed-loop optimal solution that can be achieved on board the missile. Furthermore, a desirable guidance scheme should be robust to the low signal-to-noise conditions that generally arise in long-range applications. In this paper we present a neural network-based guidance scheme which provides real-time optimal control on board the missile with the inclusion of noise in the line-of-sight (LOS) angular rate data. The neural network is trained in an off-line session using optimal solutions obtained from optimal control software, resulting in a real-time closed-loop guidance method. The performance of the proposed scheme is then evaluated for different levels of SNR of the LOS angular rate in a tail-chase engagement. For comparison, similar tests were conducted for the currently used closed-loop proportional navigation method and for the potentially available technique of iterative optimal open-loop control, with and without the presence of noise in the LOS angular rate. Although we did not include noise in the missile/target dynamical model, the results indicate that the neural network-based scheme is more robust to low signal-to-noise situations than traditional proportional navigation methods. This superiority is due, among other things, to the elimination of some of the restrictive, and in many cases unrealistic, assumptions made in the derivation of most guidance laws currently in use, such as unbounded control, simplified dynamics and/or aerodynamics, and non-maneuvering targets.
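For reference, the proportional navigation baseline the scheme is compared against has a one-line command law, and a simple first-order low-pass filter shows the kind of LOS-rate noise suppression at issue. Both are illustrative only; the paper's engagement model and noise levels are not reproduced:

```python
def pro_nav_accel(N, closing_velocity, los_rate):
    """Proportional navigation: commanded lateral acceleration
    proportional to the line-of-sight angular rate.
    N is the (dimensionless) navigation constant."""
    return N * closing_velocity * los_rate

def smoothed_los_rate(samples, alpha=0.3):
    """First-order low-pass filter on noisy LOS-rate samples --
    a much simpler stand-in for the noise tolerance a trained
    network provides."""
    est = samples[0]
    for s in samples[1:]:
        est = alpha * s + (1.0 - alpha) * est
    return est
```

Because the PN command is directly proportional to the measured LOS rate, measurement noise passes straight into the acceleration command, which is one reason low-SNR conditions degrade it.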
The automated rendezvous and docking of spacecraft requires knowledge of the relative six degrees of freedom (6 DOF) between the chase and target spacecraft. A sensor system for estimation of the 6 DOF state is being developed at the Marshall Space Flight Center for the Automated Rendezvous and Capture Project. This sensor employs a charge-coupled device camera mounted on the chase spacecraft to image a reflective target located on the target spacecraft. The target is illuminated using an array of laser diodes, and the resulting camera image is processed to produce an estimate of the relative 6 DOF state. This paper contains a brief description of the sensor system, known as the Video Guidance Sensor, and describes the algorithms necessary to determine the 6 DOF state from the camera image. The solution begins by determining the range to each target reflector. This information is used to determine spacecraft position and to derive the vectors used for attitude determination. Two methods of attitude determination using vector measurements are described, for a 3-spot and a 5-spot target.
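For the two-vector case, the classical TRIAD construction gives the attitude in closed form from one trusted and one secondary vector pair. The paper's 3-spot and 5-spot methods are not reproduced here, but TRIAD illustrates the principle of attitude determination from vector measurements:

```python
import numpy as np

def triad(b1, b2, r1, r2):
    """TRIAD attitude solution: rotation matrix A with A @ r ~= b,
    from two measured body-frame vectors (b1, b2) and their known
    reference-frame counterparts (r1, r2).

    b1/r1 is the more trusted pair: it is matched exactly, while
    b2/r2 only fixes the rotation about the b1 axis."""
    def frame(v1, v2):
        t1 = v1 / np.linalg.norm(v1)
        t2 = np.cross(v1, v2)
        t2 = t2 / np.linalg.norm(t2)
        return np.column_stack((t1, t2, np.cross(t1, t2)))
    return frame(b1, b2) @ frame(r1, r2).T
```

With more than two spots, a least-squares solution (e.g. of Wahba's problem) would typically replace this construction, which is presumably where a 5-spot target improves on a 3-spot one.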