The SKIPSM paradigm gives very fast execution of binary morphology operations with large arbitrary SEs. Hardware-based applications using lookup tables to implement the FSMs have been in use for almost a decade. More recently, software-based applications have benefited from comparable speed increases. This paper provides speed comparisons between software implementations using lookup tables and those using direct implementations of the FSMs, for a range of SE sizes and shapes.
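For readers unfamiliar with the approach, the sketch below shows the simplest SKIPSM-style machine: a run-length counter acting as a row FSM, followed by the same counter acting as a column FSM, which together erode the image with a rectangular SE. This is an illustrative Python sketch under the assumption of a w x h rectangular SE (the paper covers large arbitrary SEs and lookup-table implementations), and the result is anchored at the lower-right corner of the SE.

```python
import numpy as np

def erode_rect_fsm(img, w, h):
    """Erosion by a w x h rectangular SE using two 1-D state machines.

    A sketch of the separated row/column state-machine idea; the state is
    a simple run-length counter, the most basic SKIPSM-style machine.
    """
    img = (np.asarray(img) > 0).astype(np.uint8)
    rows, cols = img.shape

    # Row machine: state counts consecutive foreground pixels along each row.
    row_out = np.zeros_like(img)
    for r in range(rows):
        state = 0
        for c in range(cols):
            state = state + 1 if img[r, c] else 0
            row_out[r, c] = 1 if state >= w else 0

    # Column machine: same counter applied down each column of the row result.
    out = np.zeros_like(img)
    for c in range(cols):
        state = 0
        for r in range(rows):
            state = state + 1 if row_out[r, c] else 0
            out[r, c] = 1 if state >= h else 0
    return out
```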
The SKIPSM paradigm offers fast execution of a very wide range of binary, grey-scale, 3D, and color image-processing applications. In this paper the finite-state-machine approach is applied to one of the 'classical' problems of binary image processing: connected-component analysis. Execution-time results are presented and compared, for several examples, with the execution times of the very efficient conventional method based on analysis of run-length-encoded data.
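By way of contrast, a compact two-pass labeling routine with union-find is sketched below; it is only a generic reference implementation of connected-component analysis, not the paper's FSM formulation nor the run-length-encoded method it is benchmarked against.

```python
import numpy as np

def label_components(binary):
    """Two-pass connected-component labeling with union-find (4-connectivity)."""
    img = (np.asarray(binary) > 0)
    labels = np.zeros(img.shape, dtype=np.int32)
    parent = [0]                       # index 0 reserved for background

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    next_label = 1
    rows, cols = img.shape
    # First pass: assign provisional labels and record equivalences.
    for r in range(rows):
        for c in range(cols):
            if not img[r, c]:
                continue
            up = labels[r - 1, c] if r > 0 else 0
            left = labels[r, c - 1] if c > 0 else 0
            if up == 0 and left == 0:
                parent.append(next_label)
                labels[r, c] = next_label
                next_label += 1
            else:
                neighbors = [v for v in (up, left) if v > 0]
                labels[r, c] = min(neighbors)
                if len(neighbors) == 2:
                    union(up, left)

    # Second pass: replace provisional labels by their set representatives.
    for r in range(rows):
        for c in range(cols):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels
```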
A knowledge-based system is proposed that will automate the segmentation process for embryonic organogenesis image data. Different methodologies will be incorporated that enable the accurate segmentation of image data based on the 'fingerprint' pattern of a particular cell staining. A novel technique integrates perceptual organization concepts to overcome the shortcomings of the medial axis transform at the growing tips of branch structures. The system will then recover the underlying 3D object-space axes of growth for 2D and 3D measurements. In addition, perceptual organization groupings are formed by coalescing multiple lines in an image into one structure. These groupings can be described by junction-analysis nomenclature and appear in different formations in different organs and at specific times in development, yielding significant biological meaning.
Convex hulls have been extensively studied and have been shown to have many useful applications in disciplines such as biomedical imaging, CAD/CAM and computer graphics. A convex hull of a point set, S, is the union of all line segments from p to q where p and q are elements of S. Edelsbrunner et al. have extended the convex hull, which has linear constraints, to the alpha hull, which is circularly constrained. Specifically, the alpha hull is the union of all circular arcs of radius 1/alpha joining p and q where p and q are endpoints of a circular arc. This paper extends the concept of the convex and alpha hulls to allow for extensions to curves of arbitrary complexity. Whereas current definitions assume that the curve connecting p and q is of finite length, we broaden the definitions to include infinite line segments between those points, thus forming the infinite hull. Similar extensions exist for circular and elliptical hulls as well as general curves. It is shown that the infinite hull counterparts can be applied to the characterization of a digital curve in linear time.
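To fix the notation for S, p and q, a standard convex hull routine (Andrew's monotone chain) is sketched below; the infinite, alpha and elliptical hulls discussed in the paper generalize this linearly constrained construction.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull; O(n log n) after sorting.

    `points` is a list of (x, y) tuples; returns the hull vertices in
    counter-clockwise order.  Shown only to make the point-set / hull
    terminology concrete.
    """
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]
```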
Many routine pick-and-place tasks require a combinational software analysis approach in that a particular object must first be recognized before orienting a robot gripper or other tool to pick it up. The first step requires the segmentation of pattern features from the image in order to make the classification. The second step concerns the determination of the position and orientation of the classified object. We present an approach to this two-stage problem that utilizes only the Freeman chain code of the object outline, rather than the image itself. We show that, given the chain code, it is possible to segment a number of specific geometrical pattern features that can be used to identify the object. From the same code, it is further demonstrated that the object location can be specified by computing its center of mass and minor axis of inertia. It is thus possible to identify and locate entities within an image given only their chain codes. The algorithms are demonstrated on a variety of simple shapes. The method is at present restricted to solid shapes, but could be extended to include objects of greater complexity.
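As an illustration of how object location can be obtained from the chain code alone, the sketch below reconstructs the boundary vertices from an 8-direction Freeman chain code and computes the enclosed area and center of mass via the shoelace formula; the feature-segmentation and axis-of-inertia steps of the paper are not reproduced here.

```python
# Displacements for the 8-direction Freeman chain code (0 = east, numbered
# counter-clockwise).
FREEMAN = {0: (1, 0), 1: (1, 1), 2: (0, 1), 3: (-1, 1),
           4: (-1, 0), 5: (-1, -1), 6: (0, -1), 7: (1, -1)}

def centroid_from_chain(start, code):
    """Area and centroid of a closed boundary given as a Freeman chain code.

    Reconstructs the vertices, then applies the shoelace formula; a sketch of
    the idea that object location follows from the chain code alone.
    """
    x, y = start
    verts = [(x, y)]
    for d in code:
        dx, dy = FREEMAN[d]
        x, y = x + dx, y + dy
        verts.append((x, y))
    assert verts[0] == verts[-1], "chain code must describe a closed contour"

    a = cx = cy = 0.0
    for (x0, y0), (x1, y1) in zip(verts[:-1], verts[1:]):
        w = x0 * y1 - x1 * y0
        a += w
        cx += (x0 + x1) * w
        cy += (y0 + y1) * w
    a *= 0.5
    return abs(a), (cx / (6.0 * a), cy / (6.0 * a))
```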
This paper provides an overview of the state of the art of digital shearography. Some detailed progress and application examples are presented.
In this work, a new approach to learning in parallel-hierarchical (PH) networks, applied to real-time image sequences of extended laser paths, is proposed. A PH network with learning abilities can be synthesized by following the general structural organization of artificial neural networks: input layer, hidden layer, output layer. The first network level serves as the input layer, the following levels serve as the hidden layer, and the last level serves as the output layer, as is traditional in artificial neural networks. Using the main PH network feature, which determines the length of the network algorithm, the number of hidden-layer elements can be determined; in this way the procedure for obtaining the number of hidden-layer elements is formalized.
A real-time face tracking and facial information acquisition system developed for multimodal man-machine communication is presented in this paper. The system can track a human face and report mouth position and other facial information in real time. A stochastic model characterizing the color distribution of human skin is used to transform the hue image of the HSI color space into a skin-color probability distribution image. A modified mean shift algorithm is then applied to find the mode of the probability distribution, which estimates the face window. To make the system more robust, 1D projections of the intensity image in potential face windows are proposed to verify and adjust the face location.
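The sketch below illustrates the plain mean-shift window update on a skin-color probability image; it assumes a fixed-size rectangular window and is not the modified algorithm or the 1D-projection verification step described in the paper.

```python
import numpy as np

def mean_shift_window(prob, window, max_iter=20, eps=1.0):
    """Shift a rectangular window to the mode of a probability image.

    `prob` is a 2-D array of skin-color probabilities; `window` is
    (row, col, height, width).  Plain mean shift, for illustration only.
    """
    r, c, h, w = window
    for _ in range(max_iter):
        patch = prob[r:r + h, c:c + w]
        total = patch.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
        # Probability-weighted centroid relative to the window centre.
        dr = (ys * patch).sum() / total - (patch.shape[0] - 1) / 2.0
        dc = (xs * patch).sum() / total - (patch.shape[1] - 1) / 2.0
        r = int(round(np.clip(r + dr, 0, prob.shape[0] - h)))
        c = int(round(np.clip(c + dc, 0, prob.shape[1] - w)))
        if abs(dr) < eps and abs(dc) < eps:
            break
    return r, c, h, w
```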
This paper outlines a new technique for processing structural color texture images to obtain textural units at multiple structural layers. The multi-layered color texture unit segmentation and feature abstraction allows for efficient structural texture classification and synthesis. In the three-phase process, the color texture is first quantized and transformed into a grey-scale texture image; an efficient procedure for color feature clustering using PAM is introduced. In the second phase, the global statistical features of the texture are used to determine the texture unit size and the spatial relationship between the texture units in a periodic texture pattern. Finally, for texture units with internal structures, a multi-layered segmenter is developed to separate these internal structures into different layers. Feature extraction and synthesis can be conducted at these multiple layers.
A real-time image processing and control interface for remote operation of a microscope is presented in this paper. The system has achieved real-time color image display for 640 X 480 pixel images. Multi-resolution image representation can be provided for efficient transmission through the network. Through the control interface the computer can communicate with the programmable microscope via the RS232 serial ports. By choosing one of three scanning patterns, a sequence of images can be saved as BMP or PGM files to record information on an entire microscope slide. The system will be used by medical and graduate students at the University of Medicine and Dentistry of New Jersey for distance learning. It can be used in many network-based telepathology applications.
Eye-trackers are emerging computer-input devices. This paper describes an experiment to measure the performance of an eye-tracker. Detailed analysis of the system and experimental data show that for a typical 'move and select' task, the eye-tracker is two times faster than traditional computer input pointing devices like the mouse. Also, the cognitive start time for the eye-tracker is about 100-200 milliseconds less than that of other input pointing devices.
The wood industry is confronted with problems arising from the raw material itself. Because wood pieces present many growth and sawing characteristics, with great variability, they require unitary inspection. The grading rules recommended by the standards are based on aesthetic and/or physical criteria depending on the intended use, but they are often applied subjectively. We focus here on mechanical grading, that is, the assignment of a class to a sawn timber according to the estimation of three physical and mechanical properties and the presence of singularities such as knots or wane. A two-sensor approach has been used for this in-line inspection: a vision device for surface analysis and a microwave device for depth measurements. One linear camera gives profiles used to calculate timber dimensions and to detect the main singularities. A microwave sensor with sixteen receptors provides an image of the wood's internal structure, from which it is possible to detect knots and to estimate the mechanical properties. Each system performs its own processing, then the results are merged. On the one hand, the fusion produces the localization of singularities through comparison (redundancy and complementarity) and the estimation of strength, stiffness and density by means of multivariate models. On the other hand, the grading rules are translated into computer form: each class is described by its boundaries so as to define which class the current timber belongs to. The paper presents the methodology from the raw signals to the class decision.
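The translation of grading rules into class boundaries can be pictured as in the sketch below; the grade names, properties and thresholds are hypothetical placeholders, not values taken from any standard.

```python
# Hypothetical class boundaries (strength in MPa, stiffness in GPa,
# density in kg/m^3); real grading standards define their own limits.
GRADE_BOUNDS = {
    "C30": {"strength": 30.0, "stiffness": 12.0, "density": 460.0},
    "C24": {"strength": 24.0, "stiffness": 11.0, "density": 420.0},
    "C18": {"strength": 18.0, "stiffness": 9.0,  "density": 380.0},
}

def assign_grade(strength, stiffness, density):
    """Return the highest grade whose lower bounds are all satisfied."""
    for grade in ("C30", "C24", "C18"):          # best grade first
        b = GRADE_BOUNDS[grade]
        if (strength >= b["strength"] and stiffness >= b["stiffness"]
                and density >= b["density"]):
            return grade
    return "reject"
```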
The organization of the hardware and software as well as the application of a remotely operated vision system are all described. The system enables a vision engineer, working a long way from the site of a proposed installation, to experiment with a prototyping image-acquisition sub-system that is already in situ. Once suitable images have been obtained, he then designs a suitable image processing algorithm. In this way, the vision engineer is able to construct a prototype vision system that can be used to test the algorithm in a realistic operating environment, collect performance statistics and possibly refine the algorithm in the light of the experience gained. The objective of this research is to streamline the design and prototyping phase for on-line factory-floor inspection systems, so that the optimum benefit can be obtained from the efforts of a skilled vision engineer.
It is common practice to begin the design of an illumination-viewing sub-system for Machine Vision by experimentation: simply maneuvering the lamps and camera around until a high-contrast image has been obtained. There then follows the process of duplicating that lighting pattern in a rugged, well-engineered rig that will withstand the rigours of the factory floor. This requires careful measurement, to obtain the same illumination angles. Two alternative approaches are suggested in this paper. In the first, an image is created in a hemispherical mirror, so that the lights appear as bright spots. By measuring the positions of these spots, the geometry of a lighting rig can be determined very quickly and easily. In the second approach, a video camera fitted with a fish-eye lens can be used to obtain a map that is then analyzed in a similar way. A third technique is described for characterizing the lighting when a large, diffusely reflecting object is to be examined. This is also able to provide sufficient data to enable a lighting pattern to be duplicated.
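A rough illustration of the first approach: under the simplifying assumptions of distant lamps, an orthographic view straight down onto the mirror, and a known mirror radius in pixels, a highlight's position gives the lamp direction directly, since the surface normal at the highlight bisects the viewing and lighting directions. The sketch below is illustrative only and is not the calibration procedure of the paper.

```python
import math

def light_direction(spot_xy, centre_xy, mirror_radius_px):
    """Estimate a lamp's direction from its highlight in a spherical mirror.

    Assumes a distant lamp, a camera looking straight down the optical axis,
    and an orthographic view of a mirror of known radius (in pixels).
    Returns (elevation_from_vertical_deg, azimuth_deg).
    """
    dx = spot_xy[0] - centre_xy[0]
    dy = spot_xy[1] - centre_xy[1]
    d = math.hypot(dx, dy)
    if d > mirror_radius_px:
        raise ValueError("spot lies outside the mirror outline")
    # Tilt of the surface normal at the highlight.
    alpha = math.asin(d / mirror_radius_px)
    # The normal bisects the viewing and lighting directions, so the lamp
    # sits at twice the normal tilt, in the same azimuthal plane.
    elevation = math.degrees(2.0 * alpha)
    azimuth = math.degrees(math.atan2(dy, dx))
    return elevation, azimuth
```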
Robust web inspection and defect detection requires the analysis of the local structure of the texture. This is of special importance if the inspection task is extended to different web types that vary significantly, such as silk or wool cloth. We introduce an algorithm that combines local nonlinear invariant features with high discrimination capabilities and statistical classification. In addition, invariance with respect to Euclidean motion is crucial in industrial settings. Thus the features are based on the integration of nonlinear polynomials over the transformation group for which invariance is desired. This results in a feature vector for each pixel of the image that is invariant with respect to translation and rotation. Local texture variations that appear naturally in certain cloth types like wool therefore influence the feature space only partially, depending on the design of the functions. Nonlinear functions have been shown to extend the feature space compared to linear functions. This improves the discrimination power of the features with respect to other textile types. As a next step, the features are presented to a fully connected multi-layer perceptron network to classify the web defects. For network training, the error regions are manually marked on the original images and labeled according to the error classes. The images are divided into small patches for which the feature vectors are computed. To address non-error textural variations these regions are split up into several parts and trained separately. Experimental results on a database of 3200 textile images show a high separation capability of the invariant features for classification of the defects.
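The sketch below illustrates the group-integration idea: a nonlinear kernel evaluated on a pixel and an offset neighbour is averaged over sampled rotations, giving one rotation-invariant value per pixel. The kernel, radius and angular sampling are illustrative assumptions, not the paper's exact feature set.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def rotation_invariant_feature(img, radius=4.0, n_angles=16):
    """Per-pixel feature obtained by averaging a nonlinear kernel over the
    rotation group (sampled at n_angles angles).

    The kernel sqrt(I(p) * I(p + r*e(theta))) is one common choice for such
    group-integration features; it is used here only for illustration.
    """
    img = np.asarray(img, dtype=np.float64)
    rows, cols = img.shape
    yy, xx = np.mgrid[0:rows, 0:cols].astype(np.float64)
    acc = np.zeros_like(img)
    for k in range(n_angles):
        theta = 2.0 * np.pi * k / n_angles
        sy = yy + radius * np.sin(theta)
        sx = xx + radius * np.cos(theta)
        # Bilinearly sample the rotated offset position for every pixel.
        shifted = map_coordinates(img, [sy, sx], order=1, mode='reflect')
        acc += np.sqrt(np.clip(img * shifted, 0.0, None))
    return acc / n_angles
```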
The Crosshead Inspection System, CIS, utilizes machine vision technology for on-line inspection of a diesel engine component - a crosshead. The system includes three functional modules. 1) Part handling subsystem - presents parts for inspection and accepts or rejects them based on signals from the image analysis software. 2) Image acquisition hardware - optics, light sources and two video cameras collect images of inspected parts. 3) Image analysis software - analyzes the images and sends pass/fail decision signals to the handling subsystem. The CIS acquires and inspects two images of each part. The upper camera generates an image of the part's top surface, while the lower camera generates an image of the so-called 'pockets' of the lower half. Both images are acquired when a part-in-place signal is received from the handling system. The surface inspection camera and light source are positioned at opposed low angles relative to the surface. Irregularities manifest themselves as shadows on the surface image. These shadows are detected, measured and compared to user specifications. The pocket inspection detects the presence of tumbler stones. The contrast of these stones is enhanced with circularly polarized lighting and imaging. The graphical user interface of the CIS provides easy setup and debugging of the image processing algorithms. A database module collects, archives and presents part inspection statistics to the user. The inspection rate is sixty parts per minute.
A common machine vision task is to verify that a product has been properly fabricated or assembled. In this paper, a vision system is described for confirming that a type of gear has been processed properly. The main problem in this application is the relatively large depth of the gear, which results in a more complex image than the desired silhouette of the part. The resulting image shows a portion of the inner wall due to the varying magnification, because some points on this wall are closer to the lens than others. Employing telecentric optics, however, greatly simplifies the problem since only light rays parallel to the optical axis create the image, so that a good silhouette of the part is obtained. Gear teeth can then be isolated using mathematical morphology techniques to verify that the part has been properly broached.
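The morphological step might look like the sketch below, in which an opening with a disk large enough to erase the teeth leaves only the gear body, and subtraction then isolates the teeth; the structuring-element radius is an assumed, part-specific parameter.

```python
import numpy as np
from scipy import ndimage

def isolate_gear_teeth(silhouette, body_radius_px=15):
    """Isolate gear teeth from a binary silhouette by morphological opening.

    Opening with a disk that erases the teeth (but not the gear body) leaves
    the body only; subtracting it from the silhouette leaves the teeth.
    """
    silhouette = np.asarray(silhouette) > 0
    # Disk-shaped structuring element of the assumed radius.
    r = body_radius_px
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    disk = (x * x + y * y) <= r * r
    body = ndimage.binary_opening(silhouette, structure=disk)
    teeth = silhouette & ~body
    return teeth
```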
The ball grid array (BGA) chip is widely used in high-density printed circuit boards (PCBs). However, inspection of defects in the solder joints is difficult by visual or normal x-ray imaging methods, because unlike conventional packages, the solder joints of the BGA are located underneath its own package and ball-type leads. Therefore, x-ray digital tomosynthesis (DT), which forms cross-sectional images of 3D objects, is needed to image and inspect the solder joints of the BGA. In this paper, we propose a series of algorithms for inspecting the solder joints of BGAs by using x-ray cross-sectional images that are acquired from the developed DT system. BGA solder joints are examined to check the alignment between the chip and pad on a PCB, bridges, and adequate solder volume. The volume of the solder joint is represented by a gray level in the x-ray images; thus solder joints can be examined by use of the gray-level profiles of each joint. To inspect and classify various defects, a pattern classification method using a learning vector quantization neural network and a look-up table is proposed. The clusters into which a gray-level profile is classified are generated by the learning process of the network by using a number of sampled gray-level profiles. This series of developed algorithms for inspecting and classifying defects was tested on a number of BGA solder joints. The experimental results show that the proposed method yields satisfactory solutions for inspection based on x-ray cross-sectional images.
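A minimal LVQ1 loop over gray-level profile vectors is sketched below to make the classifier type concrete; the codebook sizes and learning schedule are assumptions, and the look-up table used alongside the network in the paper is not shown.

```python
import numpy as np

def train_lvq1(profiles, labels, n_classes, codebook_per_class=2,
               lr=0.05, epochs=30, seed=0):
    """Minimal LVQ1: move the winning codebook vector toward a training
    profile when its class matches, away from it otherwise."""
    rng = np.random.default_rng(seed)
    profiles = np.asarray(profiles, dtype=np.float64)
    labels = np.asarray(labels)
    book, book_lab = [], []
    for c in range(n_classes):
        idx = rng.choice(np.flatnonzero(labels == c), codebook_per_class,
                         replace=False)
        book.extend(profiles[idx])
        book_lab.extend([c] * codebook_per_class)
    book, book_lab = np.array(book), np.array(book_lab)

    for epoch in range(epochs):
        a = lr * (1.0 - epoch / epochs)          # decaying learning rate
        for x, y in zip(profiles, labels):
            w = np.argmin(((book - x) ** 2).sum(axis=1))   # nearest codebook
            sign = 1.0 if book_lab[w] == y else -1.0
            book[w] += sign * a * (x - book[w])
    return book, book_lab

def classify(profile, book, book_lab):
    """Assign the class of the nearest codebook vector."""
    return book_lab[np.argmin(((book - profile) ** 2).sum(axis=1))]
```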
In this paper, the extraction of the welded area for the automatic recognition of weld defects is investigated according to the features of weld images. Image enhancement based on self-adaptive subsection processing and edge correction of the weld is presented. Defect rectangles and end rectangles are defined for correcting the shape and size of the defects. Methods for forming the defect rectangles, a cycle of segmentation and growth, and defect-edge extraction in the end rectangle are designed to ensure the accuracy of defect size and shape. Satisfactory results were obtained in extracting the welded area, with high fidelity of the defects.
This report considers optical techniques for the automated verification of an object's identity, designed for control systems at nuclear facilities. Results of experimental research and of the development of pattern recognition techniques carried out under ISTC project number 772 are presented, with the purpose of identifying unique features of a controlled object's surface structure and the effects of its random treatment. Possibilities for industrial introduction of the developed technologies within the framework of US and Russian laboratories' lab-to-lab cooperation, including the development of up-to-date systems for nuclear material control and accounting, are examined.
Imaging systems employed in demanding industrial and military applications such as computer vision and automatic target recognition typically require real-time high-performance computing resources. While these systems have traditionally relied on proprietary architectures and custom components, recent advances in high-performance general-purpose microprocessor technology have produced an abundance of low-cost components suitable for use in high-performance computing systems.
The objective of the system is inspection of individual pieces of stemware for geometry defects and glass imperfections. Cameras view stemware from multiple angles to increase surface coverage. The inspection images are acquired at three stations. The first inspects internal glass quality, detecting defects such as chemical residue and waviness. The second inspects the rim, geometry of the stemware body and stem, and internal defects such as cracks. The third station inspects the stemware base for geometrical and internal defects. Glass defects are optically enhanced through the use of striped pattern back lighting combined with morphological processing. Geometry inspection is enhanced through the use of converging illumination at the second station, while the third station utilizes large field true telecentric imaging. Progressive scan cameras and frame grabbers capable of simultaneous image capture are used at each station. The system software comprises six modules: system manager, I/O manager, inspection module for each station, and stemware sorting and logging module. Each module is run as a separate application. Applications communicate with each other through TCP/IP sockets, and can be run in a multi-computer or single-computer setup. Currently two Windows NT workstations are used to host the system.
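The socket-based module coupling could be sketched as below; the port, message format and module roles are hypothetical placeholders rather than the system's actual protocol.

```python
import json
import socket

# Hypothetical address and message format; the real system defines its own
# inter-module protocol over TCP/IP sockets.
SORTER_ADDR = ("127.0.0.1", 5005)

def send_result(station_id, piece_id, verdict):
    """Inspection module: push one pass/fail result to the sorting module."""
    msg = json.dumps({"station": station_id, "piece": piece_id,
                      "verdict": verdict}).encode() + b"\n"
    with socket.create_connection(SORTER_ADDR, timeout=1.0) as s:
        s.sendall(msg)

def serve_sorter():
    """Sorting/logging module: accept results from the inspection modules."""
    with socket.create_server(SORTER_ADDR) as srv:
        while True:
            conn, _ = srv.accept()
            with conn, conn.makefile() as f:
                for line in f:
                    print("received:", json.loads(line))
```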
Digital cameras have been around for many years, but a new breed of consumer-market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, such as image quality, maintainable frame rates, image size and resolution, supported operating systems, and ease of software integration. This paper will briefly describe a couple of the consumer digital standards, and then discuss some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.
The application of machine vision systems to industrial manufacturing and inspection processes has motivated the development of intelligent yet flexible decision-making processes. When working with highly uniform products, most quality or inspection decisions can be based on straightforward but rigid rules once the relevant features have been extracted from the image. However, when the product is highly nonuniform, other techniques must be applied to allow for product variability while still being capable of identifying and classifying defects. This paper investigates methods for accomplishing this based on soft computing. A discussion of the general approach and then a specific example of an integrated system for product quality determination are presented. This system combines color image processing and feature extraction with neural network classifiers and fuzzy-logic-based decision outputs to allow for maximum flexibility in accommodating product variability while still maintaining quality standards. The techniques for optimizing the classification parameters and for determining the fuzzy logic membership functions and user rules are presented.
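A hedged sketch of a fuzzy-logic output stage is shown below: triangular membership functions map a classifier's defect score onto quality grades. The grade names and breakpoints are illustrative placeholders, not the tuned membership functions of the system described here.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def grade_product(defect_score):
    """Map a neural-network defect score in [0, 1] to fuzzy quality grades.

    Grade names and breakpoints are hypothetical; a real system would tune
    its membership functions to the product.
    """
    memberships = {
        "premium": tri(defect_score, -0.01, 0.0, 0.3),
        "standard": tri(defect_score, 0.1, 0.4, 0.7),
        "reject": tri(defect_score, 0.5, 1.0, 1.01),
    }
    # Defuzzify by taking the grade with the largest membership.
    return max(memberships, key=memberships.get), memberships
```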
A new system for inspecting machine parts with 3D characteristics is presented. The system consists of a binocular stereo mount, a three-degrees-of-freedom positioning device and a structured light system. By means of an interactive interface the user may define the specific features to inspect, such as planes, holes and angles, and other aspects such as tolerance zones or reference systems.
The work presented in this paper is part of a system developed for fruit sorting. The machine vision unit is part of a distributed control system in which several machine vision modules can be integrated with a control module and a user interface unit. The control module takes care of the distributed control of the conveyor belt, weight units and fruit output units. The user interface is a front end for the user, who can watch and control any part of the distributed system. The machine vision units are connected through a LAN to the user interface and through a CAN bus to the control unit in order to send and receive real-time information during the on-line sorting process. Information that does not need real-time communication is sent through the LAN under an Ethernet protocol.