This study aims to establish an error model of the stereo measurement system considering camera vibration.
First, we examined the distribution of the disparity error both with and without camera vibration. We found that in both cases the disparity error can be approximated by a normal distribution, and that the parameters of this distribution change when the camera vibrates.
The parameters of the measurement-error distribution are the mean μ and the standard deviation σ, and the camera vibration is characterized by its amplitude A and frequency F. To determine the relationships between these two sets of parameters, we carried out experiments with a vibration testing system, imposing simple harmonic motion on the stereo camera; a Bumblebee stereo camera was used in this paper. The experiments showed that the vibration does not affect the mean μ, while the standard deviation σ is positively correlated with the amplitude A and negatively correlated with the frequency F. Using these relationships, we estimate the measurement-error parameters from the vibration parameters and thereby establish the error model of the stereo measurement system. Furthermore, we define the existence probability of an object using the estimated error parameters.
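As an illustration of how these relationships can be used, the following is a minimal sketch, assuming a linear dependence of σ on A and F (the paper only reports the signs of the correlations) and hypothetical coefficient values; the helper names are ours.

```python
# Hypothetical sketch: estimating the disparity-error parameters from the
# vibration parameters and turning them into an existence probability.
# The linear form sigma = c0 + cA*A + cF*F and all coefficient values are
# illustrative assumptions; only the signs of the correlations (positive
# with A, negative with F) and the unchanged mean come from the abstract.
import numpy as np
from scipy.stats import norm

def estimate_error_params(amplitude, frequency, coeffs=(0.10, 0.05, -0.002)):
    """Return (mu, sigma) of the disparity-error distribution [pixels]."""
    c0, c_a, c_f = coeffs                 # assumed values, fitted offline from calibration runs
    mu = 0.0                              # vibration was found not to shift the mean
    sigma = max(c0 + c_a * amplitude + c_f * frequency, 1e-6)
    return mu, sigma

def existence_probability(measured_disparity, true_disparity, amplitude, frequency):
    """Probability density that the object lies at `true_disparity`
    given the measured disparity and the current vibration state."""
    mu, sigma = estimate_error_params(amplitude, frequency)
    return norm.pdf(true_disparity, loc=measured_disparity + mu, scale=sigma)

# Example: 2 mm amplitude, 10 Hz vibration
mu, sigma = estimate_error_params(amplitude=2.0, frequency=10.0)
print(mu, sigma)
```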
This paper proposes a novel focus measure based on self-matching. A unique pencil-shaped profile is identified by comparing the similarity between patterns extracted around their own neighborhood in each scene. Based on this, a new criterion function, CPV, is defined to evaluate whether a scene is focused or defocused. OCM is recommended for the comparison because of its invariance to contrast changes. Experiments using a telecentric lens demonstrate the efficiency of the proposed measure. Comparison of the OCM-based focus measure with conventional focus measures shows that OCM-based CPV is robust against illumination changes. Using this method, pan-focused images are composed and depth information is recovered.
KEYWORDS: 3D metrology, Robots, Mobile robots, Fluctuations and noise, Stereoscopic cameras, Distance measurement, Statistical analysis, Information science, Agriculture, 3D image processing
Because the vineyards in Hokkaido are extremely large, it is very difficult and laborious to eradicate weeds by hand. To solve this problem, we developed a dynamic image measurement technique that can be applied to weeding robots in vineyards. The strength of this technique is that it discriminates between weeds and trunks correctly and efficiently. We also attempt to measure the root of the trunk accurately, and a new method for measuring partially occluded grape trunks in vineyards is also developed in this paper.
This paper addresses two main problems: object focusing and depth measurement. First, we propose a novel and robust scheme for image focusing by introducing a new focus measure based on Orientation Code Matching. A new evaluation function, named Complemental Pencil Volume (CPV), is defined and computed to represent the local sharpness of images, whether in or out of focus, by comparing the similarity between patterns extracted at the same position within their own scenes. A focused scene yields an identified and unique maximum, or peak, in this function, which distinguishes it from ill-conditioned scenes with low-contrast observations. Experiments show that the OCM-based focusing is very robust against changes in brightness and against further irregularities of real imaging systems, such as dark conditions. Second, based on this robust focusing technique, we applied it to an image sequence of an object surface to measure its depth profile. A simple planar object surface was used to demonstrate the basic approach, and the results show successful and precise depth measurement of this object.
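The following sketch illustrates the orientation-code representation underlying OCM and a simple self-matching profile; the quantization into 16 codes and the low-contrast mask follow the usual OCM formulation, but the focus score shown is only a stand-in for the CPV criterion, whose exact definition is not reproduced here.

```python
# Orientation codes plus a simple self-matching focus score (illustrative).
import numpy as np

N_CODES = 16          # number of orientation codes
INVALID = N_CODES     # code assigned to low-contrast pixels
GRAD_THRESH = 10.0    # gradient-magnitude threshold (assumed value)

def orientation_codes(img):
    """Quantize gradient orientation of a float gray image into N_CODES codes."""
    gy, gx = np.gradient(img.astype(np.float64))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                       # [-pi, pi)
    codes = np.floor((ang + np.pi) / (2 * np.pi / N_CODES)).astype(int) % N_CODES
    codes[mag < GRAD_THRESH] = INVALID             # suppress flat regions
    return codes

def ocm_dissimilarity(codes_a, codes_b):
    """Mean cyclic code difference; invalid codes get the average penalty N/4."""
    d = np.abs(codes_a - codes_b)
    d = np.minimum(d, N_CODES - d)                 # cyclic distance
    d[(codes_a == INVALID) | (codes_b == INVALID)] = N_CODES // 4
    return d.mean()

def self_matching_focus(img, y, x, half=8, search=4):
    """Focus score at (y, x), assumed to lie away from the image border:
    how quickly dissimilarity rises when the patch around (y, x) is compared
    against its own shifted neighborhood (a sharp rise = pencil-shaped profile)."""
    codes = orientation_codes(img)
    ref = codes[y - half:y + half, x - half:x + half]
    scores = []
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            if dy == 0 and dx == 0:
                continue
            cand = codes[y + dy - half:y + dy + half, x + dx - half:x + dx + half]
            scores.append(ocm_dissimilarity(ref, cand))
    return float(np.mean(scores))   # larger = sharper local structure
```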
Instead of using tachometer-type velocity sensors, we propose an effective method for estimating, in real time, the velocities of agrimotors and of the working machines they drive, such as sprayers and harvesters, in real farm fields, based on a robust image matching algorithm.
The method should remain precise even when the wheels slip, and should be stable and robust against the many ill conditions that occur in real farm fields; to this end, the fast and robust image matching algorithm Orientation Code Matching is effectively utilized.
A prototype system has been designed for real-time estimation of agrimotor velocities, and its effectiveness has been verified on many frames obtained from real fields under various weather and ground conditions.
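A minimal sketch of the velocity-estimation step is given below; cv2.matchTemplate stands in for Orientation Code Matching, and the ground resolution, frame rate, and region of interest are assumed values.

```python
# Track ground texture between consecutive frames and convert the pixel
# displacement of the best match into a speed over the ground.
import cv2
import numpy as np

GROUND_RES = 0.002   # metres per pixel on the ground plane (assumed)
FPS = 30.0           # camera frame rate (assumed)

def estimate_velocity(prev_frame, curr_frame, roi=(200, 280, 300, 380)):
    """Return the estimated speed [m/s] from two consecutive gray frames."""
    y0, y1, x0, x1 = roi
    template = prev_frame[y0:y1, x0:x1]
    result = cv2.matchTemplate(curr_frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)       # best match position (x, y)
    dx = max_loc[0] - x0
    dy = max_loc[1] - y0
    displacement_px = np.hypot(dx, dy)
    return displacement_px * GROUND_RES * FPS
```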
The quality of local regions in a scene can be degraded by ill-conditioned lighting and reflections. We develop a method to improve the quality of such local regions. A local region of the image is first selected by an algorithm based on a similarity evaluation of the region. The brightness and saturation histograms of the local region are then expanded by histogram equalization, so that both brightness and saturation are improved. In the next step, the enhanced data are blended with the original image according to the distance from the point selected by the user, so that the local part of the image merges naturally with the surrounding scene.
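A rough sketch of the enhancement and blending steps follows; the similarity-based region selection is not reproduced (the region is given as a mask), and the Gaussian falloff used for the distance-based blending is an assumption.

```python
# Equalize saturation and brightness inside the selected region and blend
# the result back with a weight that decays with distance from the click.
import cv2
import numpy as np

def enhance_local_region(bgr, mask, click_point, falloff=80.0):
    """bgr: uint8 image, mask: uint8 region mask (255 inside), click_point: (x, y)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    s_eq = cv2.equalizeHist(s)               # expand the saturation histogram
    v_eq = cv2.equalizeHist(v)               # expand the brightness histogram
    enhanced = cv2.cvtColor(cv2.merge((h, s_eq, v_eq)), cv2.COLOR_HSV2BGR)

    yy, xx = np.mgrid[0:bgr.shape[0], 0:bgr.shape[1]]
    dist = np.hypot(xx - click_point[0], yy - click_point[1])
    weight = np.exp(-(dist ** 2) / (2 * falloff ** 2))      # 1 near the click, 0 far away
    weight = weight * (mask > 0)                             # restrict to the selected region
    weight = weight[..., None]

    blended = weight * enhanced.astype(np.float64) + (1 - weight) * bgr.astype(np.float64)
    return blended.astype(np.uint8)
```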
This study aims to expand the measuring range of a stereovision system. In a previous paper, the authors developed a 3D motion capture system that uses one camera with triangle markers, named Mono-MoCap (MMC). MMC has two features: it can measure the 3D positions of subjects with a single camera by solving the perspective-n-point (PnP) problem, and it does not require recalibration of the camera parameters. In this paper, the authors apply MMC to a binocular stereovision system in order to expand its measurement range. MMC addresses the problem that a stereovision system cannot measure the 3D position of an object when even one camera fails to capture it. In this study, the 3D positions of three points whose geometric relation to each other is known are measured by stereovision, and when part of the object is hidden from one of the stereo cameras, the 3D position measurement is still obtained by secondarily using MMC. Simulation and experimental results show the effectiveness of the proposed method for expanding the measuring range of the stereovision system.
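A tiny sketch of the stereo side of the combined system is given below; the focal length and baseline are assumed calibration values, and mmc_estimate stands in for the single-camera MMC estimate used as the fallback.

```python
# Depth from disparity for a parallel stereo rig, with a fallback hook for
# the single-camera (MMC) estimate when one camera loses sight of the marker.
FOCAL_PX = 800.0   # focal length in pixels (assumed)
BASELINE = 0.12    # baseline in metres (assumed)

def stereo_depth(x_left, x_right):
    """Depth [m] from the horizontal pixel coordinates in both rectified images."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("invalid disparity")
    return FOCAL_PX * BASELINE / disparity

def measure_point(left_obs, right_obs, mmc_estimate):
    """Use stereo when both observations exist; otherwise fall back to MMC."""
    if left_obs is not None and right_obs is not None:
        return stereo_depth(left_obs[0], right_obs[0])
    return mmc_estimate()     # single-camera estimate from the triangle marker
```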
We propose a self-maintenance robot system as a way to realize long-term operation without maintenance by human workers. The system absorbs changes that occur in the robot's hardware through learning and thereby maintains its working ability. We propose two methods for learning changes in the robot's physical information in order to realize such a maintenance-free robot system. The first learns the robot's physical information, starting from no prior physical information, from the input and output information observed during task execution, using a neural network that has a task-common layer and a task-independent layer. The second learns the robot's physical information from the difference between the intended action and the actual action. In this report, we verify these learning systems by computer simulation.
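The network structure described above might look as follows; layer sizes, the number of tasks, and the training procedure are illustrative assumptions.

```python
# Sketch of a network with a layer shared by all tasks (task-common) and
# one head per task (task-independent).
import torch
import torch.nn as nn

class SharedTaskNet(nn.Module):
    def __init__(self, n_inputs=10, n_hidden=32, n_outputs=6, n_tasks=3):
        super().__init__()
        # task-common layer: intended to capture the robot's physical information
        self.common = nn.Sequential(nn.Linear(n_inputs, n_hidden), nn.Tanh())
        # task-independent layers: one head per task
        self.heads = nn.ModuleList(
            [nn.Linear(n_hidden, n_outputs) for _ in range(n_tasks)]
        )

    def forward(self, x, task_id):
        return self.heads[task_id](self.common(x))

# Example: predict the outcome of task 0 from a command/state vector
net = SharedTaskNet()
y = net(torch.randn(1, 10), task_id=0)
```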
A rotation-invariant template matching scheme using the Orientation Code Difference Histogram (OCDH) is proposed. Orientation code features, which are based on local distributions of pixel brightness and are inherently robust against severe changes in illumination, play the main role in designing the rotation-invariant matching algorithm. Since the difference between any pair of orientation codes is invariant under rotation of the image, a histogram feature built from these differences aggregates effective clues for finding rotated instances through simple procedures. With gray-scale images as targets, the rotation angle of an image can be accurately estimated by the proposed method, which is fast and robust even in the presence of irregularities such as brightness changes caused by shading or highlighting. We propose a two-stage framework for rotation-invariant template matching based on OCDH: in the first stage, candidate positions are selected by evaluating the OCDH at every position, and in the second stage they are tested by a verification step that is also based on orientation code features. The effectiveness of the proposed matching method has been shown through a wide range of experiments with real-world images.
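A hedged sketch of the OCDH idea follows: a rotation of the image adds the same cyclic offset to every orientation code, so differences between codes are unchanged and a histogram built from them is rotation-invariant. The difference is taken against the modal code of the patch here; the exact pairing used in the paper may differ.

```python
# Rotation-invariant histogram from an orientation-code patch
# (codes as produced by an OCM-style orientation coder, 0..15, 16 = invalid).
import numpy as np

N_CODES = 16
INVALID = N_CODES

def ocdh(codes):
    """Histogram of code differences, invariant to a common cyclic shift."""
    valid = codes[codes != INVALID]
    if valid.size == 0:
        return np.zeros(N_CODES)
    counts = np.bincount(valid, minlength=N_CODES)
    mode = int(np.argmax(counts))                 # modal code of the patch
    diffs = (valid - mode) % N_CODES              # differences cancel the rotation shift
    hist = np.bincount(diffs, minlength=N_CODES).astype(float)
    return hist / hist.sum()

def ocdh_distance(h1, h2):
    """Simple histogram distance for first-stage candidate screening."""
    return float(np.abs(h1 - h2).sum())
```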
We propose an efficient template matching algorithm for binary image search. In template matching, the computation cost depends on the image size: for large images, searching a scene image for objects similar to the template takes a long time. We design a scanning-type upper-limit estimation that allows correlation calculations to be skipped. To calculate the scanning-type upper limits, the template and scene images are divided into two regions, an R-region and a P-region. In the R-region, an upper limit of the correlation coefficient is derived as an interval estimate based on a mathematical analysis of the correlations between the object image and a pivot image. In the P-region, another upper limit is formulated from the numbers of white and black pixels in the template and the object image. Using these upper limits, the scanning-type upper-limit estimation of the correlation coefficient is formulated for the efficient matching algorithm. The estimated upper limits never fall below the true correlation values, so the search accuracy of the proposed method is the same as that of the conventional exhaustive search. Experiments with document images show the effectiveness and efficiency of the proposed matching algorithm: the computation time of the proposed algorithm is between 5% and 20% of that of the conventional search.
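The following simplified sketch illustrates count-based pruning in the spirit of the P-region bound; the R-region / pivot-image bound is not reproduced, and the helper names are ours.

```python
# For binary images, the correlation (phi coefficient) at a scan position can
# be bounded from above using only the white-pixel counts of the template and
# the window; positions whose bound cannot beat the best score so far are
# skipped without computing the full correlation.
import numpy as np

def phi_upper_bound(n, n_t, n_w):
    """Upper bound of the correlation between a binary template with n_t white
    pixels and a binary window with n_w white pixels (both of n pixels)."""
    a = min(n_t, n_w)                 # best-case overlap of white pixels
    b, c = n_t - a, n_w - a
    d = n - a - b - c
    denom = np.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return 1.0 if denom == 0 else (a * d - b * c) / denom

def phi(t, w):
    """Exact correlation (phi coefficient) between two binary patches."""
    t, w = t.ravel().astype(bool), w.ravel().astype(bool)
    a = np.sum(t & w); b = np.sum(t & ~w); c = np.sum(~t & w); d = np.sum(~t & ~w)
    denom = np.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return 0.0 if denom == 0 else (a * d - b * c) / denom

def search(scene, template):
    """Exhaustive scan of a binary scene with upper-bound pruning."""
    th, tw = template.shape
    n, n_t = th * tw, int(template.sum())
    best, best_pos = -1.0, None
    for y in range(scene.shape[0] - th + 1):
        for x in range(scene.shape[1] - tw + 1):
            window = scene[y:y + th, x:x + tw]
            if phi_upper_bound(n, n_t, int(window.sum())) <= best:
                continue                       # bound cannot beat the best score: skip
            score = phi(template, window)
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```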
A novel successive learning algorithm is proposed for efficiently handling sequentially provided training data, based on the Test Feature Classifier (TFC), which is non-parametric and effective even for small data sets. We previously proposed the TFC, which utilizes prime test features (PTFs), combinatorial feature subsets that yield excellent performance. TFC has the following characteristics: non-parametric learning and no misclassification of the training data. Its effectiveness has been confirmed through applications to several real-world problems. However, TFC has the drawback that it must be entirely reconstructed whenever any subset of the data changes.
In successive learning, after a set of unknown objects has been recognized, the objects are fed back into the classifier to obtain a modified classifier. We propose an efficient algorithm for reconstructing the PTFs, formalized for both addition and deletion of training data. In a verification experiment, the successive learning algorithm saved about 70% of the total computational cost compared with batch learning. We also applied the proposed successive TFC to dynamic recognition problems in which the characteristics of the training data change over time, and examined its behavior through fundamental experiments. The Support Vector Machine (SVM), which is well established both algorithmically and in practical applications, was compared with the proposed successive TFC, and the successive TFC showed higher performance.
Feature extraction and tracking are widely applied in industry and remain an important topic in machine vision. In this paper, we present a new feature extraction and tracking method that is robust against illumination changes such as shading and highlighting, as well as against scaling and rotation of objects. The method is composed mainly of two algorithms: the Entropy Filter and Orientation Code Matching (OCM). The Entropy Filter highlights areas of the image in which the orientation codes are widely distributed. The orientation code is determined by detecting the orientation of maximum intensity change among the eight neighboring pixels and is defined simply as an integer value. Using the Entropy Filter, we can extract good features to track from the images. OCM, a template matching method based on orientation codes, is then applied to track the features in each frame; using OCM, the features can be tracked robustly against illumination changes. Moreover, updating these features (templates) in each frame accommodates complicated motions of the tracked objects, such as scaling and rotation. In this paper, we report the details of our algorithms and compare them with other well-known feature extraction and tracking methods. As an application example, tracking of planar landmarks and faces is attempted, and the results are also reported.
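A sketch of the Entropy Filter step is given below; the window size and selection threshold are assumed values.

```python
# Entropy Filter: score each local window by the Shannon entropy of its
# orientation-code histogram and keep high-entropy windows as features.
import numpy as np

N_CODES = 16
INVALID = N_CODES

def code_entropy(code_patch):
    """Shannon entropy of the orientation-code histogram of one window."""
    valid = code_patch[code_patch != INVALID]
    if valid.size == 0:
        return 0.0
    p = np.bincount(valid, minlength=N_CODES) / valid.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_features(codes, win=16, thresh=3.0):
    """Return top-left corners of windows whose entropy exceeds `thresh`."""
    corners = []
    for y in range(0, codes.shape[0] - win, win):
        for x in range(0, codes.shape[1] - win, win):
            if code_entropy(codes[y:y + win, x:x + win]) > thresh:
                corners.append((y, x))
    return corners
```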
This paper proposes a fast method for searching environmental observation images even in the presence of scale changes. A new scheme is proposed for extracting feature areas as tags, based on a robust image registration algorithm called Orientation Code Matching. Extracted tags are stored as template images and used in tag searching. As the number of tags grows, the search cost becomes a serious problem; in addition, changes in viewing position cause scale changes in the image and lead to matching failures. In our scheme, richness in features is important for tag generation, and entropy is used to evaluate the diversity of edge directions, which is stable against scale changes of the image. This characteristic helps to limit the search area and reduce the calculation cost. Scaling factors are estimated from the orientation code density, defined as the percentage of effective codes in a fixed-size tag area, and the estimated scaling factor is used to match the scale of the template image to that of the observation image. Experiments on real scenes compare computation times and verify the effectiveness of the estimated scaling factor.
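A hedged sketch of scale estimation from orientation code density follows; the inverse-proportional relation between density and scale assumed here is illustrative, not the paper's calibrated model.

```python
# Density = fraction of effective (non-suppressed) orientation codes in a
# fixed-size area; edge pixels thin out roughly in proportion to
# magnification, so the density ratio gives a rough scaling factor.
import numpy as np

N_CODES = 16
INVALID = N_CODES

def code_density(code_patch):
    """Fraction of pixels carrying an effective orientation code."""
    return float(np.count_nonzero(code_patch != INVALID)) / code_patch.size

def estimate_scale(template_codes, observed_codes):
    d_t = code_density(template_codes)
    d_o = code_density(observed_codes)
    if d_o == 0:
        return None
    return d_t / d_o      # >1 suggests the observation is magnified w.r.t. the template
```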
This study aims to realize motion capture for measuring 3D human motion with a single camera. Although motion capture using multiple cameras is widely used in sports, medicine, engineering, and other fields, an optical motion capture method using only one camera has not been established. In this paper, the authors achieve 3D motion capture with one camera, named Mono-MoCap (MMC), on the basis of two calibration methods and triangle markers whose side lengths are known. The camera calibration provides the 3D coordinate transformation parameters and a lens distortion parameter via the modified DLT method, and the triangle markers make it possible to calculate the coordinate value in the depth direction of the camera coordinate system. Experiments on 3D position measurement with MMC in a cubic measurement space 2 m on each side show that the average error in measuring the centroid of a triangle marker was less than 2 mm; compared with conventional motion capture using multiple cameras, MMC therefore has sufficient accuracy for 3D measurement. By placing a triangle marker on each human joint, MMC was also able to capture walking, standing-up, and bending-and-stretching motions. In addition, a method using a triangle marker together with conventional spherical markers was proposed. Finally, a method for estimating the position of a marker from its measured velocity was proposed in order to further improve the accuracy of MMC.
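A minimal sketch of recovering the pose of one triangle marker from a single calibrated camera follows; cv2.solvePnP with the SQPnP solver (which accepts three points, OpenCV 4.5.2 or later) is used here in place of the MMC formulation, and the intrinsics and side length are assumed values.

```python
# Pose of a triangle marker with known side lengths from one calibrated camera.
import cv2
import numpy as np

# Equilateral triangle with 60 mm sides, lying in the marker's z = 0 plane (assumed geometry).
SIDE = 0.06
OBJECT_POINTS = np.array([
    [0.0, 0.0, 0.0],
    [SIDE, 0.0, 0.0],
    [SIDE / 2, SIDE * np.sqrt(3) / 2, 0.0],
], dtype=np.float64)

CAMERA_MATRIX = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])      # assumed intrinsics
DIST_COEFFS = np.zeros(5)                         # assumed: distortion already corrected

def marker_pose(image_points):
    """image_points: 3x2 array of detected vertex positions [pixels]."""
    ok, rvec, tvec = cv2.solvePnP(
        OBJECT_POINTS, np.asarray(image_points, dtype=np.float64),
        CAMERA_MATRIX, DIST_COEFFS, flags=cv2.SOLVEPNP_SQPNP)
    if not ok:
        return None
    return rvec, tvec   # marker orientation and position in the camera frame
```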
Biogenic measurement has been studied as a robot interface, and we have been studying a wearable sensor suit for this purpose. Several kinds of sensor disks are embedded in the wet-suit-like material of the sensor suit, which measures the wearer's joint motion and muscular activity. In this report, we aim to establish an auto-calibration system for measuring joint torques with EMG sensors, based on a neural network and a lattice of sensor disks. Torque estimation was performed with a shared neural network trained on teacher data collected from all subjects, after which the shared network was additionally trained with each individual's teaching data. As a result, the network could be trained in a shorter time, with a higher success rate and higher accuracy than training an initial neural network from scratch, and high estimation accuracy was obtained with this method. Next, the lattice of sensor disks was developed. Because the disks can also measure bioelectrical impedance, the state of each electrode can be checked, and EMG could be measured with sensor disks having low impedance. We measured EMG and joint torque with a prototype sensor suit and a torque measuring instrument, and confirmed the advantage of torque estimation using the shared neural network. We propose a measurement system consisting of the lattice of sensor disks, and the experimental results show that the proposed method is effective for auto-calibration.
We have proposed a brand-new noninvasive ultrasonic sensor for measuring muscle activity, named the Ultrasonic Muscle Activity Sensor (UMS). In a previous paper, the authors accurately estimated joint torque by using UMS together with electromyography (EMG), one of the most popular sensing modalities. This paper aims to measure not only joint torque but also joint angle by using UMS and EMG. To estimate the torque and angle of the knee joint, the muscle activities of the quadriceps femoris and biceps femoris, which are responsible for extension and flexion of the knee joint, were measured by both UMS and EMG. Simultaneously, the actual torque on the knee joint generated by these muscles was measured with a torque sensor. The knee joint angle was fixed by the torque sensor during the experiment, so the measurement was performed under isometric conditions.
As a result, we found that the estimated torque and angle have high correlation coefficients with the actual torque and angle. This means that the sensors can be used for angle estimation as well as torque estimation, and it is therefore shown that the combined use of UMS and EMG is effective for torque and angle estimation.
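As an illustration, the estimation step could be realized as follows; the paper does not specify the estimator, so a linear least-squares fit over UMS and EMG features is used purely as a stand-in.

```python
# Map UMS and EMG activity of the two muscle groups to knee torque and angle
# with a simple linear model fitted on the measured data.
import numpy as np

def fit_estimator(ums, emg, torque, angle):
    """ums, emg: (n_samples, 2) activity of quadriceps / biceps femoris;
    torque, angle: (n_samples,) measured reference values."""
    X = np.column_stack([ums, emg, np.ones(len(torque))])   # features + bias
    Y = np.column_stack([torque, angle])
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

def estimate(W, ums_sample, emg_sample):
    x = np.concatenate([ums_sample, emg_sample, [1.0]])
    torque_hat, angle_hat = x @ W
    return torque_hat, angle_hat
```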
Wearable robots, especially power suits that enhance human activity, are one of the most interesting and important topics. This study aims to develop a wearable robot that is small and lightweight, for improving human performance and reducing muscle fatigue. We therefore propose a smart suit with a variable stiffness mechanism that utilizes elastic forces for assistance and performs assistance control through impedance control. Because elastic forces are used for assistance, the capacity of the suit depends far less on the weight of actuators and their energy source than in conventional power suits; consequently, the suit can be made smaller and lighter. In a previous study, we verified the effectiveness of the smart suit with the variable stiffness mechanism through experiments and simulations in order to design a suit whose joint stiffness can be tuned mechanically. Based on these results, we design a smart suit with a variable stiffness mechanism in which the number of working springs can be controlled by a small actuator, and in which, at any knee joint angle, the stored elastic energy varies with the displacement angle of the ankle joint. The output per unit mass of the suit was found to be larger than that of conventional power suits, and we confirmed through knee-bend and walking experiments with subjects wearing the suit that it reduces muscle fatigue. In this paper, we present the developed suit and its effectiveness for human work.
This paper proposes a new scheme for robust tagging for landmark definition in unknown environments, using qualitative evaluations based on Orientation Code representation and matching, which has been proposed for robust image registration even in the presence of illumination changes and occlusion. The characteristics required for effective tags, namely richness, similarity, and uniqueness, are considered in order to design an algorithm for tag extraction. These qualitative considerations can be utilized, in combination with the robust image registration algorithm, to design a simple and robust algorithm for tag definition.