This PDF file contains the front matter associated with SPIE Proceedings Volume 8349, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
In this paper, the ISOMAP algorithm is applied to anomaly detection in hyperspectral images on the basis of feature analysis. An improved ISOMAP algorithm is then developed to address a limitation of the original algorithm: the improved version selects the neighborhood according to the spectral angle, thus avoiding the instability of the neighborhood in the high-dimensional spectral space. Experimental results show the effectiveness of the algorithm in improving detection performance.
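The abstract's neighborhood rule can be sketched as follows. This is a minimal illustration only; the function names and the parameter `k` are ours, not the paper's, and a full ISOMAP pipeline (geodesic distances, MDS embedding) is omitted.

```python
import math

def spectral_angle(a, b):
    # Angle between two spectral vectors (radians): 0 for identical
    # spectral shapes, pi/2 for orthogonal spectra; robust to overall
    # brightness scaling, unlike Euclidean distance.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def spectral_angle_neighbors(pixels, i, k):
    # Indices of the k pixels with the smallest spectral angle to pixel i.
    angles = [(spectral_angle(pixels[i], p), j)
              for j, p in enumerate(pixels) if j != i]
    return [j for _, j in sorted(angles)[:k]]
```

Because the spectral angle ignores magnitude, a bright and a dark pixel of the same material are still close, which is the stability the abstract attributes to the improved neighborhood selection.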
A domain ontology is a descriptive representation of a particular domain: it describes the concepts in the domain and the relationships among those concepts in detail, and organizes them in a hierarchical manner. It can also be defined as a structure of knowledge, used as a means of sharing knowledge with the community. An important aspect of using ontologies is to make information retrieval more accurate and efficient.
Thousands of domain ontologies from around the world are available online in ontology repositories. Repositories such as SWOOGLE currently hold over 1000 ontologies covering a wide range of domains. We found that, to date, no ontology was available covering the domain of "Sufism". This unavailability of a "Sufism" domain ontology became the motivating factor for this research, which produced a working "Sufism" domain ontology as well as a framework; the design of the proposed framework focuses on resolving the problems that were experienced while creating the "Sufism" ontology. The development and operation of the "Sufism" domain ontology are covered in detail in this research.
The word "Sufism" refers to Islamic mysticism. One reason to choose "Sufism" for ontology creation is the global interest it attracts. This research also created individuals that inherit the concepts of the "Sufism" ontology; these individuals help demonstrate efficient and precise retrieval of data from the ontology. The "Sufism" domain ontology was built in Protégé, an open-source tool for ontology creation and editing.
Describing knowledge with a discernibility matrix has several advantages. For the discernibility matrix of decision tables, this paper finds that the matrix contains redundant information. We therefore define a new discernibility matrix, named the simple discernibility matrix, which contains no redundant information, reduces the number of comparisons between individuals, and improves efficiency.
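For context, the classic discernibility matrix of rough set theory can be built as below. This sketch shows the standard construction only; the paper's simple discernibility matrix (with redundancy removed) is not reproduced here, and the function name is ours.

```python
def discernibility_matrix(objects, decisions):
    # objects: one tuple of condition-attribute values per object.
    # decisions: the decision label of each object.
    # Entry (i, j) is the set of attribute indices on which objects i and
    # j differ; entries are kept only for pairs with different decisions,
    # since pairs with the same decision need not be discerned.
    n = len(objects)
    matrix = {}
    for i in range(n):
        for j in range(i + 1, n):
            if decisions[i] != decisions[j]:
                diff = {a for a in range(len(objects[i]))
                        if objects[i][a] != objects[j][a]}
                if diff:
                    matrix[(i, j)] = diff
    return matrix
```

A redundancy-removal step in the spirit of the paper would then discard entries that are supersets of other entries, since they add no discernibility constraints.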
Inter-class testing is the testing of classes as they are composed into an object-oriented system or subsystem during integration. An MM Path is defined as an interleaved sequence of method executions linked by messages. It represents the interactions between methods in object-oriented software well, and hence is well suited to object-oriented integration testing. However, current MM Path generation methods only support intra-class testing. In this paper, a call-graph-based approach is proposed to extend automatic MM Path generation from the intra-class to the inter-class level. The approach is evaluated by controlled experiments on 12 Java benchmark programs with two typical call graph construction algorithms, Class Hierarchy Analysis and Andersen's Points-to Analysis, and the impact of the two algorithms on inter-class MM Path generation efficiency is studied. The results show that our approach is practicable and that Andersen's Points-to Analysis outperforms Class Hierarchy Analysis for inter-class MM Path generation.
A moral education website offers a solution to the low transmission speed and small reach of traditional moral education. The aim of this paper is to illustrate the design of one moral education website and the advantages of using it to support moral teaching. The rationale for a moral education website is discussed at the beginning of the paper, and the development tools are introduced. The system design is illustrated through module design and database design, and how to access data in the SQL Server database is discussed in detail. Finally, a conclusion is drawn from the preceding discussion.
Web services have gained increasing popularity, but they also pose the challenge of selecting an appropriate service. Because a wide variety of web services may be offered to perform one specific task, it is essential that users are supported in selecting appropriate ones. Keyword-based web service discovery, one of the most practical approaches, ignores user context, which is considered valuable. In this paper, we propose a user context model that enables user context to participate in the process of web service discovery. Our main contribution is a user context utilization method that improves the accuracy of keyword-based approaches. Experiments and results are provided to evaluate the proposed method.
This article puts forward a method for video image acquisition and processing, and a system based on the Java Media Framework (JMF) implemented with it. Taking advantage of the Java language, the method can be deployed in both B/S and C/S modes. Key issues such as locating the video data source, playing video, and video image acquisition and processing are discussed in detail. Operation of the system shows that the method is fully compatible with common video capture devices, and that the system offers advantages such as lower cost, greater capability, ease of development, and cross-platform support. Finally, the application prospects of the Java/JMF-based method are pointed out.
Web image search results usually contain duplicate copies. This paper considers the problem of detecting and
clustering duplicate images contained in web image search results. Detecting and clustering the duplicate images together
facilitates users' viewing. A novel method is presented in this paper to detect and cluster duplicate images by measuring
similarity between their topics. More specifically, images are viewed as documents consisting of visual words formed by
vector quantizing the affine invariant visual features. Then a statistical model widely used in text domain, the
PLSA(Probabilistic Latent Semantic Analysis) model, is utilized to map images into a probabilistic latent semantic space.
Because the main content remains unchanged despite small digital alteration, duplicate images will be close to each other
in the derived semantic space. Based on this, a simple clustering process can successfully detect duplicate images and
cluster them together. Compared to methods based on comparing hash values of visual words, this method is more robust to visual-feature-level alterations of the images. Experiments demonstrate the effectiveness of this method.
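The final clustering step described above can be sketched as follows: once PLSA has mapped each image to a topic-probability vector, duplicates sit close together and a simple similarity threshold groups them. The PLSA fitting itself is omitted; the function names and the threshold value are ours, not the paper's.

```python
import math

def cosine(u, v):
    # Cosine similarity between two topic-probability vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def cluster_duplicates(topic_vectors, threshold=0.95):
    # Greedy single-pass clustering: each image joins the first cluster
    # whose representative vector is similar enough, otherwise it starts
    # a new cluster. Adequate here because duplicates are near-identical
    # in the latent semantic space.
    clusters = []   # list of lists of image indices
    reps = []       # representative topic vector per cluster
    for i, vec in enumerate(topic_vectors):
        for c, rep in enumerate(reps):
            if cosine(vec, rep) >= threshold:
                clusters[c].append(i)
                break
        else:
            clusters.append([i])
            reps.append(vec)
    return clusters
```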
In this paper, we develop a new image denoising method based on block-matching and transform-domain filtering. The method is derived from the current state-of-the-art denoising method BM3D. We separate the 3D transform of the original method into two 1D transform steps, to further enhance sparsity for signals whose elements are highly similar and to weaken it for signals whose elements are dissimilar. Because the 1D filtering operates only on highly similar elements and the 2D filtering on image blocks is removed, image details are better preserved and fewer artifacts are introduced than in the original method. Experimental results demonstrate that the developed method is competitive with, and better than some of, the current state-of-the-art denoising methods in terms of peak signal-to-noise ratio, structural similarity, and subjective visual quality.
Using SAR to monitor oil spills is a useful method. Because the appearance of an oil spill in SAR images is similar to that of other oceanic phenomena, it is difficult to distinguish oil spills from "look-alikes". This paper presents a novel multi-level method to extract the oil film from an original SAR image. The method retains the original edge information of the oil film while separating it from the background. A Lee filter, fuzzy c-means clustering, a coherence filter and morphological operations are applied to de-noise and segment the SAR image. Experimental results show that the method can not only distinguish the oil film from the sea, but also recover part of the edge information lost during de-noising and segmentation.
To address the projective distortion caused by a tilted camera in traditional photogrammetry, a projective-distortion self-corrected photogrammetry method is proposed for measuring projectile coordinates. The causes of projective distortion and its impact on measurement accuracy are analyzed. Through analysis of the imaging model of random points on the target, a formula for the object distance is derived, and the correspondence between coordinates on the image and coordinates on the target is established. By extracting and calculating four markers sited on the target, the constant in the correspondence is solved, the measurement error caused by projective distortion is overcome, and absolute measurement of the projectile coordinates on the target is realized. Test results show that the projectile coordinates on the target can be accurately measured using the proposed method, and that the accuracy and consistency of fire-intensity measurement are increased. The method can also be used to check other related automatic measuring equipment.
This paper presents an algorithm to detect the eyes and mouth of faces under various imaging conditions, such as different poses, dimensions, illuminations, resolutions, and expressions, with or without glasses. The algorithm first converts the face region detected by the Viola-Jones algorithm into gray-scale and reduces the resolution by decimation to obtain a low-resolution region. Pixels that are darker than their surroundings are then located by 8 individual filters. Finally, the centers of the eyes and mouth are found using a triangle model. Experiments show that the algorithm achieves 98.7% accuracy in detecting the eyes and mouth.
An image registration method is proposed in this paper for accurately aligning two images of the same scene
captured simultaneously by visible CCD and IR (infrared) cameras. In image fusion systems, CCD and IR sensors are
physically aligned as closely as possible and yet significant image mis-alignment remains due to differences in field of
view, lens distortion and other camera characteristics, so an affine transformation is considered for aligning the two images. First, corresponding feature point pairs are selected manually and used to calculate the transform coefficients. Then, the transform coefficients are further optimized by maximizing MI (mutual information) over the global image. After the unregistered image is transformed with the optimal coefficients, an accurately registered image is obtained.
The experimental results demonstrate that this method can provide highly accurate registered images for image fusion.
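The MI criterion used in the refinement step can be sketched from its definition: quantize intensities, build marginal and joint histograms, and sum p(a,b)·log(p(a,b)/(p(a)p(b))). This is a generic histogram-based MI, not the paper's exact implementation; bin counts and function names are our assumptions.

```python
import math
from collections import Counter

def mutual_information(img_a, img_b, bins=8, levels=256):
    # img_a, img_b: flat lists of pixel intensities of equal length.
    # Coarse quantization keeps the joint histogram well populated.
    q = lambda v: v * bins // levels
    pairs = [(q(a), q(b)) for a, b in zip(img_a, img_b)]
    n = len(pairs)
    joint = Counter(pairs)                  # joint histogram counts
    pa = Counter(a for a, _ in pairs)       # marginal of image A
    pb = Counter(b for _, b in pairs)       # marginal of image B
    mi = 0.0
    for (a, b), c in joint.items():
        pab = c / n
        mi += pab * math.log(pab / ((pa[a] / n) * (pb[b] / n)))
    return mi
```

Registration by MI maximizes this value over the transform coefficients: perfectly aligned CCD/IR images share structure even though their intensities differ, so their joint histogram is concentrated and MI is high.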
Super-resolution image reconstruction aims to obtain a high-resolution image from multiple low-resolution images to
get a better visual effect. It is widely used in many applications such as video surveillance and image recognition. This paper
proposes a POCS super-resolution image reconstruction algorithm based on the projection residue to improve the visual
effect of reconstructed HR image. The characteristic of this method is that it takes full advantage of the statistical
property of projection residue included in the constraints and changes the modified threshold adaptively. The cause of
the projection residue is analyzed and its characteristic parameter is calculated to restrain the solution. Experimental
results show that our algorithm is effective in visual evaluation, and the PSNR is improved.
Using an optical sensor array, a precision motion control system on a conveyor follows an irregularly shaped leather sheet to measure its surface area. In operation, the leather sheet passes on the conveyor belt and the optical sensor array detects the sheet's edge. In this way the outside curvature of the leather sheet is detected and then fed to the controller to estimate its area. Such a system can measure irregular shapes, neglecting rounded corners, ellipses, etc.
To minimize the error in calculating the surface area of an irregular curve, the motion control system only requires that the footprint of each optical sensor be small and that the distance between sensors be minimized.
In the proposed technique, the surface area of the irregularly shaped leather sheet is measured by defining the velocity and detecting the position of the move. The motion controller takes this information and creates the necessary edge profile on a point-to-point basis. As a result, the irregular shape of the leather sheet is mapped and then fed to the controller to calculate the surface area.
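The area estimate described above amounts to a rectangle sum: at each conveyor step, count the sensors covered by the sheet, multiply by the sensor pitch to get the covered width, and multiply by the belt travel per step. A minimal sketch, with invented names and units:

```python
def sheet_area(scan_lines, sensor_pitch, belt_step):
    # scan_lines: one list of booleans per conveyor step, True where a
    # sensor in the array sees the sheet.
    # sensor_pitch: distance between adjacent sensors (covered width per
    # True sensor); belt_step: belt travel between successive scans.
    # The sum of covered-width * step rectangles approximates the area;
    # rounded corners are neglected, as the abstract notes.
    return sum(sum(line) for line in scan_lines) * sensor_pitch * belt_step
```

Shrinking the sensor pitch and the belt step shrinks each rectangle, which is exactly why the abstract asks for a small sensor footprint and minimal sensor spacing to reduce the error.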
A novel and efficient method called marked bounding box method based on marching cubes is presented for the
point cloud data reduction of sole patterns. This method is characterized in that each bounding box is marked with an
index during the process of data reduction and later for use of data reconstruction. The data reconstruction is
implemented from the simplified data set by using triangular meshes, the indices being used to search the nearest points
from adjacent bounding boxes. Afterwards, the normal vectors are estimated to determine the strength and direction of
the surface reflected light. The proposed method is used in a sole pattern classification and query system which uses
OpenGL under Visual C++ to render the image of sole patterns. Numerical results are given to demonstrate the efficiency and novelty of our method. Finally, conclusions and discussion are presented.
Multiple-exposure-based methods have been an effective means for high dynamic range (HDR) imaging technology.
The current methods are greatly dependent on tone mapping, and most of them are unable to accurately recover the local
details and colors of the scene. In this work, we present a novel HDR method by using multiple image cues for the image
merging process. First, all the images with various exposure times are divided into uniform sub-regions, and an exposure estimation technique is applied to judge which is well exposed. Once the best-exposed image blocks have been selected, a blending function is proposed to remove the transition boundaries between them.
A fidelity metric index is introduced to assess the final fusion image, and experimental results on public image libraries
are given to demonstrate its high performance.
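The per-block exposure judgment can be sketched with a standard well-exposedness score (a Gaussian around mid-gray, as used in classical exposure fusion): clipped shadows and highlights score low, so for each block position the exposure with the highest score wins. The score and the names here are our assumptions, not the paper's metric.

```python
import math

def well_exposedness(block, mid=128.0, sigma=50.0):
    # Sum of per-pixel Gaussian weights around mid-gray; highest when the
    # block's pixels are neither under- nor over-exposed.
    return sum(math.exp(-((p - mid) ** 2) / (2 * sigma ** 2)) for p in block)

def pick_best_exposure(exposures):
    # exposures: list over exposure times, each a list of blocks, with
    # blocks aligned by position. Returns, per block position, the index
    # of the exposure whose block scores as best exposed.
    n_blocks = len(exposures[0])
    choice = []
    for b in range(n_blocks):
        scores = [well_exposedness(e[b]) for e in exposures]
        choice.append(scores.index(max(scores)))
    return choice
```

The abstract's blending function would then feather the boundaries between blocks drawn from different exposures, which this hard per-block selection leaves visible.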
A new image denoising method is presented that combines the contourlet transform and the stationary wavelet transform, using an iterative filter bank built from the stationary wavelet transform and directional filter banks (DFB). Experimental results show that, for image denoising, the proposed method can suppress speckle in SAR images effectively while preserving the edge features and textural information of the scene.
The measurement system for road surface profiles and the calculation method for segment road test data are reviewed. There are sudden vertical steps at the connection points of segment data, which hinder the application of road surface data in automotive engineering. A new smooth-connection method for segment test data is therefore proposed, which corrects the sudden vertical steps at the connections using the Signal Local Baseline Adjustment (SLBA) method. A practical example illustrates the detailed process of smoothly connecting segment test data with the SLBA method and the adjustment results at the connection points. The application and calculation results show that the SLBA method is simple and achieves a clear effect in the smooth connection of segment road test data. The SLBA method can be widely applied to segment road surface data processing and to long-period vibration signal processing.
The aim of this paper is to provide a line drawing algorithm that is accurate and effective across dissimilar hardware platforms and different application requirements. The best-known algorithm for drawing a straight line smoothly is the Bresenham algorithm. Its advantage is that it operates entirely on integers, without division or fractions. Its imperfection is that it generates only one pixel per computation step, so it is inevitably somewhat slow. Firstly, this paper analyzes recent research on the Bresenham line drawing algorithm. Secondly, we exploit the relation between the line generation model and the line's slope, and present an improved algorithm that generates the pixels of a line row by row on a raster graphics display device. Notably, the core principle of the improved algorithm is to exploit the correspondence between the two ends of the line and the symmetry of segments. Thirdly, after discussing the theory and structure, the implementation and simulation of the improved algorithm are given, and a corresponding hardware-acceleration scheme using circular-subtraction technology based on shift registers is briefly described. Finally, results are presented to demonstrate that the new algorithm inherits the advantage of the classic Bresenham algorithm (no division or fractions), is greatly faster, and is easy to implement in hardware.
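For reference, the classic integer-only Bresenham algorithm that the paper improves upon looks like this (the paper's row-major and endpoint-symmetry optimizations are not reproduced here):

```python
def bresenham(x0, y0, x1, y1):
    # Classic Bresenham: tracks an integer error term so the line is
    # rasterized with no division and no fractions. Emits one pixel per
    # iteration, which is the inefficiency the paper targets.
    points = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:        # step in x
            err -= dy
            x0 += sx
        if e2 < dx:         # step in y
            err += dx
            y0 += sy
    return points
```

The improvement the abstract describes would emit a whole run of pixels per row and mirror the first half of the line onto the second, roughly halving the error-term updates.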
The current most desirable image retrieval feature is retrieving images based on their semantic content. In order to
improve the retrieval accuracy of content-based image retrieval systems, research focus has been shifted from designing
sophisticated low-level feature extraction algorithms to reducing the 'semantic gap' between the visual features and the
richness of human semantics. In this paper, we put forward a system framework of image retrieval based on content and
ontology, which has the potential to fully describe the semantic content of an image, allowing the similarity between
images and a retrieval query to be computed accurately. In the system, we identify three major categories of techniques for narrowing the "semantic gap": (1) using an object ontology to define high-level concepts; (2) using machine learning methods to associate low-level features with query concepts; (3) using ontology reasoning to extend image retrieval. Finally, the paper presents test experiments whose results show the feasibility of the system framework.
Combined with the newest machine vision technology, various characteristic algorithms for border detection are introduced. Criteria for judging the quality of border detection algorithms, and directions for new algorithms, are proposed at the end.
A new fuzzy thresholding method for medical images, especially cervical cytology images with blob and mosaic structures, is proposed in this paper. Many existing thresholding algorithms can segment either blob or mosaic images, but no single algorithm can do both. In this paper, an input cervical cytology image is binarized and preprocessed, and the pixel value with the minimum Fuzzy Gaussian Index is identified as the optimal threshold and used for segmentation. The proposed technique is tested on various cervical cytology images with blob or mosaic structures, compared with various existing algorithms, and shown to outperform them.
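The threshold search can be sketched as follows. The abstract does not define the Fuzzy Gaussian Index, so the index below is a hypothetical stand-in built on the same idea (Gaussian membership of each pixel to its class, with the threshold minimizing total fuzziness); only the minimize-over-candidates loop reflects the stated method.

```python
import math

def fuzzy_gaussian_index(pixels, t, sigma=None):
    # HYPOTHETICAL index: split pixels at t, give each pixel a Gaussian
    # membership to its class mean, and sum (1 - membership). A good
    # threshold makes both classes tight, so the index is small.
    lo = [p for p in pixels if p <= t]
    hi = [p for p in pixels if p > t]
    if not lo or not hi:
        return float('inf')
    if sigma is None:
        sigma = (max(pixels) - min(pixels)) / 4 or 1.0
    index = 0.0
    for group in (lo, hi):
        mean = sum(group) / len(group)
        for p in group:
            mu = math.exp(-((p - mean) ** 2) / (2 * sigma ** 2))
            index += 1.0 - mu
    return index

def optimal_threshold(pixels):
    # The pixel value minimizing the index, per the paper's criterion.
    candidates = sorted(set(pixels))[:-1]   # splitting above max is empty
    return min(candidates, key=lambda t: fuzzy_gaussian_index(pixels, t))
```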
This paper aims to solve the problem of detecting ghost objects, a common problem in background subtraction algorithms. A ghost object is a false detection that does not correspond to any actual object in the current image. In this work, we propose a ghost detection and removal method using color similarity comparison. The proposed solution is designed on the assumption that the ghost problem occurs because the object exists in the background image rather than in the current image. Color similarity between the detected foreground area and its surrounding area is used to determine whether the object appears in the background or the current image, and consequently whether the detection is a ghost or an actual object. The proposed solution has been tested on various datasets, including PETS2001 and our own, and the results show that it solves the ghost problem.
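The decision rule can be sketched as a mean-color comparison: a ghost's region in the current frame blends in with its surroundings (the object is gone), while the stale object still sits in the background model. This is a minimal illustration with invented names; the paper's actual similarity measure may differ.

```python
def mean_color(pixels):
    # Average RGB over a list of (r, g, b) tuples.
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) / n for c in range(3))

def color_distance(c1, c2):
    # Euclidean distance in RGB space.
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

def is_ghost(fg_current, fg_background, surround_current):
    # fg_current / fg_background: the detected foreground region's pixels
    # in the current frame and in the background model;
    # surround_current: pixels just outside the region in the current frame.
    cur = color_distance(mean_color(fg_current), mean_color(surround_current))
    bg = color_distance(mean_color(fg_background), mean_color(surround_current))
    # If the current-frame region matches its surroundings better than the
    # background-model region does, the "object" lives only in the
    # background model: a ghost.
    return cur < bg
```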
In this paper, a variational method for motion segmentation is proposed, realized by first estimating a discontinuity-preserving optical flow field and then segmenting it. The complementarity between estimation and segmentation is emphasized, and two main improvements are presented: first, a segmentation-oriented diffusion tensor is introduced into the optical flow model, which combines image and flow information and determines boundaries more effectively; second, according to the characteristics of the vector field, a motion segmentation model that considers both motion boundaries and region information is established. The results are illustrated with image-sequence examples that show the usefulness of the proposed approach for various problems.
Robust image partitioning with limited space-time requirements is crucial for real-time high-resolution mobile imaging applications. Simultaneous handling of the color components is a stumbling block in color image segmentation. This paper presents a strategy for segregating multi-dimensional complexity that resembles human visual perception. First-order regions are developed from the distribution of the base color (hue), creating meaningful partitions of the image. Each region, being self-contained, can be transmitted independently. Second-order segmentation of each hue region can then be done concurrently using the respective local distributions of saturation (S). Finally, each sub-region can undergo a third-order segmentation based on the local intensity (I) distribution. Experiments indicate that this segregation process improves both throughput and segmentation quality. The self-contained partitions are convenient for multi-host image sharing as well as progressive reconstruction of images at the receiver.
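The three-stage segregation described above can be sketched as follows; the hue bin count and the median-based local thresholds are illustrative assumptions, not the paper's actual distribution analysis:

```python
import numpy as np

def hsi_segment(hsv, hue_bins=6):
    """Three-stage hierarchical segmentation sketch:
    1) first-order partition on hue,
    2) per-region binary split on saturation (local median),
    3) per-sub-region binary split on intensity (local median)."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    labels = np.zeros(h.shape, dtype=np.int32)
    first = (h * hue_bins).astype(int).clip(0, hue_bins - 1)  # hue in [0,1)
    for r in range(hue_bins):
        m = first == r
        if not m.any():
            continue
        s_hi = s[m] >= np.median(s[m])        # second-order split
        for sub, sel in ((0, ~s_hi), (1, s_hi)):
            idx = np.flatnonzero(m.ravel())[sel]
            if idx.size == 0:
                continue
            v_hi = v.ravel()[idx] >= np.median(v.ravel()[idx])  # third order
            lab = r * 4 + sub * 2             # unique label per leaf region
            labels.ravel()[idx[~v_hi]] = lab
            labels.ravel()[idx[v_hi]] = lab + 1
    return labels
```

Because each hue region is processed independently, the second- and third-order stages could run concurrently per region, as the abstract suggests.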
Automatic brain tissue segmentation is a crucial task in medical image analysis for diagnosis and treatment. This paper presents a new algorithm to segment different brain tissues, such as white matter (WM), gray matter (GM), cerebral spinal fluid (CSF), background (BKG), and tumor tissue. The proposed technique uses the modified intraframe coding of H.264/AVC for feature extraction. The extracted features are then fed to an artificial back-propagation neural network (BPN) classifier, which assigns each block to its appropriate class. Since H.264/AVC, the newest coding standard, has the highest compression ratio, it reduces the dimension of the extracted features and thus yields a more accurate classifier with low computational complexity. The performance of the BPN classifier is evaluated in terms of classification accuracy and computational complexity. The results show that the proposed technique is more robust and effective, with lower computational complexity, than other recent works.
The basic aim of our study is to analyze medical images. In computer vision, segmentation refers to the process of partitioning a digital image into multiple regions. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. There is considerable scope for the analysis done in our project; it could be used for monitoring medical images. Medical imaging refers to the techniques and processes used to create images of the human body (or parts thereof) for clinical purposes (medical procedures seeking to reveal, diagnose or examine disease) or medical science (including the study of normal anatomy and function). As a discipline, and in its widest sense, it is part of biological imaging and incorporates radiology (in the wider sense), radiological sciences, endoscopy, (medical) thermography, medical photography and microscopy (e.g., for human pathological investigations), as well as measurement and recording techniques that are not primarily designed to produce images.
The birefringence of an asymmetric photonic crystal fiber under perturbation is investigated using the finite element method. By comparing the birefringence for different sizes of the large air hole, the results indicate that when the intrinsic birefringence is small, random offsets of the hole positions have a large influence on the birefringence. As the intrinsic birefringence increases, neither the offset of the hole positions nor the variation of the hole diameters has much effect on the birefringence.
Human face detection plays a vital role in many applications, such as video surveillance, face image database management and human-computer interfaces. This paper proposes a robust algorithm for face detection in still color images that works well even in a crowded environment. The algorithm combines a skin-color histogram, morphological processing and geometrical analysis to detect human faces. To reinforce the accuracy of face detection, we further identify mouth and eye regions to establish the presence or absence of a face in a particular region of interest.
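As a rough illustration of the skin-color stage, a candidate mask can be built from fixed YCbCr ranges; the specific thresholds below are commonly quoted values and an assumption here, since the paper actually uses a skin-color histogram:

```python
import numpy as np

def skin_mask(rgb):
    """Candidate skin mask via fixed YCbCr thresholds (illustrative
    ranges; a histogram-based model as in the paper may differ)."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    # standard RGB -> CbCr conversion
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # widely used skin chrominance box
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)
```

The resulting mask would then be cleaned by morphological processing before the geometrical analysis described above.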
The illumination variation problem is one of the well-known problems in face recognition in uncontrolled environments. Because both Gabor features and LTP (local ternary patterns) have been shown to be robust to illumination variations, we propose a new approach that achieves face recognition under variable illumination by combining Gabor filters with the LTP operator. Experimental results, compared with published results on the Yale-B and CMU PIE face databases with varying illumination, verify the validity of the proposed method.
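The LTP operator itself is simple to state: each of the 8 neighbours is compared against the centre pixel with a tolerance t and coded as +1, 0 or -1, then split into the usual upper and lower binary codes. A minimal sketch (the threshold t=5 is an assumption):

```python
import numpy as np

def ltp_codes(img, t=5):
    """Local ternary pattern of the 8-neighbourhood, split into the
    conventional upper (+1) and lower (-1) binary codes."""
    c = img[1:-1, 1:-1].astype(int)          # centre pixels
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    for k, (dy, dx) in enumerate(offs):
        n = img[1 + dy:img.shape[0] - 1 + dy,
                1 + dx:img.shape[1] - 1 + dx].astype(int)
        upper |= (n >= c + t).astype(int) << k   # neighbour clearly brighter
        lower |= (n <= c - t).astype(int) << k   # neighbour clearly darker
    return upper, lower
```

In the paper's pipeline this operator would be applied to the Gabor filter responses rather than to the raw image.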
This paper proposes a swarm-intelligence-based memetic algorithm for task allocation and scheduling in distributed systems. Task scheduling in distributed systems is known to be NP-complete, and many genetic algorithms have been proposed to search for optimal solutions in the entire solution space. However, these existing approaches scan the entire solution space without techniques that can reduce the complexity of the optimization; their main shortcoming is the excessive time spent on scheduling. Therefore, in this paper a memetic algorithm is used to address this shortcoming. To balance load efficiently, Bee Colony Optimization (BCO) is applied as the local search in the proposed memetic algorithm. Extensive experimental results demonstrate that the proposed method outperforms the existing GA-based method in terms of CPU utilization.
In this paper, an extended projection temporal logic (EPTL), based on a primitive operator prj, is formalized. Furthermore, as an executable subset of EPTL, an object-oriented MSVL is presented, which extends the temporal logic programming language MSVL to support objects, classes, aliasing, inheritance and overloading. An example of modeling and simulating digital signal processing illustrates how to use and execute the language.
A new hybrid genetic algorithm based on the simple genetic algorithm is presented in this paper. In this algorithm, some genetic operators, such as the crossover operator, are improved: a crossover method based on a threshold and the two-point crossover method are combined into a new hybrid crossover method. A Resource-Constrained Project Scheduling Problem (RCPSP) example is given, covering its activity network, the execution time and resource requirements of each activity, and the selection and crossover operators. Further examples demonstrate the superiority of the new algorithm, which helps speed up evolution and obtain the optimal solution.
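Since the abstract does not spell out the hybrid rule, the sketch below shows one plausible combination of a threshold-based gene swap with classical two-point crossover; the parameter `theta` and the mixing order are assumptions, not the paper's design:

```python
import random

def two_point(p1, p2, rng):
    """Classical two-point crossover on list chromosomes."""
    i, j = sorted(rng.sample(range(1, len(p1)), 2))
    return p1[:i] + p2[i:j] + p1[j:], p2[:i] + p1[i:j] + p2[j:]

def hybrid_crossover(p1, p2, rng, theta=0.5):
    """Hypothetical hybrid operator: genes are first mixed by a
    threshold rule (swap a gene when a uniform draw exceeds theta),
    then the resulting pair undergoes two-point crossover."""
    c1, c2 = list(p1), list(p2)
    for k in range(len(p1)):
        if rng.random() > theta:
            c1[k], c2[k] = c2[k], c1[k]
    return two_point(c1, c2, rng)
```

Whatever the exact rule, any such operator preserves the gene pool position by position, which is easy to check.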
By analyzing the inherent nature of dynamic scheduling needs, a wearable-computing foundation based on natural human-computer interaction is established for EOTAS dynamic scheduling, and a new concept of wearable human-machine cooperation is proposed. Around its concrete implementation and application, an EOTAS dynamic scheduling method based on extended colored fuzzy Petri nets is developed, providing a preliminary solution to the fast scheduling problem in EOTAS field applications in operational business environments.
With the rapid development of information technology and the extensive demand for network resource sharing, hotlinking of resources has become widespread on the internet. Hotlinking not only harms the interests of legitimate websites but also greatly undermines a fair internet environment. Anti-leech techniques based on session identifiers are highly secure, but transmitting the session identifier in plaintext introduces security flaws. This paper first describes a proxy hotlinking technique based on session identifiers to illustrate these flaws. It then proposes an improved anti-leech mechanism based on session identifiers, which takes a random factor as its core and detects hotlinking requests using a map table that contains the random factor, user information and a timestamp. Finally, the security of the mechanism is analyzed in theory. The results reveal that the improved mechanism is simple to implement, highly secure and very flexible.
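The map-table mechanism can be sketched as follows; the TTL policy and the HMAC construction are illustrative assumptions, not the paper's exact design:

```python
import hmac
import hashlib
import secrets
import time

SERVER_KEY = secrets.token_bytes(32)
TTL = 30                 # seconds a session stays valid (assumed policy)
table = {}               # map table: session id -> (random factor, user, timestamp)

def issue_session(user):
    """Issue a session identifier bound to a fresh random factor."""
    rand = secrets.token_hex(16)
    ts = time.time()
    sid = hmac.new(SERVER_KEY, f"{rand}|{user}|{ts}".encode(),
                   hashlib.sha256).hexdigest()
    table[sid] = (rand, user, ts)
    return sid

def check_request(sid, user):
    """Reject hotlinked requests: unknown id, wrong user, or expired."""
    entry = table.get(sid)
    if entry is None:
        return False
    rand, owner, ts = entry
    return owner == user and time.time() - ts <= TTL
```

Because the identifier is derived from a server-side random factor and validated against the map table, replaying a leaked identifier from another user or after expiry fails.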
A remote video monitoring system based on the Java Media Framework (JMF) is presented in this paper. It is cross-platform and offers low delay and low bandwidth consumption. The system consists of three layers: a data acquisition layer, a service layer and a client layer. The hardware is connected over a local area network, and various video devices can be identified by the system. The software, based on Java and JMF, captures, compresses, sends, receives and plays video data and can run on different operating systems without modification. The H.263 compression algorithm is adopted, and the RTP protocol, together with RTCP, is used to transport video data. The client layer can access the system via the Internet or 3G, which is convenient and flexible. Maintenance personnel can easily supervise device status at any time, keeping the equipment in good condition and helping to enhance the competitiveness of power plants.
This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaics in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which only a few selected master cameras need to be calibrated. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. Homography matrices are employed to compute the overlapping pixels, and finally a boundary resampling algorithm blends the images. Simulation results demonstrate the efficiency of our method.
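The homography step can be illustrated on the image corners: warping them into the reference frame gives the region where overlapping pixels must be computed. This is a simplified stand-in for the paper's per-pixel overlap and blending:

```python
import numpy as np

def warp_points(H, pts):
    """Apply a 3x3 homography H to an Nx2 array of points."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]          # back to inhomogeneous coords

def overlap_bbox(H, w, h):
    """Bounding box, in the reference frame, of a w-by-h image's
    corners after warping by H."""
    corners = np.array([[0, 0], [w, 0], [w, h], [0, h]], float)
    wp = warp_points(H, corners)
    return wp.min(axis=0), wp.max(axis=0)
```

Intersecting this box with the reference image's own extent yields the candidate overlap region to blend.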
On the basis of the User Datagram Protocol (UDP), we make improvements and design a welding robot network communication protocol (WRNCP), operating at the transport and application layers of the TCP/IP stack. According to the characteristics of video data, we design a broadcast push model (BPM) transmission method, improving the efficiency and stability of video transmission, and we design a network information transmission system for real-time network control of welding robots.
A multi-cue Camshift algorithm is presented for object tracking in complex scenes. Because the color-based Camshift algorithm suffers when objects share colors with a cluttered background, this paper integrates a motion cue with the color cue to extend its range of application. Moreover, a robust real-time multiresolution algorithm is used to obtain motion information against a dynamic background. Experiments verify that the proposed multiple-cue strategy improves the tracking performance of the classical single-cue Camshift.
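A minimal sketch of the cue fusion: blend the colour back-projection with a frame-difference motion map before handing the result to Camshift. The weighting `alpha` and threshold `t` are assumptions, and plain frame differencing stands in for the paper's multiresolution motion estimator:

```python
import numpy as np

def fused_backprojection(color_prob, frame, prev_frame, alpha=0.5, t=15):
    """Combine a colour back-projection map (values in [0,1]) with a
    binary motion map from frame differencing."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    motion = (diff > t).astype(float)            # crude motion cue
    return alpha * color_prob + (1 - alpha) * motion
```

Pixels supported by both cues score highest, so color clutter that is static in the background is suppressed.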
License plate recognition (LPR) systems play an important role in intelligent transportation systems (ITSs), and locating a license plate in a complex scene is difficult. Our location strategy integrates the blue-region, vertical-texture and contrast features of license plates within the framework of an improved visual attention model. We improve the visual attention model by replacing normalization and linear combination with feature-image binarization and logical operations. The multi-scale center-surround difference mechanism of the visual attention model makes the feature extraction robust. Tests on pictures captured by different equipment under different environments give encouraging results; the success rate for location is as high as 95.28%.
This paper introduces a face verification system implementation. For feature extraction, the algorithm is based on a classical texture descriptor, Local Binary Patterns (LBP). For decision making, a new method is proposed to determine the client-dependent threshold (CD-th); compared with a traditional fixed threshold, it significantly reduces the error rate. Moreover, a symmetry factor is defined to increase the frontal face detection rate, and a storage mode is designed to reduce the time spent on feature extraction. The implemented face verification system requires only one sample per person and thus avoids the difficulties of multi-sample face verification systems, including image capture, storage limitations and time consumption. Experiments demonstrate the effectiveness of the proposed system.
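One common way to realize a client-dependent threshold is to place it between the client's genuine and impostor score means; the rule below is illustrative only, as the paper's exact CD-th computation is not reproduced here:

```python
import numpy as np

def client_threshold(genuine_scores, impostor_scores):
    """Hypothetical CD-th: midpoint between the mean genuine score and
    the mean impostor score for this particular client."""
    return 0.5 * (np.mean(genuine_scores) + np.mean(impostor_scores))

def verify(score, threshold):
    """Accept the claim when the similarity score clears the client's
    own threshold."""
    return score >= threshold
```

Per-client thresholds adapt to the fact that some clients produce systematically higher or lower similarity scores than others, which a single global threshold cannot capture.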
Subscriber radio location techniques for code division multiple access (CDMA) cellular networks have been studied extensively in recent years. Network-based angle of arrival (AOA), time difference of arrival (TDOA) and time of arrival (TOA) techniques offer solutions to the position estimation problem. In this paper, the signal processing scheme of an IS-95-based wireless location system is presented, in which the TOA of reverse access channel transmissions is measured using sub-correlation detection algorithms and the TOA estimation accuracy is improved with a second search. Furthermore, reverse access channel decoding is implemented to identify the access channel message type, mobile identification and dialed number.
With the rapid growth of the biomedical literature, the deluge of new articles is leading to information overload, and extracting the available knowledge from this huge body of literature has become a major challenge. GDRMS is a tool that extracts disease-gene and gene-gene relationships from the biomedical literature using text mining. It is a rule-based system that also provides disease-centered network visualization, constructs a disease-gene database, and offers a gene engine for understanding gene function. The main focus of GDRMS is to give the research community a valuable means of exploring disease-gene relationships when studying the etiology of disease.
Large-scale, high-quality video-on-demand technology has been a hot research topic. In recent years, the development of cloud computing theory and virtualization cluster technology has provided new ideas for constructing large-scale video-on-demand systems. This paper presents the design of a Distributed Video-On-Demand System Based on Cloud Computing (DCC-VOD system), applicable to a variety of large and medium network environments. It also introduces the core components, focusing on the implementation of the load-balancing server for the distributed system. Finally, comparison tests in a simulated environment show that the system not only enhances server performance and resource utilization but also maximizes the number of concurrent users. It greatly improves the reliability and stability of the system and achieves high cost-effectiveness, meeting the current video-on-demand needs of medium-sized and even large-scale network environments.
Fingerprint identification is one of the most important biometric technologies. The performance of minutiae extraction and the speed of a fingerprint verification system rely heavily on the quality of the input fingerprint images, so enhancement of low-quality fingerprints is a critical and difficult step in a fingerprint verification system. In this paper we propose an effective algorithm for fingerprint enhancement. First, we use a normalization algorithm to reduce the variations in gray-level values along ridges and valleys. Then we use the structure tensor approach to estimate the orientation at each pixel of the fingerprint. Finally, we propose a novel algorithm that combines the advantages of one-dimensional Gabor filtering and anisotropic filtering to enhance the fingerprint in the recoverable region. The proposed algorithm has been evaluated on the Fingerprint Verification Competition 2004 database, and the results show that it performs well in less time.
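The normalization step is typically the classical mean/variance form G = m0 + sign(I - m) * sqrt(v0 * (I - m)^2 / v), which maps the image to a prescribed mean m0 and variance v0. A minimal sketch (the targets of 100 are conventional choices, assumed here):

```python
import numpy as np

def normalize(img, m0=100.0, v0=100.0):
    """Mean/variance normalization: the output has mean m0 and
    variance v0, reducing gray-level variation along ridges/valleys."""
    img = img.astype(float)
    m, v = img.mean(), img.var()
    if v == 0:                            # flat image: nothing to rescale
        return np.full_like(img, m0)
    dev = np.sqrt(v0 * (img - m) ** 2 / v)
    return np.where(img > m, m0 + dev, m0 - dev)
```

Because the mapping is pixelwise and monotone around the mean, ridge structure is preserved while contrast is standardized for the later orientation and filtering stages.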
The system uses machine vision to inspect logarithmic spiral bevel gears, a new non-contact measurement technique with high precision and efficiency. Two cameras at different locations capture two images of the gear, which are loaded into a computer. The correspondence between the feature points of the two images and a geometric model of optical imaging are used to solve for the three-dimensional coordinates of points on the tooth surface. The tooth shape is then evaluated by comparison with ideal surface point parameters. This inspection method is flexible and provides technical support for the processing, inspection and correction of logarithmic spiral bevel gears.
We present a sensor fusion framework for real-time tracking applications that combines inertial sensors with a camera. To make clear how to exploit the information in the inertial sensors, two fusion models, a gyroscopes-only model and an accelerometer model, are presented within an extended Kalman filter framework. The gyroscopes-only model uses gyroscopes to support vision-based tracking without considering acceleration measurements. The accelerometer model uses measurements from the gyroscopes, the accelerometers and the vision data to estimate the camera pose, velocity, acceleration and sensor biases. Synthetic data and real image sequences show dramatic improvements in tracking stability and in the robustness of the estimated motion parameters for the gyroscope model when the accelerometer measurements drift.
Recognition of handwritten Uighur words is important for Uighur information automation and for developing new-generation handwriting input systems on mobile platforms. A robust and accurate handwritten character segmentation algorithm is an important prerequisite for Uighur recognition. Based on comprehensive consideration of computation, robustness and the characteristics of the script itself, a simple but effective handwritten Uighur character segmentation algorithm is proposed. Furthermore, we develop a Uighur input system on an intelligent mobile platform and simultaneously construct a medium-scale Uighur handwritten word database. The segmentation algorithm is evaluated in detail on this database, and extensive experiments demonstrate its robustness and efficiency.
An ID-based authenticated group key agreement (AGKA) protocol allows a group of members to share a key, with assurance that the key is shared with the intended group via the users' identities; such protocols are used in conferencing environments. In 2004, Choi et al. proposed an ID-based authenticated group key agreement with bilinear maps (the CHL protocol), extended from the Burmester-Desmedt conference key agreement protocols. Unfortunately, Zhang, Chen and Shim found their protocols vulnerable to insider attacks in which two malicious users hold previous authentication transcripts of a party. In this paper, we propose an improved ID-based AGKA. In our scheme, each session has a unique session identity published by the Key Generation Center. By binding this unique session identity to each session, our protocol prevents the insider attack. Notably, our protocol does not increase the computational cost and remains efficient.
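The session-binding idea can be sketched independently of the pairing-based details: fold the KGC-published session identity into the key derivation, so material from one session cannot be replayed in another. The hash-based derivation below is illustrative only, not the paper's protocol:

```python
import hashlib

def derive_group_key(shared_secret, session_id, members):
    """Toy key derivation that binds the group key to a unique
    session identity and the (order-independent) member set."""
    data = session_id + b"|" + b",".join(sorted(members)) + b"|" + shared_secret
    return hashlib.sha256(data).digest()
```

Even with identical group members and shared material, a different session identity yields an unrelated key, which is the property that defeats transcript-replay insider attacks.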
This paper proposes a new paradigm for the design of cryptographic filesystems. Traditionally, cryptographic file
systems have mainly focused on encrypting entire files or directories. In this paper, we envisage encryption at a finer
granularity, i.e. encrypting parts of files. Such an approach is useful for protecting parts of large files that typically
feature in novel applications focused on handling a large amount of scientific data, GIS, and XML data. We extend prior
work by implementing a user level file system on Linux, UsiFe, which supports fine grained encryption by extending the
popular ext2 file system. We further explore two paradigms in which the user is agnostic to encryption in the underlying
filesystem, and the user is aware that a file contains encrypted content. Popular file formats like XML, PDF, and
PostScript can leverage both of these models to form the basis of interactive applications that use fine grained access
control to selectively hide data. Lastly, we measure the performance of UsiFe, and observe that we can support file
access for partially encrypted files with less than 15% overhead.
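Range-level encryption, the core of the fine-grained approach, can be sketched as follows; the toy SHA-256 counter keystream is purely illustrative and stands in for the real cipher a system like UsiFe would use:

```python
import hashlib

def keystream(key, nonce, length):
    """Toy keystream from SHA-256 in counter mode (illustration only;
    a production system would use a vetted stream/block cipher)."""
    out = b""
    ctr = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:length]

def crypt_range(data, start, end, key, nonce):
    """XOR-encrypt (or decrypt) only data[start:end], leaving the rest
    of the file content in the clear."""
    ks = keystream(key, nonce, end - start)
    mid = bytes(a ^ b for a, b in zip(data[start:end], ks))
    return data[:start] + mid + data[end:]
```

Because XOR with a keystream is its own inverse, applying `crypt_range` twice with the same key and nonce restores the original bytes, and reads outside the protected range never touch the cipher at all.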
Support Vector Machines (SVMs), based on the Structural Risk Minimization (SRM) principle of statistical learning theory, have excellent performance in fault diagnosis, but their training and diagnosis speeds are relatively slow. Signed Directed Graphs (SDGs), based on a deep knowledge model, have better completeness, i.e., knowledge representation ability; however, the qualitative SDG model ignores much quantitative information and often produces false solutions. To speed up SVM training and diagnosis and to improve the diagnostic resolution of the SDG, this paper combines the two: the dimension of the SVM training samples is reduced via the consistent paths of the SDG, improving training and diagnosis speed, while the resolution of the SDG is improved by the SVM's good classification performance. Matlab simulation on the Tennessee-Eastman Process (TEP) simulation system demonstrates the feasibility of the proposed fault diagnosis algorithm.
Ever since the concept of analog network coding (ANC) was put forward by S. Katti, much attention has focused on
how to use it to turn wireless interference, generally considered harmful, into a throughput gain. Previously, only the
case of two nodes exchanging information had been fully discussed, while extending analog network coding to three or
more nodes remained undeveloped. In this paper, we propose a practical transmission scheme that extends analog
network coding to more than two nodes exchanging information among themselves. We start with the case of three
nodes exchanging information and demonstrate that our algorithm increases throughput by 33% and 20% compared
with traditional transmission scheduling and digital network coding, respectively. We then generalize the algorithm to
any number of nodes, and discuss several technical issues along with throughput and bit error rate analyses.
Complex systems usually contain complex causal relationships among faults. After introducing Cellular Automata
(CA) and their evolution theory, this paper studies a CA-based method for analyzing fault-relating patterns, proposes its
application to such patterns, and presents an Extended CA (ECA) together with its algorithm. Simulation and analysis of
the evolution of complex multi-fault relating chains are realized. An example analysis of mechanical-system fault
relations shows that the proposed ECA method breaks through the limits of the spatial rules and homogeneity of
classical CA and provides a suitable analytical tool for research on fault-relating patterns. The preliminary results show
that the method can effectively reveal association patterns among multiple faults in a system.
This article focuses on an application in chemical engineering: a fuzzy modeling methodology designed to determine
two characteristics of a chemical compound (a ferrocenylsiloxane polyamide) relevant for self-assembly - surface
tension and maximum UV absorbance, measured as functions of temperature and concentration. An important advantage
of a fuzzy rule-based inference system for determining the polyamide solution characteristics is that it allows the
knowledge contained in the model to be interpreted and improved with a-priori knowledge. The results obtained with the
proposed method are highly accurate and can be further optimized using the information available during the modeling
process. They show that applying a Mamdani fuzzy inference system to the estimation of the optical and surface
properties of a polyamide solution is theoretically feasible and computationally reliable.
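A Mamdani inference step of the kind the abstract describes can be sketched compactly. The membership parameters, rule base, and output range below are invented for illustration; the paper's actual fuzzy sets for temperature, concentration, and surface tension are not given here.

```python
# Minimal Mamdani sketch: two inputs (temperature, concentration), one output
# (surface tension), two rules, min-firing, max-aggregation, and centroid
# defuzzification over a discretized output universe. All parameters are
# illustrative, not taken from the paper.

def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani(temp, conc):
    # Rule firing strengths: min of the antecedent memberships.
    w_low  = min(tri(temp, 10, 20, 30), tri(conc, 0.0, 0.1, 0.2))  # -> low ST
    w_high = min(tri(temp, 20, 30, 40), tri(conc, 0.1, 0.2, 0.3))  # -> high ST
    # Clip consequents at firing strength, aggregate by max, take centroid.
    num = den = 0.0
    for i in range(101):
        y = 20 + i * 0.4                       # surface tension 20..60 mN/m
        mu = max(min(w_low, tri(y, 20, 30, 40)),
                 min(w_high, tri(y, 40, 50, 60)))
        num += y * mu
        den += mu
    return num / den if den else 0.0
```

With both rules firing equally the clipped consequents are symmetric, so the centroid lands midway between the two output sets.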
OWL (Web Ontology Language) is the de facto standard ontology description language of the Semantic Web.
Because OWL is mainly designed for applications that need to process the content of information, it is difficult for
domain experts to read and understand, and thus to build or verify domain ontologies expressed in OWL. ORM
(Object Role Modeling) is a conceptual modeling language with graphical notations whose models/schemas can be
translated into pseudo-natural language, making it easier, also for domain experts who are not IT specialists, to create,
check, and adapt knowledge about the UoD (Universe of Discourse). Based on a formal logic analysis of OWL DL and
ORM and an extension of the ORM notations, mapping rules have been presented to visualize OWL DL ontologies with
ORM.
Virtualization is one of the hottest research topics nowadays. Academic researchers and developers from the IT
industry are designing approaches to solve the security and manageability issues of Virtual Machines (VMs) residing on
virtualized cloud infrastructures. Moving an application from a physical to a virtual platform increases efficiency and
flexibility and reduces management cost and effort. Cloud computing adopts the paradigm of virtualization: using this
technique, memory, CPU, and computational power are provided to clients' VMs from the underlying physical
hardware. Besides these advantages, there are a few challenges in adopting virtualization, such as management of VMs
and network traffic, unexpected additional cost, and resource allocation. The Virtual Machine Monitor (VMM), or
hypervisor, is the tool cloud providers use to manage VMs on the cloud. Several heterogeneous hypervisors are provided
by various vendors, including VMware, Hyper-V, Xen, and Kernel Virtual Machine (KVM). Considering the challenge
of VM management, this paper describes several techniques to monitor and manage virtualized cloud infrastructures.
A new face recognition method is proposed in this paper. The method, fuzzy regularized linear discriminant analysis
(FR-LDA), combines regularized linear discriminant analysis (R-LDA) with fuzzy set theory. R-LDA is based on a
regularized Fisher's discriminant criterion that is particularly robust against the small-sample-size problem compared
with the traditional criterion used in LDA. In the proposed method, we calculate the membership degree matrix with the
fuzzy K-nearest neighbor (FKNN) algorithm and incorporate the membership degrees into the definitions of the
between-class and within-class scatter matrices, obtaining their fuzzy counterparts. Experimental results on the FERET
database demonstrate that the proposed method improves the classification rate.
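The FKNN membership-assignment step can be sketched as below. This follows the well-known formulation of Keller et al. (own-class memberships boosted by a 0.51 + 0.49·n/k rule); the paper's exact variant may differ, and the sample data are invented.

```python
# Sketch of fuzzy k-nearest-neighbour (FKNN) membership assignment in the
# style of Keller et al. Each training sample receives a membership degree to
# every class based on the labels of its k nearest neighbours.

def fknn_memberships(samples, labels, k, classes):
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    memberships = []
    for i, x in enumerate(samples):
        # k nearest neighbours of x among the other training samples
        others = sorted((j for j in range(len(samples)) if j != i),
                        key=lambda j: dist2(x, samples[j]))[:k]
        row = {}
        for c in classes:
            n_c = sum(1 for j in others if labels[j] == c)
            if c == labels[i]:
                row[c] = 0.51 + 0.49 * n_c / k   # boosted toward own class
            else:
                row[c] = 0.49 * n_c / k
        memberships.append(row)
    return memberships

# Two well-separated 2-D clusters, three samples each.
samples = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
           [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]]
labels = ["c1", "c1", "c1", "c2", "c2", "c2"]
M = fknn_memberships(samples, labels, k=2, classes=["c1", "c2"])
```

The rows of `M` are then what gets folded into the fuzzy scatter-matrix definitions.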
In this paper, a high-precision, convenient, and extensible simulation platform for satellite attitude and orbit systems
is developed to demonstrate the attitude and orbit system of a given space mission and to test newly developed
algorithms for attitude/orbit dynamics, attitude determination, orbit navigation, and attitude/orbit control. The platform
is based on Matlab/Simulink, using Simulink modeling, imported C/Fortran code, and embedded Matlab functions, and
offers good reusability, inheritability, and expansibility. The paper presents, in order, the background behind the
development of the platform, its design architecture and capabilities, its validation, its inheritability and expansibility,
an implementation example for the Chinese weather satellite (FY-3), and future development plans.
A route-detecting algorithm is proposed for cucumber-picking robot navigation in a greenhouse environment. A
candidate navigation route is determined from the accumulated values of a column scan. The color difference between
the cucumber plants and the mid-row soil is analyzed by combining the separated R, G, and B components into
extra-green (ExG) and extra-red (ExR) values. The resulting gray image is segmented with the OTSU threshold method,
which identifies the pixels with abrupt gray-level changes along the two sides of the central line. The mean of the
selected pixels in each line provides the discrete points along the navigation route, and the least squares method fits
these points into navigation lines, from which the route for the current navigation scheme is determined. The algorithm
was applied to multiple images to verify its high speed and noise robustness.
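Two steps of this pipeline are simple enough to sketch: the extra-green index used to separate plants from soil, and the least-squares fit over the discrete row-centre points. The formula ExG = 2G - R - B is the common definition of this index; the pixel values and centre points below are illustrative.

```python
# Sketch: extra-green index for plant/soil separation, and least-squares
# fitting of the navigation line over discrete (row, column) centre points.

def exg(r, g, b):
    """Extra-green index: plants score high, soil scores low."""
    return 2 * g - r - b

def fit_line(points):
    """Least-squares fit x = a*y + b over (y, x) centre points, where y is
    the image row and x the column of the route centre in that row."""
    n = len(points)
    sy = sum(y for y, _ in points)
    sx = sum(x for _, x in points)
    syy = sum(y * y for y, _ in points)
    syx = sum(y * x for y, x in points)
    a = (n * syx - sy * sx) / (n * syy - sy * sy)
    b = (sx - a * sy) / n
    return a, b

# A green plant pixel outscores a brownish soil pixel on ExG.
assert exg(40, 120, 30) > exg(120, 100, 80)

# Centre points drifting one column per row fit a line of slope 1.
a, b = fit_line([(0, 10.0), (1, 11.0), (2, 12.0), (3, 13.0)])
```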
Nowadays, photocopiers are commonly used to reprint or duplicate documents, particularly reference books, which
can be a major setback for publishers and authors retailing those books to readers. As research in photometry and
colorimetry has advanced, it has been established that colours carry luminance values analogous to grayscale intensities
ranging from white to black. By exploiting the luminance of colour properties, grayscale photocopying can be greatly
discouraged: the contents of grayscale printed copies or photocopies become scarcely visible.
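The luminance relationship being exploited can be illustrated with the standard ITU-R BT.601 weighting; whether the paper uses exactly these coefficients is not stated, and the example colour pair is invented. Two colours that look different on screen but share nearly the same luminance collapse into the same gray when photocopied.

```python
# Sketch: two visibly different colours with nearly equal BT.601 luminance,
# so text in one on a background of the other vanishes in grayscale.

def luminance(r, g, b):
    """Approximate perceived luminance (ITU-R BT.601 weights), range 0-255."""
    return 0.299 * r + 0.587 * g + 0.114 * b

ink = luminance(255, 0, 0)          # pure red text
background = luminance(0, 116, 77)  # a green tuned to nearly red's luminance
```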
To improve the safety and reliability of capacitive equipment, an online monitoring system for the dielectric loss of
capacitive equipment was developed. This paper introduces the structure and function of the software, which is based on
the B/S architecture and uses a modular design to improve readability and scalability. The design of the communication
module, parameter-setting module, data acquisition and processing module, user management module, database system,
etc., is detailed, and the entire process is given. Testing shows that the monitoring software runs stably and reliably,
continuously monitors the various insulation data of capacitive equipment over the long term, and satisfies the
requirements of on-site application.
Using an onboard computer and a smart webcam, we built a practical real-time visual detection system. Installed in
the driving cabin in a specific way, it determines the degree of closure of the driver's eyelids for further study of
detecting and mitigating driver drowsiness and fatigue. For a practical real-time application, both the temporal
resolution and the image resolution of the video processing are key performance factors. Instead of the traditional
strategy, which detects the face in the first frame and then tracks it, a frame-by-frame detecting and locating strategy is
adopted: the face region is first located in a size-reduced image; the face region in the original image is then located by
a linear mapping calculation; finally, the eye region is located within the face region according to a facial geometric
model, which allows intuitive personal calibration at run time for different individuals. A novel combination of several
practical computer-vision methods is proposed, including frame capture and resizing, face and eye localization,
adaptive-threshold segmentation, and mathematical morphology. Experiments show that the proposed system achieves
an image resolution of 640×480 pixels, a temporal resolution of 53 frames / second, and an average eye-blink detection
accuracy of 96% under various illumination conditions.
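The linear mapping step in that pipeline amounts to scaling the detected box by the resize factors. A minimal sketch, with illustrative frame sizes matching the 640×480 resolution mentioned above and an invented reduced size:

```python
# Sketch: map a face box found in the size-reduced frame back to
# full-resolution coordinates via the horizontal/vertical scale factors.

def map_box(box, scale_x, scale_y):
    """Map (x, y, w, h) from the reduced image back to the original frame."""
    x, y, w, h = box
    return (int(x * scale_x), int(y * scale_y),
            int(w * scale_x), int(h * scale_y))

full_w, full_h = 640, 480
small_w, small_h = 160, 120                      # assumed reduction factor 4
sx, sy = full_w / small_w, full_h / small_h

face_small = (30, 20, 40, 50)                    # box found in reduced image
face_full = map_box(face_small, sx, sy)          # box in the 640x480 frame
```

Detection runs on the cheap 160×120 image while eye analysis still uses full-resolution pixels inside `face_full`.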
Music notation is very important information for musicians as well as music fans: to recreate a piece of music heard
before, one has to know the musical notes it contains. For many years computer scientists and engineers have tried to
devise techniques to automate the task of extracting musical notes from a music piece. With the advent of computer
science, many digital formats arrived that facilitate storing and encoding music information, but given the dynamic
nature of sound, the problem of analyzing it remains. Many statistical methods have been proposed in the literature, but
implementation-specific detail is very hard to find. This paper tries to address that issue. At our research lab at the
University of Moratuwa, we developed Miyaesi, a system implemented in the Java programming language that performs
automatic music transcription of monophonic music with a good level of accuracy.
Restoration techniques are applied to degraded face samples. The techniques considered are Wiener filtering,
Lucy-Richardson deconvolution, blind deconvolution, and constrained least squares filtering (CLSF). Images degraded
by low blur, high blur, and low blur with noise are experimented with, and the results are expounded.
A flexible grinding technology based on electric current control is presented to resolve the problem of the low
rigidity of PCBs during grinding; their thickness varies from 0.1 mm up to 3.5 mm. The comparison between the
real-time current and the set current during grinding controls the frequency and number of servo pulses, and the servo
motor then adjusts the grinding depth of the brushing roller at several rotational speeds, thereby realizing a constant
grinding force. The results show that PCBs can be ground efficiently and accurately by means of this current-controlled
flexible grinding technology.
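The control loop described above can be sketched as follows. The gain, deadband, and current values are invented for illustration; the paper's actual pulse-frequency law is not reproduced here.

```python
# Sketch: the current error between the real-time motor current and the set
# current decides how many servo pulses (and in which direction) adjust the
# brush-roller depth each control cycle. Parameters are illustrative.

def depth_step(real_current, set_current, pulses_per_amp=10, deadband=0.05):
    """Return a signed servo pulse count: retract when current (hence grinding
    force) is too high, feed in when too low, hold inside the deadband."""
    err = real_current - set_current
    if abs(err) <= deadband:
        return 0
    return -round(err * pulses_per_amp)   # negative error -> positive feed

assert depth_step(2.0, 2.0) == 0          # on target: hold depth
assert depth_step(2.5, 2.0) < 0           # grinding too hard: retract
```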
To improve the accuracy and precision of license plate location in a license plate recognition system, this paper
introduces a license plate location algorithm with an adaptive threshold, which achieves exact location of the plate based
on an improved texture characteristic. Character segmentation relies on a projection threshold that adapts to different
images. Finally, a BP neural network is used to identify each character. Experimental results show that the method
achieves accurate license plate recognition, with advantages such as accurate positioning and robustness.
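The adaptive projection threshold for character segmentation can be sketched as below. Deriving the threshold from a fraction of the image's own mean projection is one plausible reading of "able to change according to different images"; the fraction and the toy image are assumptions.

```python
# Sketch: segment characters by vertical projection, cutting where the column
# sum exceeds a threshold derived from this image's own projection profile.

def segment_columns(binary_rows, frac=0.5):
    """binary_rows: rows of 0/1 pixels. Returns (start, end) column spans
    whose vertical projection exceeds frac * the mean projection."""
    width = len(binary_rows[0])
    proj = [sum(row[c] for row in binary_rows) for c in range(width)]
    thresh = frac * sum(proj) / width            # adaptive, image-dependent
    spans, start = [], None
    for c, p in enumerate(proj):
        if p > thresh and start is None:
            start = c
        elif p <= thresh and start is not None:
            spans.append((start, c))
            start = None
    if start is not None:
        spans.append((start, width))
    return spans

# Two "characters" separated by an empty gap column.
img = [[1, 1, 0, 1, 1],
       [1, 0, 0, 0, 1],
       [1, 1, 0, 1, 1]]
spans = segment_columns(img)
```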
In mechanical processes such as milling, cutting, and sheet coiling, lines and arcs are the two most common and
simplest machining tracks, so their extraction is of practical significance. In this paper, the research object is a
continuous, uninterrupted, non-self-crossing curve. A method for identifying lines and arcs based on a Freeman
chain-code line recognition algorithm is presented, and the basic geometric features of circles and rectangles are finally
extracted. Experimental results prove that the method is feasible.
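The Freeman chain-code representation the method works on can be sketched briefly. The crude "at most two distinct codes" line test below is a standard property of digital straight segments, used here only as an illustration; the paper's recognition rules are more elaborate.

```python
# Sketch: Freeman 8-direction chain coding of an 8-connected curve. A digital
# straight segment uses at most two adjacent codes, while an arc's codes
# drift steadily around the direction ring.

DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Freeman codes for consecutive 8-connected points (x, y)."""
    return [DIRS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(points, points[1:])]

def looks_like_line(codes, max_distinct=2):
    """Crude line test: a digital line uses at most two distinct codes."""
    return len(set(codes)) <= max_distinct

horizontal = [(0, 0), (1, 0), (2, 0), (3, 0)]
diagonalish = [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2)]
```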
Text mining and ontology learning can be effectively employed to acquire Chinese semantic information. This
paper explores a framework of semantic text mining based on ontology learning to discover potential semantic
knowledge in the immense amount of text on the Internet. The framework consists of four parts: Data Acquisition,
Feature Extraction, Ontology Construction, and Text Knowledge Pattern Discovery. The framework is then applied to
an actual case to find valuable information and even to assist consumers in selecting suitable products. The results show
that the framework is reasonable and effective.
The supercritical main steam temperature system has large inertia, long delay, and nonlinearity, and its dynamic characteristics change with operating conditions, so it is difficult to establish a precise mathematical model. This paper proposes a direct neural network predictive controller based on a GGAP-RBF neural network, combining online learning with control. Taking the main steam temperature of a supercritical power plant as the research object, MATLAB simulation results show that the superheated steam temperature system can be controlled effectively, with performance greatly improved over conventional PID control.
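The forward pass of the RBF network underlying such a controller can be sketched compactly. The centres, widths, and weights below are invented; in a GGAP-RBF network they would be grown and pruned online from the plant data.

```python
# Sketch: RBF network forward pass - Gaussian hidden units over the input,
# linear output weights. All parameters are illustrative placeholders.
import math

def rbf_predict(x, centres, widths, weights, bias=0.0):
    """y = bias + sum_k w_k * exp(-||x - c_k||^2 / (2 * sigma_k^2))"""
    y = bias
    for c, s, w in zip(centres, widths, weights):
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        y += w * math.exp(-d2 / (2 * s * s))
    return y

centres = [[0.0, 0.0], [1.0, 1.0]]      # hidden-unit centres
widths = [0.5, 0.5]                     # Gaussian widths
weights = [2.0, -1.0]                   # linear output weights

y = rbf_predict([0.0, 0.0], centres, widths, weights)
```

In the predictive-control setting, this mapping would predict the steam temperature response, and the controller would pick the input that drives the prediction toward the setpoint.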
The traditional PID control is integrated with an RBF neural network to construct an RBF-neural-network-based PID
controller for the supercritical main steam temperature system. The RBF network identifies the plant online and builds
a reference model, providing information to the PID controller; the controller learns online and adjusts its parameters
according to the performance index. MATLAB simulation results show that the controller achieves good control of the
temperature system, with good tracking performance and better robustness.
In this regard, this research deals with a method for hybrid fuzzy least-squares regression. Extending symmetric
triangular fuzzy coefficients to asymmetric trapezoidal fuzzy coefficients is considered an effective measure for
removing unnecessary fuzziness from the linear fuzzy model. First, trapezoidal fuzzy variables are applied to derive a
bivariate regression model. Normal equations are then formulated to solve for the four parts of the hybrid regression
coefficients, and the model is extended to multiple regression analysis. Finally, the method is compared with
Y.-H. O. Chang's model.
The existence of outliers in a set of experimental data can cause incorrect interpretation of fuzzy linear regression
results. This paper introduces some limitations on the constraints of fuzzy linear regression models for determining
fuzzy parameters in the presence of outliers, using trapezoidal fuzzy data.
Intelligent video surveillance has found a wide range of applications in public security. This paper describes the
state-of-the-art techniques in single-camera video surveillance systems. This can serve as a starting point for building
practical video surveillance systems in developing regions, leveraging existing ubiquitous infrastructure. In addition, this
paper discusses the gap between existing technologies and the requirements in real-world scenario, and proposes
potential solutions to reduce this gap.
Marine engine simulators are broadly used in maritime education and training; institutions usually use them to
cultivate the hands-on and fault-handling abilities of marine engineers and students. In this study, the structure and
main functions of the DMS-2005 marine engine simulator are briefly introduced and several teaching methods are
discussed. Using the Delphi and AHP methods, a comprehensive evaluation system is built and the competence of
marine engineers is assessed. Analysis of the calculated data supports two conclusions: the comprehensive evaluation
system can be used to assess marine engineers' competence, and marine engine simulator training helps enhance
engineers' integrated abilities, especially in judging abnormal situations, emergency treatment, and safe operation.
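The AHP step of such an evaluation system reduces a pairwise comparison matrix over the criteria to priority weights via the principal eigenvector. The sketch below uses power iteration; the three criteria names and the comparison values are invented, not the paper's.

```python
# Sketch: AHP priority weights from a pairwise comparison matrix by power
# iteration on the principal eigenvector. Matrix values are illustrative.

def ahp_weights(matrix, iters=100):
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]            # renormalize each iteration
    return w

# Hypothetical criteria: abnormal-situation judgment, emergency treatment,
# safe operation; matrix[i][j] = how much criterion i outweighs criterion j.
pairwise = [[1.0, 2.0, 3.0],
            [1 / 2.0, 1.0, 2.0],
            [1 / 3.0, 1 / 2.0, 1.0]]
weights = ahp_weights(pairwise)
```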
RFID middleware is regarded as the nerve of RFID systems. This paper first analyzes and discusses relatively mature RFID middleware and proposes an RFID middleware based on a service-oriented architecture. It then introduces complex event processing (CEP), a new technology for event and data processing. With the widespread use of RFID technology, complex event processing has become a very important part of RFID applications; by describing how it works, we motivate introducing CEP into RFID middleware. We focus on complex event processing in RFID data processing, present a CEP-based data processing model, and study its key technologies: the cache, the event filter, and the complex event builder. A query language for detecting RFID complex events is also introduced.
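The filter and complex-event-builder stages can be sketched end to end. The event fields, the deduplication window, and the "dock then gate" pattern are all invented for illustration; they stand in for the paper's event filter and complex event builder.

```python
# Sketch: primitive RFID reads pass an event filter (deduplication of
# repeated reads) and a complex event builder that detects a simple pattern:
# the same tag seen at reader "dock" and then reader "gate" within a window.

def event_filter(events):
    """Drop duplicate (tag, reader) reads arriving within 1 time unit."""
    seen, out = {}, []
    for t, tag, reader in events:
        key = (tag, reader)
        if key not in seen or t - seen[key] > 1:
            out.append((t, tag, reader))
        seen[key] = t
    return out

def build_complex(events, window=10):
    """Emit ('MOVED', tag) when a tag reads at 'dock' then 'gate' in window."""
    last_dock, complexes = {}, []
    for t, tag, reader in events:
        if reader == "dock":
            last_dock[tag] = t
        elif reader == "gate" and tag in last_dock \
                and t - last_dock[tag] <= window:
            complexes.append(("MOVED", tag))
    return complexes

raw = [(0, "T1", "dock"), (0, "T1", "dock"),      # duplicate read
       (4, "T1", "gate"), (5, "T2", "gate")]
filtered = event_filter(raw)
moved = build_complex(filtered)
```

A CEP query language would express the same "dock then gate within 10" pattern declaratively instead of hand-coding it.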
To fully realize the reuse significance of the software reuse framework, this paper analyzes in detail the single-object
model mentioned in the article "The overall design of software reuse framework" and classifies it into three modes:
an add/delete/modify mode, a check mode, and an integrated search/scroll/display mode. Each mode corresponds to its
own interface design template, class design, and database design concept. This modeling idea helps developers organize
their thinking and speed up development; even laymen can complete development tasks easily.
This paper expands the definition of Granular Sets proposed by predecessors. Previous discussions generally address
the case where g is a subset of X (g being a granule and X a given set). Conversely, this paper discusses the case where
X is a subset of g, defines concepts of the lower and upper approximations of X, and discusses some of their properties.
With the rapid development of computer networks, and especially the introduction of the Semantic Web, trust
computation in networks has become an important part of theoretical research on computer systems. According to the
information properties of the Semantic Web and the interactions between nodes, this paper defines semantic trust as
two parts: the content trust of information and the trust between nodes. By calculating both, the final credibility of
information on the Semantic Web is obtained. We improve the computation algorithm for node trust, and simulations
and analyses show that the improved algorithm can evaluate the trust of information more accurately.
ORM (Object Role Modeling), currently at version 2.0, is a fully communication-oriented information modeling
method that has been used in ontology engineering to model domain ontologies. To ensure that the semantics of an
ORM model are consistent, reasoning engines are needed to check for semantic conflicts and redundancy. Furthermore,
only by publishing ORM domain ontologies on the Semantic Web, described in OWL, can they be shared by different
applications. Therefore, ORM models need to be mapped into OWL DL. Several methods for transforming ORM
models have been considered, and a series of general OWL DL formalization rules have been proposed.
According to the characteristics and construction goals of distribution sales enterprises, design and development methods for an intelligent sales management system based on caller ID display are presented, taking full advantage of advanced telephone and network technologies. Based on these methods and technologies, the intelligent sales management system was developed. In practical applications it has achieved good results and is a useful complement to CTI and CRM systems.
A survivable network is a system that can continue to deliver its predefined essential services when facing threats such as external attacks and intrusions or internal errors and failures, until the system returns to normal. The most significant feature of a survivable network is that the key services of the system keep running even when the system is in a hostile environment. In a survivable network, therefore, ensuring the nonstop operation of the key services is very important. Apart from the basic features of agents, such as reactivity, autonomy and goal orientation, a Mobile Agent also has its own unique feature: mobility. Based on an analysis of survivability theory and the features of Mobile Agents, this paper proposes MESMMA (A Model for Essential Services Mobility Based on Mobile Agent), a model based on Mobile Agent mobility that ensures the nonstop running of the key services of survivable networks. The paper also discusses in depth the structure design, the running process and the functions of the various Mobile Agents in the MESMMA system.
Blowing the ballast tanks with high-pressure air is the most effective emergency means of restoring a flooded submarine. Because the gas-liquid mixture flow in the ballast compartment during blowing is complex, theoretical models have certain limitations. This paper simulates the dynamic process of high-pressure air blowing of the ballast tanks with a VOF two-phase flow model, analyzing the gas-liquid mixing phenomena, the pressure changes and the tank drainage rate during blowing, which provides a basis for establishing theoretical models of high-pressure air blowing of ballast tanks.
A computer-aided design system for double-stage involute cylindrical gear transmissions was developed following a programmed design process in the VB6.0 environment. All the user needs to do is enter the relevant parameters; the system then judges whether the cylindrical gear transmission satisfies the fatigue strength and bending strength requirements, saving the time of checking by hand calculation.
Based on push-pull theory, this article presents an empirical study of tourists' motivation to travel to the Guangzhou Ecological Park. We identified four push factors and three pull factors as potential features of travel motivation and, on this basis, used analysis of variance to test for significant differences in the push and pull factors across groups with different demographic characteristics. The conclusions offer a useful reference for the Ecological Park in Guangzhou in attracting tourists.
This paper proposes a method for the design of the governor foot mechanism of a high-speed printing press. According to the technological conditions, an optimal design model is presented to achieve the minimum deviation between the desired and actual track points. The physical dimensions of the linkages and the angular stroke of the driven cam are obtained by solving the presented optimization model, and the driven cam's profile is then calculated. The effectiveness of the proposed method is demonstrated through an illustrative design example of a governor foot mechanism for high-speed operation.
This paper analyzes the respective characteristics and performance of VS2005 and MATLAB and introduces several methods of mixed programming. It then discusses the advantages and wide applications of texture synthesis from sample (TSFS) and the Image Quilting algorithm, a typical TSFS algorithm. The Image Quilting algorithm is implemented using mixed programming between VS2005 and MATLAB 2007a via COM. Using COM-based mixed programming to develop a texture synthesis program not only improves efficiency and reliability but also makes operation and visualization more convenient. Finally, the paper summarizes the relationship between texture synthesis parameters and synthesis effects observed in the experiments.
This paper deals with a three-point boundary value problem for singular integro-differential equations at resonance in Banach spaces. Using a specially constructed cone and the fixed point theory of strict-set-contraction operators, we obtain new sufficient conditions for the existence and multiplicity of positive solutions.
The Access Grid (AG) targets an advanced collaboration environment in which multi-party groups of people at remote sites can collaborate over high-performance networks. However, the current AG still employs VIC (Video Conferencing Tool) to offer only plain video for remote communication, while most AG users expect to collaboratively refer to and manipulate 3D geometric models of grid service results within the live video of an AG session. Augmented Reality (AR) can overcome these deficiencies through its characteristic combination of the virtual and the real, real-time interaction and 3D registration, so it is worthwhile for the AG to utilize AR to better support the advanced collaboration environment. This paper introduces an effort to augment the AG by adding support for AR, encapsulated in the node service infrastructure and named the Augmented Reality Service (ARS). The ARS can merge the 3D geometric models of grid service results with the real video scene of the AG into one AR environment, giving distributed AG users the opportunity to interactively and collaboratively participate in the AR environment with a better experience.
With the development of computer technology and the increasing demand for mobile communications, next-generation wireless networks will be composed of various wireless networks (e.g., WiMAX and WiFi). Vertical handoff is a key technology of next-generation wireless networks, and during the vertical handoff procedure, the handoff decision is crucial for efficient mobility. Based on an auto-regressive moving average (ARMA) prediction model, we propose a vertical handoff decision algorithm that aims to improve handoff performance and avoid unnecessary handoffs. From the current and previous received signal strength (RSS) samples, the proposed approach uses the ARMA model to predict the next RSS; the predicted RSS then determines whether to fire the link-layer triggering event and complete the vertical handoff. Simulation results indicate that the proposed algorithm outperforms the threshold-based RSS scheme in both handoff performance and number of handoffs.
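The decision rule can be sketched as follows; a fixed-coefficient ARMA(1,1)-style recursion stands in for the paper's fitted ARMA model, and the coefficients and threshold are illustrative assumptions.

```python
# Sketch: predict the next RSS from recent samples and trigger the link-layer
# event when the prediction falls below a threshold. phi, theta and the
# threshold are illustrative, not fitted values from the paper.

def predict_next_rss(history, phi=0.9, theta=0.3):
    """One-step ARMA(1,1)-style prediction from a list of RSS samples (dBm)."""
    mean = sum(history) / len(history)
    # innovation: last observation minus the previous one-step prediction
    err = history[-1] - (mean + phi * (history[-2] - mean))
    return mean + phi * (history[-1] - mean) + theta * err

def should_trigger_handoff(history, threshold=-85.0):
    return predict_next_rss(history) < threshold

rss = [-70.0, -74.0, -79.0, -83.0]  # steadily degrading link
print(predict_next_rss(rss), should_trigger_handoff(rss))
```

Predicting ahead lets the handoff start before the signal actually crosses the threshold, which is how unnecessary or late handoffs are avoided.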
Unweighted networks reflect only the connection pattern between nodes and the network topology; weighted networks, by contrast, also describe the strength of interaction between nodes. Weighted networks therefore give a better understanding of the characteristics of real-world networks and the complicated features of complex systems. Evaluating node importance is a meaningful research problem in weighted complex networks. This paper analyzes the characteristics of weighted complex networks, considers the effect of edge weights on the evaluation of node importance, introduces the idea of the data field from theoretical physics, and establishes an evaluation model of node importance in weighted complex networks. Theoretical and experimental analysis shows that this method can evaluate the importance of nodes in a weighted complex network quickly and accurately, which is significant both in theory and in practice.
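A data-field style score of the kind described can be sketched as below: each node contributes a Gaussian potential to its neighbours, with a heavier edge acting as a shorter effective distance. The potential form, the distance transform and sigma are illustrative assumptions, not the paper's calibrated model.

```python
import math

# Assumed data-field potential: each weighted edge contributes
# exp(-((1/w)/sigma)^2) to both endpoints; sigma is an illustrative
# influence factor.

def node_potentials(edges, sigma=1.0):
    """edges: dict {(u, v): weight} for an undirected weighted graph."""
    pot = {}
    for (u, v), w in edges.items():
        contrib = math.exp(-((1.0 / w) / sigma) ** 2)
        pot[u] = pot.get(u, 0.0) + contrib
        pot[v] = pot.get(v, 0.0) + contrib
    return pot

edges = {("a", "b"): 2.0, ("a", "c"): 1.0, ("b", "c"): 0.5}
pot = node_potentials(edges)
print(max(pot, key=pot.get))  # node with the highest field potential
```

Ranking nodes by their accumulated potential then gives the importance ordering the abstract refers to.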
The features and profiles selected on a mobile device are usually set manually and do not change automatically in response to the external environment. Examples include switching to silent mode in the office, selecting a formal ringtone in the office, describing one's mood on social network sites, and similar settings. These features and profiles are usually related to the user's facial expression (happiness, sadness, disgust, anger, fear and surprise), external appearance (different clothes at the office and at home) and surroundings (office, home, park, etc.).
The idea of this paper is to use these external events to change the theme of the mobile phone: changing the color of the screen, selecting a particular category of word suggestions when writing SMS messages, notes or documents, selecting a music playlist, or calling a buddy with whom the user would like to share content. In essence, the user has a mobile phone that can understand the user's emotions and act accordingly. The paper presents details of emotion reading and its implementation using pattern recognition techniques.
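A toy sketch of the mapping stage follows: a recognised expression and surroundings are mapped to a profile. The labels and rules are illustrative only; the recognition step itself (pattern recognition on camera input) is out of scope here.

```python
# Illustrative rule table mapping (emotion, location) to a phone profile.
# Both the categories and the precedence (location first) are assumptions.

def choose_profile(emotion, location):
    if location == "office":
        return "silent"
    if emotion in ("sadness", "anger", "fear"):
        return "calm-theme"
    if emotion in ("happiness", "surprise"):
        return "bright-theme"
    return "default"

print(choose_profile("happiness", "park"))
```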
This paper studies the optimization of semi-active viscous MR dampers for reducing the seismic vibrations of a 12-story building. Genetic algorithms are used to obtain the optimal number and placement of dampers under 15 major earthquakes with different frequency content, from which the final damper configuration is determined. To investigate the influence of this type of damper on the dynamic response of the structure, the equations of motion are written with the dampers included and transformed into state-space form; the optimal force of each damper as a function of time is computed using the genetic optimization algorithm, and the structural response is determined. The results show that different optimal damper positions over the height of the structure affect the various responses differently: dampers concentrated in the upper parts of the structure greatly reduce the roof displacement but reduce the base shear force less, while dampers in the lower parts reduce the base shear force but reduce the roof displacement less. Determining the optimal placement with genetic algorithms therefore makes it possible to balance the reduction of these structural responses simultaneously.
TC center location is important for weather forecasting and TC analysis. However, TC centers appear with different shapes and sizes at different times, and the difficulty of locating the center differs across the stages of a TC's lifetime. To improve automation and precision, we present a TC center location scheme for both eye and non-eye TCs. Fisher discriminant analysis is used to segment the TC so that the binary image can be obtained automatically and effectively. Since the cloud wall near a non-eye TC center forms concentric circles, the Chan-Vese model is used to extract the TC contour. Experimental results show that our scheme achieves an average error within 0.3 degrees of longitude/latitude compared with the best tracks from CMA and RSMC.
In industrial wireless networks, determinism, reliability and real-time performance have become bottlenecks. Existing routing protocols for wireless sensor networks do not fully meet the requirements of industrial applications. This paper proposes a routing algorithm based on LQI (link quality indicator) and deterministic scheduling, focused on achieving high reliability. Test results in an ISA100.11a DL subnet show the superiority of the proposed algorithm, which considerably reduces network overhead and saves network resources.
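The LQI-driven part of such a routing decision can be sketched as below: among candidate links, pick the neighbour with the highest LQI, considering only links above a minimum acceptable quality. The metric and thresholds are illustrative assumptions, not the algorithm from the paper.

```python
# Assumed next-hop selection rule: highest LQI among links that meet a
# minimum quality floor. LQI is taken on the common 0-255 scale.

def select_next_hop(neighbors, min_lqi=100):
    """neighbors: dict {node_id: lqi (0-255, higher is better)}."""
    usable = {n: q for n, q in neighbors.items() if q >= min_lqi}
    if not usable:
        return None  # no acceptable link; defer to rerouting/scheduling layer
    return max(usable, key=usable.get)

print(select_next_hop({"n1": 90, "n2": 180, "n3": 150}))
```

In the scheme described, this link-quality choice would be combined with deterministic slot scheduling so that the selected route also meets the real-time requirements.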
Energy demand forecasting is a key issue for consumers and generators in all energy markets in the world. This paper presents a new forecasting algorithm for daily gas demand prediction. The algorithm combines a wavelet transform with forecasting models such as the multi-layer perceptron (MLP), linear regression or GARCH. The proposed method is applied to real data from the UK gas market to evaluate its performance. The results show that forecasting accuracy is improved significantly by the proposed method.
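The hybrid pipeline can be sketched as follows: a one-level Haar decomposition of the demand series, a separate simple forecaster per subband (a linear trend stands in for the MLP/regression/GARCH models), and reconstruction of the next value. This is an illustrative pipeline, not the paper's tuned configuration.

```python
# Sketch of wavelet-plus-model forecasting with a one-level Haar transform.

def haar_decompose(x):
    approx = [(x[2*i] + x[2*i+1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i+1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def linear_trend_forecast(series):
    # least-squares line through the subband, extrapolated one step ahead
    n = len(series)
    xs = range(n)
    mx, my = sum(xs) / n, sum(series) / n
    cov = sum((i - mx) * (y - my) for i, y in zip(xs, series))
    var = sum((i - mx) ** 2 for i in xs)
    slope = cov / var if var else 0.0
    return my + slope * (n - mx)

def forecast_next(x):
    approx, detail = haar_decompose(x)
    # next sample ≈ next approximation + next detail (Haar reconstruction)
    return linear_trend_forecast(approx) + linear_trend_forecast(detail)

demand = [100, 104, 108, 112, 116, 120, 124, 128]
print(forecast_next(demand))
```

The benefit of the decomposition is that each subband is smoother than the raw series, so even a simple per-subband model can forecast it well.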
The active and reactive power decoupling control of doubly-fed induction generator (DFIG) wind turbines does not contribute to the frequency response capability of the power grid, because it hides the turbine's inertia from the grid. To improve the transient frequency stability of a grid-connected wind turbine, its frequency response characteristics must be improved. The inability to support frequency caused by DFIG decoupling control can be overcome by releasing (or absorbing) part of the kinetic energy stored in the rotor, so as to increase (or decrease) the active power injected into the power system when a frequency deviation appears. This paper discusses the mathematical model of the variable-speed DFIG, including the aerodynamic model, pitch control system model, shaft model, generator model, converter control model and other key components, focusing on the mathematical model of the rotor-side and grid-side converters. Based on the existing wind generator model, a frequency control model is attached on the DIgSILENT/PowerFactory simulation platform. The simulation results show that the proposed control strategy responds quickly to transient frequency deviations and prove that wind farms can participate in system frequency regulation to a certain extent. Finally, the results verify the accuracy and plausibility of the converter control model with the attached frequency control module.
To prevent accidents and improve the safety of railway operations, a set of railway operation safety factors was built, based on statistical analysis of railway operation accidents at home and abroad over the years and on the evolution of human, equipment, environment and management factors in the railway transport system. By analyzing the interactions among the safety factors from a systemic viewpoint, a network reflecting their complex interrelationships was established. Using the ISM (Interpretative Structural Modeling) method, a hierarchical structure model of the safety factors was then built, in which the complex interactions are divided into levels to explain the deep causes of railway accidents. Finally, a comprehensive system of railway operation safety factors was established to help control and prevent railway operation accidents, to lay the foundation for quantitative analysis and prediction of such accidents, and to guide scientific safety management of railway operations.
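The core ISM computation can be sketched as below: the reachability matrix of the factor network is obtained by Boolean transitive closure, and hierarchy levels are peeled off where a factor's reachability set is contained in its antecedent set. The 4-factor adjacency matrix is an illustrative example, not the paper's railway factor set.

```python
# ISM sketch: reachability matrix (Warshall closure) + level partitioning.

def reachability(adj):
    n = len(adj)
    r = [[bool(adj[i][j]) or i == j for j in range(n)] for i in range(n)]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r

def level_partition(r):
    n = len(r)
    remaining, levels = set(range(n)), []
    while remaining:
        level = set()
        for i in remaining:
            reach = {j for j in remaining if r[i][j]}
            ante = {j for j in remaining if r[j][i]}
            if reach <= ante:  # reachability set contained in antecedent set
                level.add(i)
        levels.append(sorted(level))
        remaining -= level
    return levels

# illustrative influence graph: 0 -> 1 -> 3, 2 -> 3 (factor 3 is the top effect)
adj = [[0, 1, 0, 0],
       [0, 0, 0, 1],
       [0, 0, 0, 1],
       [0, 0, 0, 0]]
print(level_partition(reachability(adj)))
```

The resulting levels, read top-down, are exactly the hierarchical structure model the ISM method produces: surface effects at the top, deep causes at the bottom.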
Group Technology is a method of increasing productivity for manufacturing high-quality products and improving the flexibility of manufacturing systems in multi-variety, small-batch production. Parts with the same or similar process routes are gathered into groups through production flow analysis. The processing route of each part was analyzed and a production flow analysis diagram of the parts was drawn. A fuzzy clustering algorithm was used to classify the machine tools and the parts; the fuzzy similar matrices and their transitive closure matrices were obtained with MATLAB. The corresponding relations between each group of parts and each group of machine tools were then found, similar parts were gathered, and the workshop machine layout was reconstructed. The steps of production flow analysis and fuzzy clustering analysis are described, and the effectiveness of the method is demonstrated by an example.
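The fuzzy clustering step can be sketched as follows: build a fuzzy similar matrix, take its max-min transitive closure by repeated squaring, and cut at a lambda level to group the parts. The 4x4 similarity matrix is an illustrative stand-in for the similarity data derived from production flow analysis.

```python
# Fuzzy clustering sketch: max-min transitive closure + lambda-cut grouping.

def maxmin_compose(a, b):
    n = len(a)
    return [[max(min(a[i][k], b[k][j]) for k in range(n)) for j in range(n)]
            for i in range(n)]

def transitive_closure(r):
    # square the matrix until it stabilizes (fuzzy equivalence closure)
    while True:
        r2 = maxmin_compose(r, r)
        if r2 == r:
            return r
        r = r2

def lambda_cut_groups(r, lam):
    n, seen, groups = len(r), set(), []
    for i in range(n):
        if i not in seen:
            g = [j for j in range(n) if r[i][j] >= lam]
            seen.update(g)
            groups.append(g)
    return groups

sim = [[1.0, 0.8, 0.2, 0.1],   # illustrative part-similarity matrix
       [0.8, 1.0, 0.3, 0.1],
       [0.2, 0.3, 1.0, 0.9],
       [0.1, 0.1, 0.9, 1.0]]
print(lambda_cut_groups(transitive_closure(sim), lam=0.8))
```

Raising or lowering lambda coarsens or refines the part families, which is how the number of machine groups is tuned in practice.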
The objective of this paper is to propose a safety enhancement model for application software that accelerates self-checking strategies in a manner similar to biological enzymatic actions. Safety-critical application software components may have to be assessed periodically or on demand to verify not only functional correctness but also the safety specifications or features during execution. The design and deployment of such software modules can be formally verified for possible safety flaws using self-checking capabilities and software enzymatic actions. The self-checks must sense the safety holes in the software and decide to activate built-in software components called enzymes to perform safeguard operations in a timely manner, mitigating the safety faults using the proposed enzyme calculus. This approach considers the various application hazards due to Boolean faults in the functional and behavioral models that lead to software safety issues.
Curve fitting is important preliminary work for data compression and path interpolation in numerical control (NC). This paper gives a simple conic spline approximation algorithm for G01 code. The algorithm consists of three main steps: divide the G01 code into subsets by discrete curvature detection, find a polyline approximation for each subset within a given error, and finally fit each polyline approximation with a conic Bezier spline. A B-spline curve can then naturally be obtained by proper knot selection. The algorithm is straightforward and efficient, solving no global equation system or optimization problem, and is completed by the selection of the curve's weight. To make the curve more suitable for NC, we present an interval for the weight selection, and the resulting error is then computed.
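The conic segment used in the last step can be evaluated as a rational quadratic Bezier with a single weight w on the middle control point (w < 1 gives an elliptic arc, w = 1 a parabola, w > 1 a hyperbolic arc). The control points below are illustrative; choosing w within the admissible error interval is the subject of the paper.

```python
import math

# Rational quadratic (conic) Bezier evaluation; w weights the middle point.

def conic_bezier(p0, p1, p2, w, t):
    b0, b1, b2 = (1 - t) ** 2, 2 * t * (1 - t), t ** 2
    denom = b0 + w * b1 + b2
    x = (b0 * p0[0] + w * b1 * p1[0] + b2 * p2[0]) / denom
    y = (b0 * p0[1] + w * b1 * p1[1] + b2 * p2[1]) / denom
    return x, y

# w = sqrt(2)/2 makes this segment a quarter of the unit circle
p = conic_bezier((1, 0), (1, 1), (0, 1), math.sqrt(2) / 2, 0.5)
print(p)
```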
Following the traditional empirical formula, the cutting force regression model for turning Hastelloy X alloy is given exponential form. Turning experiments were carried out according to an experimental plan designed on the principle of the quadratic rotary combination design technique. By identifying the regression coefficients with the genetic algorithm toolbox in MATLAB 7.1, the cutting force regression models were established. The trends revealed by the prediction models agree with both the visual analysis of the experimental data and basic cutting theory, and the residual errors are small, so the regression models fit well.
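The exponential-form model F = C * ap^x * f^y * v^z becomes linear after taking logarithms, so ordinary least squares is used below as a stand-in for the paper's genetic-algorithm coefficient search. The data are synthetic, generated from known coefficients so the fit can be checked; they are not the paper's experimental data.

```python
import math

# Fit F = C * ap**x * f**y * v**z by log-linear least squares.

def solve(A, b):
    """Gauss-Jordan elimination for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_force_model(samples):
    """samples: list of (ap, f, v, F); returns (C, x, y, z)."""
    X = [[1.0, math.log(ap), math.log(f), math.log(v)] for ap, f, v, _ in samples]
    Y = [math.log(F) for *_, F in samples]
    # normal equations X'X beta = X'Y
    A = [[sum(r[i] * r[j] for r in X) for j in range(4)] for i in range(4)]
    b = [sum(r[i] * y for r, y in zip(X, Y)) for i in range(4)]
    beta = solve(A, b)
    return (math.exp(beta[0]), beta[1], beta[2], beta[3])

# synthetic data generated from F = 200 * ap**0.9 * f**0.75 * v**-0.1
data = [(ap, f, v, 200 * ap**0.9 * f**0.75 * v**-0.1)
        for ap in (0.5, 1.0, 2.0) for f in (0.1, 0.2) for v in (50, 100)]
C, x, y, z = fit_force_model(data)
print(round(C), round(x, 2), round(y, 2), round(z, 2))
```

With noisy experimental data the genetic-algorithm search and the least-squares fit would generally give close but not identical coefficients.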
A recommendation system is a tool that can suggest to researchers resources suitable for their research interests using content-based filtering. In this paper, clustering, an unsupervised learning method, is introduced for grouping objects based on feature selection and similarity. Publication information from the Science Citation Index is used as the dataset, with feature extraction and dimensionality reduction applied to the articles, comparing Latent Dirichlet Allocation (LDA), Principal Component Analysis (PCA) and K-Means to determine the best algorithm. In our experiment, the selected dataset consists of 2,625 documents extracted from the SCI corpus from 2001 to 2009. Clusterings of size 50, 100, 200 and 250 are considered, and the F-Measure is used to evaluate the three algorithms. The results show that the LDA technique gives an accuracy of up to 95.5%, the highest of the clustering techniques compared.
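The F-Measure used to compare the clusterings can be computed as below: for each true class, take the best F score over all clusters and weight it by class size. The labels are a toy example, not the SCI corpus.

```python
# Set-matching F-Measure for evaluating a clustering against known classes.

def f_measure(true_labels, cluster_labels):
    classes, clusters = set(true_labels), set(cluster_labels)
    n = len(true_labels)
    total = 0.0
    for c in classes:
        members = {i for i, t in enumerate(true_labels) if t == c}
        best = 0.0
        for k in clusters:
            in_k = {i for i, g in enumerate(cluster_labels) if g == k}
            tp = len(members & in_k)
            if tp == 0:
                continue
            prec, rec = tp / len(in_k), tp / len(members)
            best = max(best, 2 * prec * rec / (prec + rec))
        total += len(members) / n * best  # weight by class size
    return total

print(f_measure(["a", "a", "b", "b"], [0, 0, 1, 1]))  # perfect grouping -> 1.0
```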
A new GPU-based method of infrared image generation for typical aircraft is proposed. A complete infrared simulation procedure is built in this paper, including the temperature model, radiation model, atmosphere model and image-forming system model. All of these models are solved and rendered on the GPU, which has powerful computing capability. The results show that dynamic infrared images of an aircraft, varying with its status, can be generated efficiently with this method. Because previous infrared simulation methods could not calculate the temperature field in real time and produced unsatisfactory simulation detail, a new implementation that calculates the temperature field using GPU vertex textures is proposed.
The design knowledge of modern mechatronic products centers on information processing and is knowledge-intensive engineering; product design innovation is therefore essentially innovation in knowledge and information processing. After analyzing the role of design knowledge in mechatronic products and the features of information management, a unified XML-based model for product information processing is proposed. The information processing model of product design includes functional knowledge, structural knowledge and the relationships between them. XML-based representations of product function elements, product structure elements and the mapping between function and structure are proposed. The information processing of a parallel friction roller is given as an example, which demonstrates that this method is clearly helpful for knowledge-based design systems and product innovation.
This paper deals with the design of a fuzzy tool for creating conceptual models under uncertainty. It identifies a problem in the current approach to creating conceptual models of information systems and suggests a new methodology and tool that can process uncertain user requirements. The proposed tool uses a general decision support system model that works with vague input values and IF-THEN fuzzy rules, creates a list of appropriate and acceptable solutions, and then allows the best solution to be chosen. In the proposed tool, the entities, attributes and relations between entities in the resulting conceptual model are selected via the decision support system. We present the six main parts of the proposed tool, which generate suitable entities, attributes and relations between entities, export them to XML format, and finally visualize the resulting conceptual model. The created conceptual model illustrates the analysis of the information system requirements. The proposed tool is demonstrated by creating a conceptual model of a hotel information system.
Port industries are basic industries in the national economy and have become among the most modernized sectors in every country. The development of the port industry is advantageous not only for promoting the optimal allocation of social resources, but also for promoting the growth of foreign trade volume by enhancing transportation functions. Return on equity (ROE) is a direct indicator related to the maximization of a company's wealth and compensates for the shortcomings of earnings per share (EPS). The aim of this paper is to establish the correlation between ROE and other financial indicators by choosing listed port companies as the research objects and using their data from 2000 to 2008 as the empirical sample, with statistical analysis of charted figures and coefficients. The detailed analysis combines trend analysis, comparative analysis, ratio analysis and the factor analysis method. This paper analyzes and compares all these factors and draws the following conclusions: Firstly, ROE has a positive correlation with total assets turnover, main profit margin and fixed asset ratio, and a negative correlation with the assets-liabilities ratio, total assets growth rate and DOL. Secondly, main profit margin has the greatest positive effect on ROE among all these factors; the second greatest is total assets turnover, which shows that operating capacity is also an important indicator after profitability. Thirdly, the assets-liabilities ratio has the greatest negative effect on ROE among all these factors.
Service-oriented software systems face a challenge to adapt themselves promptly to the evolving Internet environment and changing user requirements. In this paper, a new way of describing the dynamic evolution of services according to the 3C model (Will 1990) is proposed, and an extended workflow net is used to describe an abstract model of the dynamic evolution of services from the specific functional domain, which is defined in this paper, to the whole system.
To solve the problem of data quality in data warehouses, the flow of data through the warehouse and the factors influencing data quality were analyzed; a method for controlling data quality based on quality grading was proposed and the framework of a data quality grading system was established. In each step of the ETL process, the data quality grading system executes scripts to calculate values such as the total, component and standard measures in accordance with the definition of each Unit Measure, so as to obtain representative indexes; a quality report is then generated according to the indexes and the quality is graded. The method is applicable to various data warehouse systems.
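The abstract's measure-to-grade pipeline can be sketched roughly as follows; the aggregation rule, thresholds and grade labels are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch of quality grading: each Unit Measure compares a
# component count (records passing one check) against a total count,
# the measures are aggregated into an index, and the index is graded.

def unit_measure(component: int, total: int) -> float:
    """Ratio of conforming records to all records for one check."""
    return component / total if total else 0.0

def quality_index(measures: list) -> float:
    """Aggregate individual measures into a single index (simple mean)."""
    return sum(measures) / len(measures)

def grade(index: float) -> str:
    """Map the index onto a coarse quality grade (thresholds assumed)."""
    if index >= 0.95:
        return "A"
    if index >= 0.85:
        return "B"
    if index >= 0.70:
        return "C"
    return "D"

measures = [unit_measure(980, 1000), unit_measure(870, 1000)]
print(grade(quality_index(measures)))  # mean 0.925 -> "B"
```

A real grading system would attach one such measure per ETL step and emit the per-measure values alongside the final grade in the quality report.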
How to protect the individual privacy contained in electronic records while ensuring that individuals can control their own privacy is a new access control problem. Based on the Role-Based Access Control (RBAC) model, the concept of the electronic badge (e-badge) is introduced, an E-Badge-Based Access Control (EB-RBAC) model is proposed, and a formal definition is given. In the EB-RBAC model, a person can set the access policy for his or her private information. Users or roles obtain the authority to operate on an object only when access is granted by the policies defined in its attached e-badges. This satisfies the design demand that access control allows each person to protect his or her individual privacy.
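The two-stage check the abstract describes can be sketched as below; the role table, badge fields and example record are all invented for illustration, not the paper's formal definition.

```python
# Hypothetical sketch of the EB-RBAC idea: a request succeeds only when
# the user's role is authorized for the operation AND every e-badge
# attached to the object grants that (user, operation) pair.

ROLE_PERMS = {"doctor": {"read", "write"}, "nurse": {"read"}}  # assumed roles

def ebadge_allows(badge: dict, user: str, op: str) -> bool:
    """An e-badge is the owner's privacy policy attached to the record."""
    return op in badge.get("allowed_ops", set()) and user in badge.get("allowed_users", set())

def access(user: str, role: str, op: str, obj: dict) -> bool:
    if op not in ROLE_PERMS.get(role, set()):
        return False                      # classic RBAC check fails
    return all(ebadge_allows(b, user, op) for b in obj["ebadges"])

record = {"ebadges": [{"allowed_ops": {"read"}, "allowed_users": {"alice"}}]}
print(access("alice", "doctor", "read", record))   # True
print(access("alice", "doctor", "write", record))  # False: badge denies write
```

The point of the layering is that the role grant is necessary but not sufficient: the record owner's badge policy can always narrow it further.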
This paper presents a design procedure for a Robust and Adaptive Fuzzy Logic based Power System Stabilizer (RAFLPSS) to improve the small-signal stability of power systems. The parameters of the RAFLPSS are tuned by an adaptive neural network. The RAFLPSS uses an ANFIS network (Adaptive Network based Fuzzy Inference System), which provides a natural framework for a multi-layered feed-forward adaptive network using a fuzzy logic inference system. In this approach, a hybrid learning algorithm tunes the fuzzy rules and the membership functions of the RAFLPSS. The dynamic performance of an SMIB system with the proposed RAFLPSS under different operating conditions and changes in system parameters has been investigated. The simulation results obtained from a conventional PSS (CPSS) and a fuzzy logic based PSS (FPSS) are compared with those of the proposed RAFLPSS. The simulation results demonstrate that the proposed RAFLPSS provides better damping and a quicker response than the other two PSSs.
Distributed embedded systems have significant applications in the automobile industry, such as steer-by-wire, fly-by-wire and brake-by-wire systems. In this paper, we provide a general framework for fault detection in a distributed embedded real-time system. We use heartbeat monitoring, checkpointing and model-based redundancy to design a scalable framework that takes care of task scheduling, temperature control and diagnosis of faulty nodes in a distributed embedded system. This helps in diagnosing and shutting down faulty actuators before the system becomes unsafe. The framework is designed and tested using a new simulation model consisting of virtual nodes working on a message-passing system.
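The heartbeat-monitoring part of such a framework can be sketched as a timeout check per node; the class name, timeout value and node names are assumptions for the example, not the paper's interface.

```python
# Minimal sketch of heartbeat monitoring: a node is declared faulty when
# its last heartbeat is older than a timeout, after which it can be
# handed to diagnosis or shut down before the system becomes unsafe.
import time

class HeartbeatMonitor:
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}          # node name -> timestamp of last beat

    def beat(self, node, now=None):
        """Record a heartbeat; real nodes would send these as messages."""
        self.last_seen[node] = time.monotonic() if now is None else now

    def faulty_nodes(self, now=None):
        """Nodes whose last beat is older than the timeout."""
        t = time.monotonic() if now is None else now
        return [n for n, seen in self.last_seen.items() if t - seen > self.timeout]

mon = HeartbeatMonitor(timeout=1.0)
mon.beat("brake_node", now=0.0)
mon.beat("steer_node", now=1.5)
print(mon.faulty_nodes(now=2.0))  # ['brake_node']
```

Using a monotonic clock avoids false fault reports when the wall clock is adjusted, which matters in long-running embedded deployments.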
The construction industry has been identified as one of the most hazardous industries. The risk to construction-related workers is far greater than that in manufacturing-based industries. However, steps can be taken to reduce worker risk through effective injury prevention strategies. In this article, k-means clustering is employed to specify the factors related to different worker types and to identify patterns in industrial occupational accidents. Accident reports from 1998 to 2008 are extracted from the case reports of the Northern Region Inspection Office of the Council of Labor Affairs of Taiwan. The results show that the cluster analysis can reveal patterns of occupational injuries in the construction industry. Inspection plans should be proposed according to the type of construction-related worker. The findings provide a direction for more effective inspection strategies and injury prevention programs.
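The clustering step can be sketched with a plain k-means on toy data; the two features and their values are invented for illustration and bear no relation to the actual accident-report variables.

```python
# Illustrative k-means on toy 2-D "accident" features (say, worker age
# and hours on site); Lloyd's algorithm with squared-Euclidean distance.
import random

def kmeans(points, k, iters=20, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)           # initial centers
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                         # assignment step
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        centers = [                              # update step
            tuple(sum(d) / len(c) for d in zip(*c)) if c else centers[i]
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

pts = [(22, 8), (24, 9), (23, 10), (55, 2), (58, 3), (60, 1)]
centers, clusters = kmeans(pts, k=2)
print(sorted(len(c) for c in clusters))  # [3, 3]: two natural groups found
```

On real accident reports, the categorical fields would first need encoding and scaling before a distance-based method like this is meaningful.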
Micro-blog websites are increasingly attracting the attention of academic and industry researchers. Their market value has attracted wide concern, and the exploration of their monetization models has entered a substantial stage. In this paper, we describe the history of the micro-blog and propose a comprehensive classification. We then discuss key characteristics based on existing websites, mainly in China. A monetization model is derived from the different features of micro-blogs and the Hype Cycle theory [1]. The paper can be used for future academic research or for making strategies for micro-blog website practitioners.
A fuzzy logic controller handles challenging nonlinear control systems through If-Then rules that resemble human reasoning, and it increases the accuracy of the control action. This paper presents a successful control function using a fuzzy system approach to control the ball-and-beam balance system, through modeling, simulation and implementation. First, we apply fuzzy logic to the system: a fuzzy logic controller is designed for the outer loop, and a PD controller is implemented for the inner loop of the ball-and-beam system. We also design a traditional PID controller and a pole placement controller for the beam position in order to compare the performance of these three types of controllers. The FLC is found to give better transient and steady-state results, with less overshoot compared with the classical PID and pole placement controllers. Simulation results are presented to show the better performance of the ball and beam using these controllers.
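The inner-loop PD controller mentioned above can be sketched in a few lines; the gains and sample time are illustrative, not those tuned in the paper.

```python
# Minimal discrete PD controller sketch: control output is a weighted
# sum of the current error and its finite-difference derivative.

def make_pd(kp, kd, dt):
    prev = {"e": 0.0}                     # previous error, kept in a closure
    def pd(error):
        d = (error - prev["e"]) / dt      # backward-difference derivative
        prev["e"] = error
        return kp * error + kd * d
    return pd

pd = make_pd(kp=2.0, kd=0.5, dt=0.1)
u1 = pd(1.0)   # first step: derivative uses previous error 0
u2 = pd(0.8)   # error shrinking -> derivative term is negative
print(round(u1, 2), round(u2, 2))  # 7.0 0.6
```

In the ball-and-beam cascade, the outer fuzzy controller would supply the beam-angle setpoint whose tracking error feeds a loop like this one.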
In this paper, previous Li-ion battery charger techniques are reviewed and compared, and a new fuzzy logic battery charging method is proposed to optimize and improve charger efficiency. According to the comparison results, using the fuzzy control charging system can shorten the charging time while achieving higher efficiency and a lower temperature rise. Additionally, we use the optimal Li-ion battery charging frequency obtained by the AC impedance technique: if the battery is charged at the optimal charging frequency fZmin, obtained from the Bode plot of the Li-ion battery, both the charging time and the charging efficiency improve. Thus, combining the switching frequency fZmin of the battery charger with fuzzy logic control in the same system can optimize the charging process. According to the experimental results, the proposed charger can charge Li-ion batteries with higher efficiency (97.16%), lower temperature rise (1.513 degrees Celsius), a fast charging period of around 50.43 minutes, and a long life cycle. The results in this paper are produced using MATLAB, and a dsPIC30F2020 is used as the controller, with the designed fuzzy logic implemented on it.
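A fuzzy charging rule base of the kind described can be sketched as follows; the membership functions, rule set and output levels are entirely invented for illustration and are not the paper's tuned controller.

```python
# Illustrative Mamdani-style fuzzy rules mapping temperature rise and
# state of charge (SoC) to a charge-current factor, defuzzified by a
# weighted average of rule outputs.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def charge_factor(temp_rise, soc):
    low_t, high_t = tri(temp_rise, -1, 0, 2), tri(temp_rise, 1, 3, 5)
    low_s, high_s = tri(soc, -0.5, 0, 0.7), tri(soc, 0.3, 1, 1.5)
    rules = [
        (min(low_t, low_s), 1.0),    # cool & empty  -> full current
        (min(high_t, high_s), 0.2),  # hot & nearly full -> trickle
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.5  # fallback when no rule fires

print(charge_factor(0.5, 0.1), round(charge_factor(4, 0.9), 2))  # 1.0 0.2
```

The real controller would also fold in the fZmin switching-frequency decision and run on the dsPIC rather than in floating point.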
Short-span notification or acknowledgment of events is a major requirement in our world. Hence, this module can be rigged up with any system in order to control an activity and alert the user at any preconfigured parameter. It is estimated that toxic chemicals are mixed into water every 14.7 seconds, that over 2 million aquatic creatures are affected every year, and that about 65% of reported water pollution results from tanneries and chemical factories. The pH-COP offers a solution for these catastrophic events. Being robust, this module can be attached to any system, e.g. a voltage monitor, security system, fluid level/pressure controller, or pH sensor cum controller. The pH-COP primarily comprises a microcontroller and a DTMF transceiver. The control system is efficient enough to be interfaced with the TELE-ALERT module or the pH-COP.
With the continuous improvement of mobile device performance, the trend in the mobile entertainment application market has become increasingly clear: mobile entertainment applications will be another important business growth area after PC entertainment applications. Through a full analysis of current market demand and trends in mobile entertainment applications, the author has accumulated substantial theoretical knowledge and practical experience. Some new technologies are applied rationally to the design of a mobile entertainment game; the key technologies required for mobile game development are described, the game is analyzed and designed, and a complete game is developed. Using the specific mobile game project "Battle City", the paper details the basic steps and key elements of developing a mobile game on the J2ME platform, focusing on how to use object-oriented thinking to abstract the game roles on the mobile phone and to package the game animation, with specific explanations of the source code.
Electro-discharge machining (EDM) is the most common non-traditional production method for forming metals and non-oxide ceramics. Increasing surface smoothness, increasing the material removal rate, and decreasing proportional tool erosion play important roles in this machining and are directly related to the choice of input parameters. The complicated and non-linear nature of EDM has made the process impossible to handle with usual, classical methods. So far, several intelligence-based methods have been used to optimize this process, most notably artificial neural networks, which model the process as a black box. The problem with this kind of machining appears when a workpiece is composed of a collection of carbon-based materials such as silicon carbide. In this article, besides using the new mono-pulse EDM technique, we design and model a fuzzy neural network. A genetic algorithm is then used to find the optimal machine inputs. In our research, the workpiece is a non-oxide material, silicon carbide, which makes the control process more difficult. Finally, the results are compared with previous methods.
Modern distributed systems use replication to improve the communication delay experienced by their clients. Several techniques have been developed for web server replica placement. One of the previous approaches is the greedy algorithm proposed by Qiu et al., which needs knowledge of the network topology. In this paper, we first introduce a genetic algorithm for web server replica placement. Second, we compare our algorithm with the greedy algorithm of Qiu et al. and with the optimal algorithm. We find that our approach can achieve better results than the greedy algorithm of Qiu et al., but its computation time is longer.
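The greedy baseline can be sketched as below; this is a simplified reading of Qiu et al.'s idea, and the latency matrix is invented for the example.

```python
# Simplified sketch of greedy replica placement: repeatedly place a
# replica at the candidate site that most reduces the total
# client-to-nearest-replica latency.

def greedy_placement(latency, num_replicas):
    """latency[c][s] = delay from client c to candidate site s."""
    chosen = []
    for _ in range(num_replicas):
        best = min(
            (s for s in range(len(latency[0])) if s not in chosen),
            key=lambda s: sum(min(row[t] for t in chosen + [s]) for row in latency),
        )
        chosen.append(best)
    return chosen

# 3 clients x 3 candidate sites; each client is close to one site.
latency = [[1, 9, 9],
           [9, 1, 9],
           [9, 9, 1]]
print(greedy_placement(latency, 2))  # [0, 1]
```

A genetic alternative would encode a placement as a bit vector over sites and evolve it against the same total-latency objective, trading the greedy method's topology knowledge for search time.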
The main focus of this paper is to design a fuzzy-based control algorithm to improve the input power factor of a multiple-connected AC-DC converter fed drive system. It incorporates fuzzy inference principles to generate appropriate PWM pulses for the power switches in the second stage of the power module. The philosophy is developed with a view to reshaping the input current phasor and enabling it to align with the supply voltage wave, in the perspective of improving the input power factor. The closed-loop scheme, evaluated using MATLAB-based simulation, exhibits an enhancement in supply power factor over a range of operating loads, in addition to illustrating the speed-regulating capability of the drive.
The proliferation of unstructured data continues within organizations of all types. This data growth has raised the key question of how to effectively find and manage such data in the growing sea of information, and as a result there has been an increasing demand for efficient search over it. Providing effective indexing and search on unstructured data is not a simple task: unstructured data include documents, images, audio, video and so on. In this paper, we propose an efficient indexing and searching framework for unstructured data, in which text-based and content-based approaches are incorporated for unstructured data retrieval. Our retrieval framework can support various types of queries and can accept multimedia examples and metadata-based documents. The aim of this paper is to use various features of multimedia data to make content-based multimedia retrieval systems more efficient.
The decomposition algorithm is one of the methods that have been applied to convert a large-scale problem into one or more smaller problems. Under uncertainty, one approach is to use fuzzy linear programming (FLP) within this algorithm. FLP problems can be solved using ranking functions of fuzzy numbers. In this paper, we use a new ranking function suggested by Hajarri [11], propose a method for a fuzzy revised simplex algorithm, and then apply this algorithm to a fuzzy decomposition algorithm in the case of a bounded space.
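The specific ranking function of [11] is not reproduced in the abstract; as an illustration of the role such a function plays, a common linear ranking for a triangular fuzzy number (a, b, c) averages its support and core, R = (a + 2b + c) / 4, and comparing fuzzy costs by R lets a simplex step pick the entering variable. The variable names and fuzzy coefficients below are invented.

```python
# Illustrative linear ranking of triangular fuzzy numbers (tfn = (a, b, c)):
# a crisp score lets fuzzy objective coefficients be compared and sorted.

def rank(tfn):
    a, b, c = tfn
    return (a + 2 * b + c) / 4

costs = {"x1": (1, 2, 3), "x2": (0, 1, 5)}   # fuzzy objective coefficients
entering = min(costs, key=lambda v: rank(costs[v]))
print(entering, rank(costs[entering]))  # x2 1.75
```

Any ranking function that is linear in the fuzzy parameters preserves the usual simplex optimality test, which is what makes a "fuzzy revised simplex" step well defined.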
In this paper, a genetic algorithm is used to solve job shop scheduling problems (JSSP). One JSSP example is discussed, and we describe how such problems can be solved by a genetic algorithm. The goal in JSSP is to attain the shortest processing time. Furthermore, we propose a method to obtain the best performance in completing all jobs in the shortest time. The method is based mainly on a genetic algorithm (GA), and crossover between parents always follows the rule that the longest process comes first in the job queue. In other words, chromosomes are sorted from the longest process to the shortest, i.e. "longest job first": first, find the machine with the most total processing time over all its jobs; that machine is the bottleneck. Second, sort the jobs belonging to that specific machine in descending order. Based on the achieved results, "longest jobs first" is the optimal policy in job shop scheduling problems. In our results, the accuracy grows up to 94.7% for total processing time, and the method improves the accuracy of completing all jobs by 4% in the presented example.
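The two-step heuristic above (find the bottleneck machine, then sort its jobs longest-first) can be sketched directly; the operation data are invented, and a full GA would use orderings like this only as chromosome seeds.

```python
# Sketch of "longest job first" seeding: locate the bottleneck machine
# (largest total processing time) and order its operations longest-first.
# Each operation is a (job, machine, processing_time) tuple.

ops = [("J1", "M1", 5), ("J2", "M1", 9), ("J3", "M1", 3),
       ("J1", "M2", 2), ("J2", "M2", 4)]

def bottleneck(ops):
    load = {}
    for _, m, t in ops:
        load[m] = load.get(m, 0) + t
    return max(load, key=load.get)

def seed_chromosome(ops):
    m = bottleneck(ops)
    on_m = sorted((o for o in ops if o[1] == m), key=lambda o: -o[2])
    rest = [o for o in ops if o[1] != m]
    return on_m + rest

print(bottleneck(ops))                           # M1 (load 17 vs 6)
print([o[0] for o in seed_chromosome(ops)][:3])  # ['J2', 'J1', 'J3']
```

Crossover operators then have to preserve this precedence on the bottleneck machine, which is the constraint the abstract attributes to its GA.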
We describe the requirements and architecture for decision making in the context of a home safety system. The architecture is capable of both local and network-supported decision making. To develop and test the architecture in practice, we define a use case of an elderly man going about his daily chores. We also discuss the ethical concerns posed by such a system.
Existing data storage formats cannot meet the needs of information analysis, which has brought the data warehouse onto the historical stage: a data warehouse is a data collection specially designed to support business decision making. With a data warehouse, a company stores all the information it collects in the warehouse, organized in a way that makes the information easy to access and valuable. This paper focuses on the establishment, analysis and design of data warehouses, presents two data warehouse models, and compares them.
The design and implementation of a real-time database system is a very complicated issue, so a simulator for real-time algorithm research and evaluation is helpful. In this paper, we propose a simulator named AgiSim for real-time transaction scheduling and concurrency control, through which researchers can test, simulate and evaluate many real-time transaction scheduling and concurrency control algorithms. AgiSim is composed of a transaction generation component, a transaction scheduling component and a data management component. The simulator is designed with UML; a state diagram and a sequence diagram of real-time transaction scheduling are given. Two mixed transaction schedulers are designed and simulated in AgiSim.
Support vector machines (SVMs) were initially used for binary classification. How to generalize two-class classification to multi-class classification remains a problem requiring further investigation and study. This paper presents a general overview of existing representative methods for multi-category support vector machines and compares their performance. The algorithms are then applied to Chinese question classification; the Chinese question classification hierarchy and the feature selection for questions are also discussed. Four SVM multi-category classification algorithms are applied to Chinese question classification, and contrast experiments are performed. The experimental results show that the binary-tree algorithm is more effective than the other algorithms for Chinese question classification.
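The binary-tree reduction the experiments favor can be sketched structurally as follows; classes are split recursively so only about log2(N) binary decisions fire per question. The question classes, thresholds and the toy one-feature "classifier" at each node are invented for the example, where a real system would train an SVM per node.

```python
# Sketch of binary-tree multi-class classification: each internal node
# holds one binary decision; leaves hold a single class label.

class Node:
    def __init__(self, classes, threshold=None, left=None, right=None):
        self.classes, self.threshold = classes, threshold
        self.left, self.right = left, right

    def predict(self, x):
        if len(self.classes) == 1:
            return self.classes[0]          # leaf: class decided
        branch = self.left if x < self.threshold else self.right
        return branch.predict(x)            # descend one binary decision

# Tree over four assumed question classes, split by a toy feature value.
tree = Node(["PERSON", "PLACE", "TIME", "NUMBER"], threshold=2.0,
            left=Node(["PERSON", "PLACE"], threshold=1.0,
                      left=Node(["PERSON"]), right=Node(["PLACE"])),
            right=Node(["TIME", "NUMBER"], threshold=3.0,
                       left=Node(["TIME"]), right=Node(["NUMBER"])))
print(tree.predict(0.5), tree.predict(3.5))  # PERSON NUMBER
```

Compared with one-vs-one or one-vs-rest schemes, the tree trains N-1 classifiers and evaluates only a root-to-leaf path, which is part of why it can win on a hierarchy like question types.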
Time-dependent performance degradation due to negative bias temperature instability (NBTI) is one of the most important reliability concerns for VLSI circuits in the deep nanoscale regime. In the worst case, NBTI can degrade circuit timing by more than 10% over a period of 3 years. Reliability decisions taken early in system design can bring significant benefits in terms of design quality. The proposed approach uses latches and NBTI-aware scheduling of the data flow graph (DFG) to overcome the timing violations caused by NBTI. The experimental results suggest that it incurs very low area overhead and no performance penalty. We propose an algorithm to schedule the DFG and show results for some common filters, such as the EW and AR filters.