Lung cancer has been the leading cause of cancer deaths for the past several years. Benign–malignant lung nodule classification is vital in lung nodule detection and can aid the early diagnosis of lung cancer. Most existing works extract features from chest CT images using well-designed networks, which require substantial expert effort. To automate the manual process of network design, we propose an attention-guided differentiable architecture search (A-DARTS) method, which directly searches for the optimal network on chest CT images. In addition, A-DARTS utilizes an attention mechanism to alleviate the initialization-sensitive nature of the searched network while enhancing its feature representation ability. Extensive experiments on the Lung Image Database Consortium image collection (LIDC-IDRI) benchmark dataset show that the proposed method achieves a lung nodule classification accuracy of 92.93%, which is superior to state-of-the-art methods.
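The two ingredients the abstract names can be illustrated with a minimal numpy sketch: a DARTS-style mixed operation (a softmax-weighted sum of candidate operations over architecture parameters) followed by a channel-attention gate. The candidate operations, the sigmoid-of-mean gating, and all variable names here are simplified stand-ins, not the paper's actual modules.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def mixed_op(x, alphas, ops):
    """DARTS-style continuous relaxation: the edge output is a
    softmax-weighted sum of all candidate operations."""
    w = softmax(alphas)
    return sum(wi * op(x) for wi, op in zip(w, ops))

def channel_attention(x):
    """Hypothetical squeeze-and-excitation-style gate: scale each
    channel by a sigmoid of its global average. A real attention
    module would use learned weights here."""
    s = x.mean(axis=(1, 2), keepdims=True)      # squeeze to per-channel stats
    return x * (1.0 / (1.0 + np.exp(-s)))       # gate each channel

# Toy candidate operations on a (channels, H, W) feature map.
ops = [lambda x: x,                  # identity / skip connection
       lambda x: np.maximum(x, 0),   # nonlinearity as a stand-in "conv" op
       lambda x: np.zeros_like(x)]   # the "zero" op DARTS includes

x = np.random.randn(4, 8, 8)
alphas = np.array([0.5, 1.0, -1.0])  # architecture parameters, learned in practice
y = channel_attention(mixed_op(x, alphas, ops))
assert y.shape == x.shape
```

In the full method the architecture parameters `alphas` are optimized jointly with the network weights by gradient descent; after search, each edge keeps only its highest-weighted operation.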
Partial occlusion is one of the most challenging problems in object tracking. In this paper, we present an approach to address this problem using an effective appearance model with two innovations. First, in contrast to the widely used color histogram, which models the appearance of an object using color information alone, we assert that both color and texture are important cues for tracking, especially against complex backgrounds. We therefore propose a novel local descriptor, named the local color texture pattern (LCTP), to model the appearance of the object with color and texture information simultaneously. Second, the global color histogram completely ignores the spatial layout of an object and is sensitive to partial occlusion. In this work, we overcome this limitation with a block-division scheme: 1) divide the target into multiple blocks and represent each block with an LCTP histogram; 2) using a selectivity strategy, select the blocks that are not occluded and combine their similarities to obtain the final similarity measure. Experimental results demonstrate that the proposed method is more robust to partial occlusion than two state-of-the-art algorithms.
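The block-division scheme and selectivity strategy can be sketched as follows. This is a simplified stand-in: it uses plain intensity histograms instead of LCTP histograms, and approximates "select blocks that are not occluded" by keeping only the best-matching fraction of blocks, since occluded blocks score poorly. The grid size, bin count, and `keep_ratio` are assumptions.

```python
import numpy as np

def block_histograms(patch, grid=(3, 3), bins=8):
    """Split a grayscale patch into a grid of blocks and return one
    normalized histogram per block (the paper uses LCTP histograms;
    intensity histograms are a simplified stand-in)."""
    H, W = patch.shape
    hs, ws = H // grid[0], W // grid[1]
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = patch[i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
            h, _ = np.histogram(block, bins=bins, range=(0, 256))
            hists.append(h / max(h.sum(), 1))
    return np.array(hists)

def bhattacharyya(p, q):
    """Similarity between two normalized histograms (1.0 = identical)."""
    return np.sqrt(p * q).sum()

def occlusion_robust_similarity(model_hists, cand_hists, keep_ratio=0.6):
    """Score every block against the model, then keep only the
    best-matching fraction: occluded blocks match badly and are
    dropped before averaging into the final similarity."""
    sims = np.array([bhattacharyya(p, q)
                     for p, q in zip(model_hists, cand_hists)])
    k = max(1, int(len(sims) * keep_ratio))
    return np.sort(sims)[-k:].mean()

# Usage: a candidate identical to the model scores ~1.0.
patch = np.random.randint(0, 256, (48, 48)).astype(float)
model = block_histograms(patch)
assert abs(occlusion_robust_similarity(model, model) - 1.0) < 1e-6
```

Dropping the worst-scoring blocks is what makes the measure robust: an occluder corrupts only the blocks it covers, and those never enter the final average.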
Object tracking plays a very important role in many computer vision applications. However, its performance deteriorates significantly under the challenges of complex scenes, such as pose and illumination changes, cluttered backgrounds, and so on. In this paper, we propose a robust object tracking algorithm that exploits both global color and local scale-invariant feature transform (SIFT) features in a particle filter framework. Due to the expensive computational cost of SIFT features, the proposed tracker adopts a sped-up variant of SIFT, SURF, to extract local features. Specifically, the proposed method first finds matching points between the target model and each target candidate; then the weight of the corresponding particle based on the local features is computed as the proportion of that particle's matching points to the matching points of all particles; finally, the weight of the particle is obtained by combining the color and SURF weights in a probabilistic way. Experimental results on a variety of challenging videos verify that the proposed method is robust to pose and illumination changes and is significantly superior to the standard particle filter tracker and the mean shift tracker.
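The weighting scheme described above can be sketched in a few lines. The proportion-of-matches weight follows the abstract directly; the fusion step is one plausible reading of "combining ... in a probabilistic way" (treating the two cues as independent likelihoods and renormalizing), and the paper may instead use a weighted sum. Keypoint detection and matching themselves are omitted.

```python
import numpy as np

def surf_weights(match_counts):
    """Local-feature weight of particle i: its share of all matched
    keypoints across particles, as described in the abstract."""
    m = np.asarray(match_counts, dtype=float)
    total = m.sum()
    if total == 0:                          # no matches anywhere: uninformative cue
        return np.full(len(m), 1.0 / len(m))
    return m / total

def combine_weights(w_color, w_surf, eps=1e-12):
    """Assumed probabilistic fusion: multiply the per-particle color
    and SURF likelihoods and renormalize to a distribution."""
    w = (np.asarray(w_color, dtype=float) + eps) * (np.asarray(w_surf, dtype=float) + eps)
    return w / w.sum()

# Usage: three particles with color weights and SURF match counts.
w_color = np.array([0.2, 0.5, 0.3])
w = combine_weights(w_color, surf_weights([3, 0, 7]))
assert abs(w.sum() - 1.0) < 1e-9
```

Multiplicative fusion means a particle must be supported by both cues to keep a high weight, which is what suppresses color-only false positives under illumination change.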
In this paper, we propose a novel and robust object tracking algorithm based on sparse representation. Object tracking is formulated as an object recognition problem rather than a traditional search problem. All target candidates are considered as training samples, and the target template is represented as a linear combination of all training samples. The combination coefficients are obtained by solving for the minimum l1-norm solution. The final tracking result is the target candidate associated with the non-zero coefficient. Experimental results on two challenging test sequences show that the proposed method is more effective than the widely used mean shift tracker.
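The formulation above can be sketched with a small numpy example: stack the candidates as columns of a dictionary, represent the template as their sparse linear combination, and pick the candidate carrying the dominant coefficient. Iterative soft thresholding (ISTA) is used here as a generic l1 solver; the paper's actual solver, regularization weight, and feature dimensions are not specified in the abstract.

```python
import numpy as np

def ista_l1(A, y, lam=0.05, steps=500):
    """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft
    thresholding -- a simple stand-in for the minimum l1-norm solver."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - y)               # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Columns of A are feature vectors of the target candidates;
# y is the target template's feature vector.
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 10))
A /= np.linalg.norm(A, axis=0)              # unit-norm candidates
y = A[:, 3].copy()                          # template coincides with candidate 3
x = ista_l1(A, y)
best = int(np.argmax(np.abs(x)))            # candidate with the dominant coefficient
```

Because the template is exactly candidate 3 in this toy setup, the sparse solution concentrates its weight on that column, and `best` recovers it.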