Current clinical image quality assessment techniques mainly analyze image quality for the imaging system in terms of factors such as the capture system DQE and MTF, the exposure technique, and the particular image processing method and its parameters. However, when assessing a clinical image, radiologists seldom refer to these factors; rather, they examine several specific regions of the image to see whether the image is suitable for diagnosis. In this work, we developed a new strategy to learn and simulate the evaluation process that radiologists apply to actual clinical chest images. Based on this strategy, a preliminary study was conducted on 254 digital chest radiographs (38 AP without grids, 35 AP with 6:1 ratio grids, and 151 PA with 10:1 ratio grids). First, ten region-based perceptual qualities were identified through an observer study. Each quality was characterized in terms of a physical quantity measured from the image, and as a first step, the three physical quantities in the lung region were implemented algorithmically. A pilot observer study was performed to verify the correlation between the perceptual qualities and the corresponding quantitative physical measures. The results demonstrated that our region-based metrics show promising performance for grading perceptual properties of chest radiographs.
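As an illustration of this kind of region-based measurement, the sketch below computes one hypothetical physical quantity (RMS contrast of a lung ROI) and correlates it with observer-assigned perceptual scores. The ROI coordinates, the metric choice, and the synthetic data are all assumptions for demonstration, not the study's actual metrics.

```python
# Illustrative sketch (not the authors' exact metrics): compute one
# physical quantity from an assumed lung ROI and correlate it with
# observer-assigned perceptual scores.
import numpy as np
from scipy.stats import spearmanr

def lung_rms_contrast(image, roi):
    """RMS contrast of the lung region; roi = (row0, row1, col0, col1)."""
    r0, r1, c0, c1 = roi
    region = image[r0:r1, c0:c1].astype(np.float64)
    return region.std() / (region.mean() + 1e-9)

# Hypothetical data: one metric value and one observer score per image.
rng = np.random.default_rng(0)
images = [rng.random((2048, 2048)) for _ in range(10)]
metric_values = [lung_rms_contrast(im, (200, 1200, 300, 1700)) for im in images]
observer_scores = rng.integers(1, 6, size=10)  # 5-point perceptual scale

rho, p = spearmanr(metric_values, observer_scores)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```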
Before a radiographic image is sent to a picture archiving and communications system (PACS), its projection information needs to be correctly identified at the capture modality to facilitate image archiving and retrieval. Currently, radiographic images are annotated manually by technologists, which is labor-intensive and costly. Moreover, manual annotation errors occur frequently during image acquisition. To address this issue, an automatic image recognition method is developed. It first extracts a set of visual features from the most indicative region in a radiograph, and then uses a family of classifiers, each of which is trained for a specific projection, to determine the most appropriate projection for the image. The method has been tested on a large number of clinical images and has shown excellent robustness and efficiency.
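A minimal sketch of the classifier-family idea follows: one binary classifier per projection, with the highest-scoring classifier deciding the label. The projection names, the feature stub, and the logistic-regression choice are illustrative assumptions, not the paper's actual design.

```python
# One-vs-rest "family of classifiers" sketch with placeholder features.
import numpy as np
from sklearn.linear_model import LogisticRegression

PROJECTIONS = ["PA_chest", "AP_chest", "lateral_chest"]  # hypothetical labels

def extract_features(image):
    # Stand-in for features from the most indicative region.
    return np.array([image.mean(), image.std(), np.median(image)])

rng = np.random.default_rng(1)
X = np.array([extract_features(rng.random((64, 64))) for _ in range(300)])
y = rng.integers(0, len(PROJECTIONS), size=300)

# One classifier per projection, each trained to detect its own projection.
classifiers = {
    i: LogisticRegression().fit(X, (y == i).astype(int))
    for i in range(len(PROJECTIONS))
}

def classify(image):
    f = extract_features(image).reshape(1, -1)
    scores = {i: clf.predict_proba(f)[0, 1] for i, clf in classifiers.items()}
    return PROJECTIONS[max(scores, key=scores.get)]

print(classify(rng.random((64, 64))))
```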
Image blur introduced by patient motion is one of the most frequently cited reasons for image rejection in diagnostic radiographic imaging. The goal of the present work is to provide an automated method for detecting anatomical motion blur in digital radiographic images, to help improve image quality and facilitate workflow in the radiology department. To achieve this goal, the method first reorients the image according to a predetermined hanging protocol. It then locates the primary anatomy in the radiograph and extracts the region most indicative of motion blur, i.e., the region of interest (ROI). The third step computes a set of motion-sensitive features from the extracted ROI. Finally, the extracted features are evaluated by a classifier that has been trained to detect motion blur. Preliminary experiments show promising results, with 86% detection sensitivity, 72% specificity, and an overall accuracy of 76%.
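The abstract does not spell out the motion-sensitive features; the sketch below uses two common blur cues (gradient energy and high-frequency spectral content) as hedged stand-ins for the third step.

```python
# Illustrative motion-sensitive features for a blur detector; the
# paper's exact features are not specified here.
import numpy as np

def motion_blur_features(roi):
    roi = roi.astype(np.float64)
    gy, gx = np.gradient(roi)
    grad_energy = np.mean(gx**2 + gy**2)          # blur lowers gradient energy
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(roi)))
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    yy, xx = np.ogrid[:h, :w]
    high = ((yy - cy)**2 + (xx - cx)**2) > (min(h, w) // 4)**2
    hf_ratio = spectrum[high].sum() / spectrum.sum()  # blur lowers HF content
    return np.array([grad_energy, hf_ratio])

# These feature vectors would then be fed to a trained classifier.
roi = np.random.default_rng(2).random((256, 256))
print(motion_blur_features(roi))
```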
In picture archiving and communications systems (PACS), images need to be displayed in standardized ways for radiologists' interpretation. However, for most radiographs acquired by computed radiography (CR), digital radiography (DR), or digitized films, the image orientation is undetermined because of variations in examination conditions and patient positioning. To address this problem, an automatic orientation correction method is presented. It first detects the most indicative region for orientation in a radiograph, and then extracts a set of low-level, rotation-sensitive visual features from that region. Based on these features, a support vector machine (SVM) classifier is trained to recognize the correct orientation of the radiograph and reorient it to the desired position. A large-scale experiment has been conducted on more than 12,000 radiographs covering a wide variety of body parts and projections to validate the method. The overall performance is quite promising, with the success rate of orientation correction reaching 95.2%.
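One way to realize this correction loop is sketched below, under the assumption of a four-class (0°/90°/180°/270°) formulation; the feature stub and the synthetic training data are purely illustrative.

```python
# Hedged sketch: classify the rotation with an SVM, then undo it.
import numpy as np
from sklearn.svm import SVC

def rotation_features(image):
    # Placeholder: row/column intensity profiles are crude but rotation-sensitive.
    return np.concatenate([image.mean(axis=0)[:32], image.mean(axis=1)[:32]])

rng = np.random.default_rng(3)
base = rng.random((64, 64))
# Synthetic training set: the same image at the four canonical rotations.
X, y = [], []
for k in range(4):
    for _ in range(25):
        noisy = np.rot90(base, k) + 0.05 * rng.random((64, 64))
        X.append(rotation_features(noisy))
        y.append(k)
svm = SVC(kernel="rbf").fit(np.array(X), np.array(y))

def reorient(image):
    k = int(svm.predict(rotation_features(image).reshape(1, -1))[0])
    return np.rot90(image, -k)  # undo the detected rotation

print(reorient(np.rot90(base, 2)).shape)
```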
We have developed an active shape model (ASM)-based segmentation scheme that uses the original Cootes et al. formulation for the underlying mechanics of the ASM but improves the model by fixing selected nodes at specific structural boundaries called transitional landmarks. Transitional landmarks identify the change from one boundary type (such as lung-field/heart) to another (lung-field/diaphragm). This results in a multi-segmented lung-field boundary where each segment corresponds to a specific boundary type (lung-field/heart, lung-field/aorta, lung-field/rib-cage, etc.). The node-specified ASM is built using a fixed set of equally spaced feature nodes for each boundary segment. This allows the nodes to learn local appearance models for a specific boundary type, rather than generalizing over multiple boundary types, which results in a marked improvement in boundary accuracy. In contrast, existing lung-field segmentation algorithms based only on ASM simply space the nodes equally along the entire boundary without specification. We have performed extensive experiments using multiple datasets (public and private) and compared the performance of the proposed scheme with other contour-based methods. Overall, accuracy improves by 3-5% over the standard ASM and, more importantly, the improvement corresponds to increased alignment with salient anatomical structures. Furthermore, the automatically generated lung-field masks lead to the same FROC for lung-nodule detection as hand-drawn lung-field masks. The accurate landmarks can easily be used for detecting other structures in the lung field. Based on the related landmarks (mediastinum-heart transition, heart-diaphragm transition), we have extended the work to heart segmentation.
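The node-placement idea can be made concrete with the short sketch below: given a boundary polyline and the indices of its transitional landmarks, a fixed number of equally spaced nodes is placed on each segment, so corresponding nodes always lie on the same boundary type. The toy contour and landmark indices are hypothetical.

```python
# Per-segment node placement between transitional landmarks.
import numpy as np

def resample_segment(points, n_nodes):
    """Place n_nodes equally spaced (by arc length) along a polyline segment."""
    d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    targets = np.linspace(0, d[-1], n_nodes)
    return np.column_stack([np.interp(targets, d, points[:, i]) for i in range(2)])

t = np.linspace(0, np.pi, 200)
boundary = np.column_stack([np.cos(t), np.sin(t)])  # toy lung-field contour
landmarks = [0, 80, 140, 199]  # hypothetical transitional-landmark indices
nodes = np.vstack([resample_segment(boundary[a:b + 1], 10)
                   for a, b in zip(landmarks[:-1], landmarks[1:])])
print(nodes.shape)  # (30, 2): 10 nodes per boundary segment
```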
Chest radiography is one of the most widely used techniques in diagnostic imaging. It makes up at least one third of all conventional diagnostic radiographic procedures in hospitals. However, in both film-screen and computed radiography, images are often digitized with the view and orientation unknown or mislabeled, which causes inefficiency in displaying them in the picture archiving and communication system (PACS). Hence, the goal of this work is to provide a robust, efficient, and automatic hanging protocol for chest radiographs. To achieve this, the method starts with recognition, extracting a set of distinctive features from chest radiographs. Next, a probabilistic classifier is trained on these features to classify the radiographs. The orientation of each radiograph is then identified by an efficient algorithm that locates the neck, heart, and abdomen positions. The initial experiment was performed on radiographs collected from daily routine chest exams in hospitals, and it has shown promising results.
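The abstract does not name the probabilistic classifier; as a hedged stand-in, the sketch below uses a Gaussian naive Bayes model on placeholder features to show the recognition step.

```python
# Stand-in probabilistic classification step (not the paper's model).
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(4)
X_train = rng.random((200, 8))          # placeholder "distinctive features"
y_train = rng.integers(0, 2, size=200)  # 1 = chest radiograph, 0 = other

clf = GaussianNB().fit(X_train, y_train)
probs = clf.predict_proba(rng.random((1, 8)))
print(f"P(chest) = {probs[0, 1]:.2f}")
```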
Processing optimization of digital radiographs requires knowledge of the location and characteristics of both diagnostically relevant and irrelevant image regions. An algorithm has been developed that can automatically detect and extract foreground, background, and anatomy regions from a digital radiograph. This algorithm is independent of exam-type information and can deal with multiply exposed computed radiography (CR) images. First, the image is subsampled, and all further processing is done on the subsampled image to improve efficiency and to reduce the algorithm's dependency on image noise and detector characteristics. Second, an initial background is detected using adaptive thresholding on the cumulative histogram of significant transition pixels, followed by an iterative process based on the background variance. Third, foreground detection is conducted by: (1) classifying all significant transitions using smart edge detection, (2) delineating all lines that are possible collimation blades using the Hough transform, (3) finding candidate partition blade pairs if the image has several radiation fields, (4) partitioning the image into sub-images containing only one radiation field each using a divide-and-conquer process, and (5) identifying the best collimation for each sub-image from a tree-structured hypothesis list. Fourth, the background is regenerated using a region-growing process from identified background “seeds.” Fifth, the background and foreground regions are merged and removed; the rest of the image is labeled, and the large connected regions are identified as anatomy regions. The algorithm has been trained and tested separately with two image sets covering a wide variety of exam types. Each set consists of more than 2700 CR images acquired with KODAK DIRECTVIEW CR 800 Systems. The overall success rate in detecting both foreground and background is 97%.
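As a simplified, hedged illustration of the background steps (not the exact adaptive thresholding described above), the sketch below seeds background from the brightest pixels and grows it via connected-component labeling; the percentile and tolerance values are arbitrary.

```python
# Simplified seed-and-grow background detection on a toy image.
import numpy as np
from scipy import ndimage

def detect_background(image, seed_percentile=99, grow_tolerance=0.05):
    img = image.astype(np.float64) / image.max()
    threshold = np.percentile(img, seed_percentile)
    seeds = img >= threshold                      # direct-exposure "seeds"
    candidate = img >= threshold - grow_tolerance # slightly looser region to grow into
    labels, _ = ndimage.label(candidate)
    keep = np.unique(labels[seeds])               # components touching a seed
    return np.isin(labels, keep[keep > 0])

img = np.random.default_rng(5).random((128, 128))
img[:20, :] += 2.0  # simulate an unattenuated (direct-exposure) strip
print(detect_background(img).sum(), "background pixels")
```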
In this paper, a novel method is provided for the automatic generation of landmarks to construct statistical shape models. The method generates a sparse polygonal approximation for each shape example in the training set and then automatically aligns the shape polygons by minimizing the L2 distance between the turning functions of their polygonal approximations. The turning function measures the angle of the counterclockwise tangent as a function of arc length and is especially suitable for shape alignment, since it is piecewise constant for a polygon and invariant under translation, rotation, and scaling of the polygon. Based on the minimal L2 distance, a shape classifier is used to remove shapes that are very different from the training set, to prevent undesirable distortion of the mean shape. For shapes with non-rigid deformation, such as hands, a local alignment is performed using a visual part decomposition scheme and a partial match algorithm. Finally, a set of salient match pairs is detected and used to generate the landmarks. This method has been successfully applied to various anatomical structures. As expected, a large portion of the shape variability is captured.
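The sketch below is an illustrative (not the authors') implementation of the turning function and its L2 distance: the function maps normalized arc length to cumulative tangent angle, and rotation enters only as a constant offset that can be minimized in closed form. Minimization over the starting vertex, also needed in general, is omitted for brevity.

```python
# Turning function of a polygon and the offset-minimized L2 distance.
import numpy as np

def turning_function(poly, samples=512):
    """Cumulative tangent angle sampled on a uniform normalized-arc-length grid."""
    edges = np.diff(np.vstack([poly, poly[:1]]), axis=0)      # close the polygon
    angles = np.unwrap(np.arctan2(edges[:, 1], edges[:, 0]))  # edge directions
    s = np.r_[0, np.cumsum(np.linalg.norm(edges, axis=1))]
    s /= s[-1]                                                # normalize arc length
    grid = np.linspace(0, 1, samples, endpoint=False)
    return angles[np.searchsorted(s[1:], grid, side="right")]

def turning_l2(poly_a, poly_b):
    ta, tb = turning_function(poly_a), turning_function(poly_b)
    offset = (ta - tb).mean()  # rotation is a constant shift; optimal offset in closed form
    return np.sqrt(np.mean((ta - tb - offset) ** 2))

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
diamond = np.array([[0, -1], [1, 0], [0, 1], [-1, 0]], float)  # square rotated 45 degrees
print(turning_l2(square, diamond))  # ~0: same shape up to rotation and scale
```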
With the advent of computed radiography (CR) and digital radiography (DR), image understanding and classification in medical image databases have attracted considerable attention. In this paper, we propose a knowledge-based image understanding and classification system for medical image databases. An object-oriented knowledge model is introduced, based on the idea that the content features of a medical image must hierarchically match the related knowledge model; by finding the best-matching model, the input image can be classified. The implementation of the system includes three stages. The first stage focuses on matching the coarse pattern of the model class and has three steps: image preprocessing, feature extraction, and neural network classification. Once the coarse shape classification is done, a small set of plausible model candidates is employed for a detailed match in the second stage, whose outputs indicate which models are likely contained in the processed image. Finally, an evaluation strategy is used to further confirm the results. The performance of the system has been tested on different types of digital radiographs, including the pelvis, ankle, and elbow. The experimental results suggest that the system prototype is applicable and robust, with an accuracy of nearly 70% on our image databases.
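To make the coarse-to-fine control flow concrete, here is a runnable sketch in which every component is a trivial hypothetical stub; only the three-stage structure mirrors the description above.

```python
# Three-stage match pipeline with hypothetical stub components.
import numpy as np

def preprocess(img):
    return (img - img.mean()) / (img.std() + 1e-9)

def extract_features(img):
    return np.array([img.mean(), img.std(), img.max()])

def coarse_classifier(features, models):  # stage 1: keep a small plausible subset
    return models[:2]

def detailed_match(img, model):           # stage 2: score one candidate model
    return np.random.default_rng(len(model)).random()

def evaluate(model, score, img):          # stage 3: confirm or reject the best match
    return score > 0.3

def classify_radiograph(image, model_classes):
    features = extract_features(preprocess(image))
    candidates = coarse_classifier(features, model_classes)
    best, score = max(((m, detailed_match(image, m)) for m in candidates),
                      key=lambda pair: pair[1])
    return best if evaluate(best, score, image) else "rejected"

print(classify_radiograph(np.random.default_rng(6).random((64, 64)),
                          ["pelvis", "ankle", "elbow"]))
```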
Object-oriented knowledge representation is considered a natural and effective approach. Nevertheless, the use of object-oriented techniques within complex image analysis has not grown as rapidly as it has in other fields. We argue that one of the major problems comes from the difficulty of conceiving a comprehensive framework that copes with the different abstraction levels and the vision task operations. To overcome this drawback, we present a new knowledge model for medical image content analysis based on the object-oriented paradigm. The new model abstracts common properties from different types of medical images using three attribute parts: description, component, and semantic graph. It also specifies its actions: scheduling the detection procedure, deforming the shape of model components to match the corresponding anatomies in images, selecting the best match candidates, and verifying combination graphs built from the detected candidates against the semantic graph defined in the model. The performance of the proposed model has been tested on pelvis digital radiographs. Initial results are encouraging.
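A minimal dataclass sketch of such a model follows, with the three attribute parts and two of the action hooks; all field names, relations, and saliency values are assumptions for illustration.

```python
# Hypothetical dataclass mirroring the model's three attribute parts.
from dataclasses import dataclass, field

@dataclass
class AnatomyModel:
    description: dict = field(default_factory=dict)    # e.g. name, modality, view
    components: list = field(default_factory=list)     # deformable shape parts
    semantic_graph: dict = field(default_factory=dict) # spatial relations between parts

    def schedule_detection(self):
        # Order components so the most salient anatomies are detected first.
        return sorted(self.components, key=lambda c: -c.get("saliency", 0))

    def verify(self, candidate_graph):
        # Accept a detection when its relations match the semantic graph.
        return all(candidate_graph.get(k) == v for k, v in self.semantic_graph.items())

pelvis = AnatomyModel(
    description={"name": "pelvis", "modality": "DR"},
    components=[{"name": "iliac_wing", "saliency": 0.9},
                {"name": "femoral_head", "saliency": 0.7}],
    semantic_graph={("iliac_wing", "femoral_head"): "above"},
)
print([c["name"] for c in pelvis.schedule_detection()])
print(pelvis.verify({("iliac_wing", "femoral_head"): "above"}))
```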