A 3D virtual-reality world used for medical therapies needs clear images without defective or noisy pixels. This is essential for portable equipment used by several patients inside the same building. To reduce the resources of the system, a defective-pixel correction and denoising stage has been implemented on an FPGA. Two low-resource algorithms are proposed, one for defective-pixel correction and one for noise reduction. The correction algorithms are integrated on the Bayer Color Filter Array of a stereoscopic video system, respecting real-time constraints. A review of state-of-the-art denoising implementations shows that they use too many resources for the target system; therefore, a new low-resource denoising algorithm is presented. It is derived from the efficient Non-Local Means (NLM) denoising algorithm, which is difficult to implement on an FPGA. The new denoising method discussed in this paper does not use any multiplier and consumes only 3.8k ALUTs.
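As a point of reference for the multiplier-free idea, the sketch below shows a simplified NLM-style denoiser in which the similarity weights are quantised to powers of two, so that a hardware version could replace the weighted sum by shifts and adds. This is an illustrative software model under our own assumptions; the function name nlm_pow2 and its parameters are hypothetical and do not reproduce the paper's FPGA algorithm.

```python
import numpy as np

def nlm_pow2(img, search=5, patch=3, h=10.0):
    """Simplified non-local-means-style denoiser whose similarity weights
    are quantised to powers of two (shift/add friendly).  Reference model
    only; not the paper's FPGA implementation."""
    pad = search // 2 + patch // 2
    p = patch // 2
    padded = np.pad(img.astype(np.float32), pad, mode="reflect")
    out = np.zeros_like(img, dtype=np.float32)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            cy, cx = y + pad, x + pad
            ref = padded[cy - p:cy + p + 1, cx - p:cx + p + 1]
            num, den = 0.0, 0.0
            for dy in range(-(search // 2), search // 2 + 1):
                for dx in range(-(search // 2), search // 2 + 1):
                    cand = padded[cy + dy - p:cy + dy + p + 1,
                                  cx + dx - p:cx + dx + p + 1]
                    d2 = np.mean((ref - cand) ** 2)       # patch distance
                    w = np.exp(-d2 / (h * h))             # classical NLM weight
                    # quantise the weight to the nearest power of two
                    w_q = 2.0 ** np.round(np.log2(max(float(w), 2.0 ** -8)))
                    num += w_q * padded[cy + dy, cx + dx]
                    den += w_q
            out[y, x] = num / den
    return out
```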
Nowadays, high-speed imaging offers powerful investigation possibilities for a wide variety of applications such as motion studies or manufacturing developments. Moreover, thanks to progress in electronics, real-time processing can be implemented in high-speed acquisition systems. Important information can be extracted from the images in real time and then used for on-line control. We have therefore developed a high-speed smart camera with a high-speed CMOS sensor, typically 500 fps at a 1.3-megapixel resolution. Several specific processing operations have been implemented inside an embedded FPGA operating on the high-speed data flow. The processing is mainly dedicated to feature extraction such as edge detection or image analysis, and finally marker extraction and profilometry. In every case, the on-board processing reduces the large data flow (6.55 Gbps) and allows transfer over a simple serial output link such as USB 2.0. This paper presents the high-speed smart camera and focuses on two processing implementations: marker extraction and the related profilometry measurement. In the marker extraction mode, the center of mass of each marker is determined by a combination of image filters, and only the position of each center is transferred via the USB 2.0 link. For profilometry measurements, a simplified algorithm has been implemented at low cost in terms of hardware resources. The positions of the markers or the different object profiles can be determined in real time at 500 fps with full-resolution images. A higher rate can be reached with a lower resolution (e.g., 500,000 profiles per second for a single-row image).
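A minimal software model of the marker-extraction step might binarise the frame, label connected marker blobs, and return their grey-level-weighted centres of mass. The sketch below (using scipy.ndimage, with a hypothetical threshold parameter) is only a reference model of that idea, not the camera's FPGA pipeline.

```python
import numpy as np
from scipy import ndimage

def marker_centers(frame, threshold=128):
    """Binarise the frame, label connected marker blobs and return the
    centre of mass of each blob.  Hypothetical software reference model."""
    binary = frame > threshold
    labels, n = ndimage.label(binary)
    # centre of mass weighted by the original grey levels of each blob
    return ndimage.center_of_mass(frame, labels, list(range(1, n + 1)))
```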
This work describes an image acquisition and processing system based on a new co-processor architecture designed for CMOS sensor imaging. The platform allows configuring a wide variety of acquisition modes (random region acquisition, variable image size, multi-exposure imaging) as well as high-performance image pre-processing (filtering, de-noising, binarisation, pattern recognition). The acquisition and a processing stage are driven by an FPGA, followed by a Nexperia processor. The data transfer from the FPGA board to the Nexperia processor can be pipelined with the co-processor to increase the achievable throughput. The co-processor architecture has been designed as a unit that can be configured on the fly, in terms of the type and number of chained processing operations (up to 8 successive pre-defined pre-processing steps), during an image acquisition process that is dynamically defined by the application. Examples of acquisition and processing performance are reported and compared with classical image acquisition systems based on standard modular PC platforms. The experimental results show a considerable increase in performance. For instance, bar-code reading for postal-sorting applications on a PC platform is limited to about 15 images (letters) per second. The new platform, besides being more compact and easier to install in hostile environments, can successfully analyze up to 50 images/s.
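To make the "configured on the fly" idea concrete, the following hypothetical Python model chains up to eight pre-defined operations that can be reconfigured between acquisitions; it mirrors only the concept, not the actual co-processor design.

```python
from typing import Callable, Sequence
import numpy as np

Image = np.ndarray
Op = Callable[[Image], Image]

class ProcessingChain:
    """Hypothetical software model of a co-processor chaining up to eight
    pre-defined pre-processing operations, reconfigurable on the fly."""
    MAX_OPS = 8

    def __init__(self) -> None:
        self._ops: list[Op] = []

    def configure(self, ops: Sequence[Op]) -> None:
        if len(ops) > self.MAX_OPS:
            raise ValueError("at most 8 chained operations are supported")
        self._ops = list(ops)

    def __call__(self, image: Image) -> Image:
        for op in self._ops:          # apply the configured chain in order
            image = op(image)
        return image

# example configuration: threshold followed by inversion
chain = ProcessingChain()
chain.configure([lambda im: (im > 100).astype(np.uint8) * 255,
                 lambda im: 255 - im])
```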
An architecture for fast video object recognition is proposed. This architecture is based on an approximation of a feature-extraction function, Zernike moments, and an approximation of a classification framework, Support Vector Machines (SVM). We review the principles of the moment-based method and the principles of the approximation method, dithering. We evaluate the performance of two moment-based methods, Hu invariants and Zernike moments, and the implementation cost of the better one. We review the principles of the classification method and present the combination algorithm, which consists of rejecting ambiguities in the learning set using the SVM decision before running the learning step of the hyperrectangles-based method. We present results obtained on a standard database, COIL-100. The results are evaluated with regard to hardware cost as well as classification performance.
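For illustration, the first two Hu moment invariants can be computed from normalised central moments as sketched below; this is a generic reference implementation of the moment-based features discussed, not the paper's hardware approximation.

```python
import numpy as np

def hu_first_two(img):
    """First two Hu moment invariants of a grey-level image, computed from
    normalised central moments.  Generic reference model."""
    img = img.astype(np.float64)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    xc, yc = (x * img).sum() / m00, (y * img).sum() / m00

    def mu(p, q):                       # central moment mu_pq
        return (((x - xc) ** p) * ((y - yc) ** q) * img).sum()

    def eta(p, q):                      # normalised central moment eta_pq
        return mu(p, q) / m00 ** (1 + (p + q) / 2)

    phi1 = eta(2, 0) + eta(0, 2)
    phi2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return phi1, phi2
```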
High-speed video cameras are powerful tools for investigating, for instance, the dynamics of fluids or the movements of mechanical parts in manufacturing processes. In the past years, the use of CMOS sensors instead of CCDs has made possible the development of high-speed video cameras offering digital outputs, readout flexibility, and lower manufacturing costs. In this context, we designed a new fast CMOS camera with a resolution of 1280×1024 pixels at 500 fps. In order to transmit only the useful information from the fast images, we studied specific algorithms such as edge detection, wavelet analysis, image compression, and object tracking. These image processing algorithms have been implemented in an FPGA embedded inside the camera. This FPGA technology allows us to process fast images in real time.
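As an example of the kind of edge-detection kernel that maps naturally onto an FPGA line-buffer pipeline, a 3×3 Sobel gradient-magnitude reference model (in software, for comparison purposes only, with hypothetical naming) could look like this:

```python
import numpy as np
from scipy.signal import convolve2d

def sobel_edges(frame):
    """3x3 Sobel gradient magnitude.  Software reference model only."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
    ky = kx.T
    f = frame.astype(np.float32)
    gx = convolve2d(f, kx, mode="same", boundary="symm")  # horizontal gradient
    gy = convolve2d(f, ky, mode="same", boundary="symm")  # vertical gradient
    return np.hypot(gx, gy)
```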
High-speed video cameras are powerful tools for investigating, for instance, fluid dynamics or the movements of mechanical parts in manufacturing processes. In the past five years, the use of CMOS sensors instead of CCDs has facilitated the development of high-speed video cameras offering digital outputs, readout flexibility, and lower manufacturing costs. Still, the huge data flow provided by the sensor cannot easily be transferred or processed and thus must generally be stored temporarily in fast local RAM. Since this RAM is limited in size, the recording time in the camera is only a few seconds long. We therefore sought an alternative solution allowing continuous recording and developed a real-time image compression scheme to reduce the data flow. We tested three algorithms: run-length encoding, block coding, and wavelet-based compression. These compression algorithms have been implemented in a Virtex-II 1000 FPGA and allow real-time compression factors between 5 and 10 with a PSNR greater than 35 dB. This compression factor allowed us to link a new high-speed CMOS video camera to a PC using a single USB 2.0 connection. The full flow of 500 fps in 1280×1024 format is transferred to the computer in real time.
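As a rough illustration of the simplest of the three schemes and of the quality metric, the sketch below shows a toy run-length encoder for one image row and the PSNR computation used to judge reconstruction quality. Both are generic reference models under our own assumptions, not the FPGA implementations.

```python
import numpy as np

def run_length_encode(row):
    """Run-length encode one image row as (value, run-length) pairs.
    Toy reference model of the simplest tested scheme."""
    row = np.asarray(row)
    change = np.flatnonzero(np.diff(row)) + 1          # indices where the value changes
    starts = np.concatenate(([0], change))
    runs = np.diff(np.concatenate((starts, [len(row)])))
    return list(zip(row[starts].tolist(), runs.tolist()))

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between original and reconstruction."""
    mse = np.mean((original.astype(np.float64) -
                   reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```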