This PDF file contains the front matter associated with SPIE Proceedings Volume 7249, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
A new breed of CMOS color sensor, called the MonoColor sensor, has been developed for barcode reading applications in the AIDC industry. The RGBW color filter array (CFA) of a MonoColor sensor is arranged as an 8 × 8 pixel tile in which only 4 pixels are color (RGB) pixels and the remaining 60 are transparent, or monochrome. Because the majority of pixels are monochrome, the MonoColor sensor maintains 98% of the barcode decode performance of a pure monochrome CMOS sensor. With the help of a monochrome/color pixel fusion technique, the resulting color pictures have color quality, in terms of Color Semantic Error (CSE), similar to that of a Bayer-pattern (RGB) CMOS color camera. Since monochrome pixels are more sensitive than color pixels, a MonoColor sensor generally produces a color picture that is about 2× brighter and has higher luminance pixel resolution.
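The CFA layout and the fusion algorithm are not detailed in the abstract; as a rough, hypothetical sketch of the idea (a sparse set of RGB samples fused onto a dense monochrome luminance image), the following assumes one R, two G, and one B site per 8 × 8 tile and a crude global-chroma fusion. The positions and the fusion rule are assumptions, not the authors' design.

```python
import numpy as np

# Hypothetical 8 x 8 RGBW tile: 4 color sites (positions assumed here),
# 60 transparent/monochrome ("W") sites, i.e. 4/64 = 6.25% color pixels.
TILE = np.full((8, 8), "W", dtype="<U1")
TILE[2, 2], TILE[2, 6], TILE[6, 2], TILE[6, 6] = "R", "G", "G", "B"

def fuse(mono, raw):
    """Toy mono/color fusion.
    mono: full-resolution luminance (monochrome pixels, with the few color
          sites filled in by interpolation beforehand).
    raw:  the raw mosaic containing the color samples at the TILE positions.
    A real design would interpolate chroma spatially; here a single global
    chroma ratio per channel is used, just to show the idea."""
    h, w = mono.shape
    cfa = np.tile(TILE, (h // 8, w // 8))
    fused = np.empty((h, w, 3))
    for c, ch in enumerate("RGB"):
        ys, xs = np.where(cfa == ch)
        ratio = np.mean(raw[ys, xs] / np.maximum(mono[ys, xs], 1e-6))
        fused[..., c] = mono * ratio
    return fused
```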
Photogate APS pixels use a potential well created by a MOS capacitor to capture photocarriers. However, optical absorption in the polysilicon gate reduces photon transmission. We investigate multi-fingered photogates with openings in the gate to increase photon collection. Standard and multi-fingered photogates were implemented in a 0.18 μm CMOS process, with the enclosed detection area divided by 1, 3, and 5 poly fingers. A preliminary response comparison with standard photogates suggested that the sensitivity of 1-finger pixels dropped by ~22%, implying that the open areas collected 62% of the photocarriers. The sensitivity of 3- and 5-finger pixels increased by ~33-49% over standard, with the open areas collecting ~170-290% more photocarriers owing to potential wells created by fringing fields. These results indicate that at least 66% of the incident light is absorbed by the polysilicon gate. In spectral response measurements, multi-fingered pixels showed increased sensitivity at red (631 nm), yellow (587 nm), and green (571 nm) wavelengths but a relative decline in the blue (470 nm), possibly due to greater absorption in the silicon nitride insulator layers. Some silicon nitride (SixNy) compositions have higher absorption coefficients in the blue than polysilicon and thus may dominate the absorption in these photogate structures. Extended analysis of potential well formation in the multi-fingered photogates was performed using 2-D device simulation. Simulated multi-fingered photogates showed that the strength of the fringing field increases as the open-area spacing between poly fingers decreases, with the 5-finger design having a nearly uniform depletion region over the entire photogate area.
As pixels shrink in CMOS detectors, scaling effects could lead to issues with existing color filters because of their thickness (approximately 1 μm).
In this paper, we investigate a new generation of filters that are potentially thinner by roughly an order of magnitude. Several years ago [1], nanometric metallic gratings were found to have very unusual transmission properties; in particular, unexpectedly high transmission was measured at specific wavelengths. Since these first experiments, studies have shown that very different optical processes can be responsible for these resonant transmissions, depending on the geometry of the grating. This has led to the demonstration of the first applications in imaging [2]. With our designs, we show that such components are suitable as RGB color filters.
We first discuss the theoretical performance and the integration of these components through modeling with rigorous electromagnetic techniques (RCWA and FDTD) in 2D and 3D. As an example, we evaluate the impact of pixelization, fabrication errors, and illumination conditions on filter performance. Using an algorithm that optimizes the color correction matrix, we show that satisfactory color rendering is achieved (ΔE = 4.3).
In the second part we fabricate samples: Al layers are sputtered on glass substrates and structured with focused ion beam technology, which provides approximately 30 nm resolution. These structures are compatible in size with pixel dimensions (1.5 μm × 1.5 μm) and are tested with a dedicated micro-spectrometer.
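The color-correction step behind the quoted ΔE = 4.3 is not spelled out; a common approach, sketched here under the assumption of a simple least-squares fit over training patches, is to solve for a 3 × 3 matrix mapping the filters' raw RGB responses to target values and then score the residual (a ΔE figure would be computed after conversion to CIELAB). The patch data below are simulated placeholders.

```python
import numpy as np

def fit_ccm(raw_rgb, target_rgb):
    """Least-squares 3x3 color correction matrix M so that target ~ raw @ M.T.
    raw_rgb, target_rgb: (N, 3) arrays of linear patch colors."""
    X, *_ = np.linalg.lstsq(raw_rgb, target_rgb, rcond=None)
    return X.T

# Hypothetical example: random chart patches seen through an assumed crosstalk.
rng = np.random.default_rng(0)
target = rng.uniform(0.05, 0.95, (24, 3))
crosstalk = np.array([[0.70, 0.20, 0.10],
                      [0.15, 0.70, 0.15],
                      [0.10, 0.25, 0.65]])           # assumed filter crosstalk
raw = target @ crosstalk.T + rng.normal(0.0, 0.01, target.shape)

M = fit_ccm(raw, target)
corrected = raw @ M.T
print("RMS residual:", np.sqrt(np.mean((corrected - target) ** 2)))
```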
A spectral imaging system comprising a 39-Mpixel monochrome camera, LED-based narrowband illumination, and acquisition/control software has been designed for investigations of cultural heritage objects. Notable attributes of this system, referred to as EurekaVision, include a streamlined workflow, flexibility, provision of well-structured data and metadata for downstream processing, and illumination that is safer for the artifacts. The system design builds upon experience gained while imaging the Archimedes Palimpsest and has been used in studies of a number of important objects in the Library of Congress collection. This paper describes practical issues addressed in the design of EurekaVision so that key research questions about fragile and unique cultural objects can be studied over a range of spectral bands. The system is intended to capture important digital records for access by researchers, professionals, and the public. It was first used for spectral imaging of the 1507 world map by Martin Waldseemueller, the first printed map to reference "America," and was also used to image sections of the 1516 Carta Marina map by the same cartographer for comparative purposes. An updated version of the system is now being utilized by the Preservation Research and Testing Division of the Library of Congress.
We present the development of a charged-particle pixel-array imager operating in single-hit detection mode. This challenging imaging device, called GigaTracker (GTK) and developed for the CERN NA62 experiment, is intended to measure the position and arrival time of a pion/kaon beam with an intensity of 10⁸ cm⁻² s⁻¹. Each particle hit is detected in position with a precision of ~100 μm and in time with a precision of 100 ps (rms). The readout pixel ASIC, in 130 nm CMOS technology, comprises an array of 1800 pixels of 300 × 300 μm² and is connected via bump bonding to an ultrafast solid-state pixel sensor array matching the same pixel pattern. The requirements of high rate, high timing precision, and extreme pixel-channel sensitivity (discriminating input signals with a charge as low as 1 fC, about 6000 electrons) are very demanding. Two imaging readout architectures are presented: one with an in-pixel analog-based time-to-digital converter (TDC), the other with a digital-based TDC placed at the end of each column. The design of two pixel discriminators, a constant-fraction discriminator and a time-over-threshold discriminator with a time-walk correction technique, will be discussed, along with the design challenges of the two architectures.
Due to the high particle intensity, a large data flow has to be handled; only data from pixels that have actually been hit are buffered on chip and automatically transferred off chip. The data rate per chip is 4 Gbit/s. We present circuit design challenges and SPICE simulations of both approaches.
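The time-walk correction applied to the time-over-threshold discriminator is only named above; one standard form, sketched below with assumed calibration data, subtracts a ToT-dependent delay learned from test pulses of known injection time.

```python
import numpy as np

# Hypothetical calibration: leading-edge delay (time walk) measured for test
# pulses of known injection time, as a function of the time over threshold.
tot_cal  = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])   # ns, assumed
walk_cal = np.array([2.0, 1.2, 0.7, 0.4, 0.2, 0.1])         # ns, assumed

# Time walk grows for small signals, so fit it against 1/ToT.
coeff = np.polyfit(1.0 / tot_cal, walk_cal, deg=2)

def correct_timestamp(t_leading_ns, tot_ns):
    """Leading-edge timestamp corrected for time walk using the ToT value."""
    walk = np.polyval(coeff, 1.0 / tot_ns)
    return t_leading_ns - walk

print(correct_timestamp(1000.0, 12.0))   # corrected hit time for a small pulse
```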
The WIYN One Degree Imager (ODI) will provide a one-degree field of view for the WIYN 3.5 m telescope located on Kitt Peak near Tucson, Arizona. Its focal plane will consist of an 8 × 8 grid of Orthogonal Transfer Array (OTA) CCD detectors with nearly one billion pixels. The implementation of these detectors into the focal plane has required the development of several novel packaging and characterization techniques, which are the subject of this paper. We describe a new packaging/hybridization method in which the CCD die are directly bonded to aluminum nitride ceramic substrates that have indium bumps on one side and brazed pins on the other. These custom packages allow good thermal conductivity, a flat imaging surface, four-side buttability, and in situ testing of the devices during backside processing. We describe these carriers and the backside processing techniques used with them. We have also modified our cold probing system to screen the OTA die at wafer level and select the best candidates for backside processing. We describe these modifications and characterization results from several wafer lots.
A new 5.5 μm pixel interline transfer CCD technology platform has been developed that offers significant improvements in performance while retaining the dynamic range, quantum efficiency, and responsivity available from the previous generation 7.4 μm pixel. Smear has been reduced to -100 dB, and a new quad-output architecture increases the maximum frame rate up to 120 fps for a 1 MPix sensor. This technology is now being deployed across a family of image sensors that share a common package and pin-out, facilitating faster camera design and product commercialization.
For CMOS image sensors with pixel sizes under 3 μm, a pixel architecture in which several photodiodes share a floating diffusion and transistors tends to be adopted in order to improve full-well capacity and sensitivity. Despite this advantage, adopting the architecture may introduce a sensitivity imbalance between the shared photodiodes. In images reproduced from a shared-pixel architecture, the sensitivity imbalance between the Gr and Gb photodiodes of the Bayer CFA is often conspicuous because it results in horizontal pattern noise. We developed a 3.2-Mpixel CMOS image sensor with a 2.2 μm pixel and low Gr/Gb sensitivity imbalance. The pixel structure is carefully optically designed to prevent light diffraction within the pixel. Based on simulation results, the read transistor gate of pixels with a red color filter is given an edgeless layout, because longer-wavelength light is incident on the red pixels. For the optical design, electromagnetic simulation was used because wave-optical effects cannot be ignored for small pixels below 3 μm. The Gr/Gb sensitivity imbalance was measured for both the developed sensor and a conventional sensor across the visible range; for the developed sensor it was below the measurement limit.
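For reference, the quantity being minimized here can be estimated from a flat-field raw frame; the sketch below assumes an RGGB Bayer ordering and computes the relative difference between the two green sub-lattices.

```python
import numpy as np

def gr_gb_imbalance(raw):
    """Relative Gr/Gb sensitivity difference from a flat-field Bayer raw frame.
    Assumes RGGB ordering: Gr shares rows with R, Gb shares rows with B."""
    gr = raw[0::2, 1::2].astype(float)
    gb = raw[1::2, 0::2].astype(float)
    return 2.0 * (gr.mean() - gb.mean()) / (gr.mean() + gb.mean())

# Hypothetical flat field with a 1% imbalance injected for illustration.
rng = np.random.default_rng(1)
flat = rng.poisson(1000, (480, 640)).astype(float)
flat[0::2, 1::2] *= 1.01
print(f"Gr/Gb imbalance: {gr_gb_imbalance(flat):.3%}")
```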
This paper presents a digital line-scan sensor in standard CMOS technology for high-resolution scanning applications in machine vision, mainly surface inspection of large panels and web materials. Owing to its unprecedented resolution, the sensor also has application potential in earth observation and motion picture contexts. The sensor features 16384 charge-integrating pixels with a 3.5 μm × 3.5 μm photoactive area. Each pixel has its own charge-integrating transconductance amplifier, a true correlated double sampling stage, a sample-and-hold stage, and a pixel-level 13-bit linear AD converter. Readout is performed over 16 parallel digital output taps operated at a 50 MHz pixel clock; at maximum speed the sensor generates a total data rate of 10.4 Gbit/s. To maximize the integration time, data readout, AD conversion, and integration can be performed simultaneously, so even at the maximum line rate of 43 kscans/s the integration time can be maintained at 20 μs. To accommodate application scenarios with very different lighting budgets, the sensor's full-well capacity can be programmed from 3000 e- to 40 ke- by means of a two-step programmable gain. Prototype characterization showed a total quantum efficiency of 72% at 625 nm. With the full-well capacity set to 26 ke-, the conversion gain was measured to be 0.13 DN/e- with a read noise in dark of 1.7 DN, or 12 e- dark-noise equivalent. Overall DSNU is reduced to 3 DN rms, independent of the conversion gain, by the on-chip combination of CDS and digital DSNU correction. PRNU, measured according to the EMVA1288 standard, is 1.2% rms. The sensor is mounted on an Invar-reinforced COB board without a glass cover to reduce reflections in the optical interface stack. Instead of traditional package leads, SMD-mounted board-to-board connectors are used for the electrical connections.
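The noise figures above are tied together by the conversion gain; the sketch below shows the mean-variance (photon transfer) estimate of the gain in DN/e-, as used in EMVA1288-style characterization, and the conversion of a read noise in DN to electrons (1.7 DN at 0.13 DN/e- is on the order of the quoted 12 e-). The statistics used here are simulated placeholders, not measured values.

```python
import numpy as np

def conversion_gain_dn_per_e(mean_signal, var_signal, mean_dark, var_dark):
    """Photon-transfer estimate of conversion gain K [DN/e-]: for a linear
    sensor the shot-noise variance grows as K * (mean_signal - mean_dark)."""
    return (var_signal - var_dark) / (mean_signal - mean_dark)

# Assumed flat-field statistics, chosen only to illustrate the arithmetic.
K = conversion_gain_dn_per_e(mean_signal=2000.0, var_signal=262.0,
                             mean_dark=100.0, var_dark=15.0)
read_noise_dn = 1.7
print(f"K = {K:.3f} DN/e-, read noise = {read_noise_dn / K:.1f} e-")
```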
The paper describes the Near Sensor Image Processing (NSIP) paradigm developed in the early 1990s and shows that it
was a precursor to recent architectures proposed for direct (in the sensor) image processing and high dynamic range
(HDR) image sensing. Both of these architectures are based on the specific properties of CMOS light sensors, in
particular the ability to continuously monitor the accumulation of photon-induced charge as a function of time. We
further propose an extension of the original NSIP pixel to include a circuit that facilitates temporal and spatio-temporal
processing.
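The HDR property referred to above comes from reading a pixel out when its integrated charge crosses a threshold rather than after a fixed exposure; a minimal sketch of that idea, assuming ideal linear integration and ignoring noise, recovers the photocurrent from the crossing time. Threshold and frame time below are assumed values.

```python
import numpy as np

Q_THRESHOLD = 10_000.0   # electrons, assumed comparator threshold
T_MAX = 20e-3            # s, assumed maximum integration time

def estimate_flux(t_cross):
    """Photocurrent (e-/s) from the time at which the integrated charge
    crosses Q_THRESHOLD. Pixels that never cross within T_MAX would instead
    use their end-of-frame charge divided by T_MAX."""
    return Q_THRESHOLD / np.asarray(t_cross, dtype=float)

# Bright pixels cross early, dim pixels late: the usable dynamic range is set
# by the ratio of the longest to the shortest measurable crossing time.
print(estimate_flux([1e-6, 1e-3, 15e-3]))
```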
We have been developing an ultra high definition television (UHDTV) system with a 7,680 horizontal by 4,320 vertical pixel resolution and a 60 Hz frame rate. This system, which is called Super Hi-vision (SHV), is expected to serve the next generation of broadcasting services. We have just completed the world's first imaging equipment capable of capturing video at full SHV resolution. In designing this equipment, we decided to develop three new devices, taking into account the camera performance and the ease of implementation. First, we developed a 33-megapixel CMOS image sensor; its pixel size of 3.8 μm square retained the dynamic range of the sensor above 60 dB even with a 3-transistor pixel structure. Second, a fixed focal length lens was developed to create an adequate MTF right up to the limiting resolution of the sensor. Third, we developed a signal-processing device capable of handling 72 Gbps signals and of cascading boards to expand the processing. SHV images with a modulation of 20% at the Nyquist frequency were obtained by using these three key technologies.
A CCD is a continuum of MOS capacitors, so its large capacitance is one of its major disadvantages compared with CMOS image sensors, causing not only large power dissipation but also other problems such as electromagnetic interference (EMI). A single-layer-electrode CCD is one way to reduce CCD capacitance compared with the conventional two-layer electrode structure.
Meanwhile, image scanning systems using linear image sensors are moving from lens-reduction optics to contact-type optics, because contact-type systems are smaller than lens-reduction systems. Image sensors for contact optics require a much longer CCD pitch, which makes charge transfer more difficult than in short-pitch CCDs.
We have developed a CCD linear image sensor, called "Gratron," with a gradual-potential-channel CCD for the purpose of accelerating charge transfer in a long-channel single-layer CCD. The CCD is driven by a two-phase clock and is fabricated with single-layer poly-Si electrodes that have wider electrode gaps and a longer electrode channel length.
In a sensor with a 21 μm pitch linear pixel array and a single-sided CCD register, high charge transfer efficiency (>99%) is obtained at 25 MHz and a small CCD capacitance is realized.
Dark current for back-illuminated (BI) charge-coupled-device (CCD) imagers at Lincoln Laboratory has historically been higher than for front-illuminated (FI) detectors. This is presumably due to high concentrations of unpassivated dangling bonds at or near the thinned back surface caused by wafer thinning, inadequate passivation and low quality native oxide growth. The high dark current has meant that the CCDs must be substantially cooled to be comparable to FI devices. The dark current comprises three components: frontside surface-state, bulk, and back surface. We have developed a backside passivation process that significantly reduces the dark current of BI CCDs. The BI imagers are passivated using molecular beam epitaxy (MBE) to grow a thin heavily boron-doped layer, followed by an annealing step in hydrogen. The frontside surface state component can be suppressed using surface inversion, where clock dithering reduces the frontside dark current below the bulk. This work uses surface inversion, clock dithering and comparison between FI and BI imagers as tools to determine the dark current from each of the components. MBE passivated devices, when used with clock dithering, have dark current reduced by a factor of one hundred relative to ion-implant/laser annealed devices, with measured values as low as 10-14 pA/cm2 at 20°C.
Residual Bulk Image ("RBI") was evaluated in a recently manufactured large-format CCD (KAF09000). Operating at -20 °C, RBI was observed more than four hours after an image exposure. A number of parameters were measured in an engineering-grade CCD, including charge trap capacity, filled-trap leak-out rate, and total trap exhaustion time, for temperatures ranging from +10 °C to -30 °C. A NIR Flood/Flush/Integrate protocol was tested as a candidate to eliminate the RBI by pre-filling the traps, and it was found effective at eliminating the RBI as well as the amplifier luminescence observed in non-flooded test exposures. It was also found that the leakage from the pre-filled traps greatly exceeded the thermal dark current over the tested temperature range, leading to an increase in the overall dark shot noise. Deep cooling is required to suppress this additional dark noise component. An Arrhenius plot was used to project the required cooling levels as a function of exposure time for various read noise limits. For half-hour exposures with a target 5 e- read noise limit, an operating temperature of -87.8 °C was indicated. A plot of the maximum operating temperature as a function of exposure time was created from the data.
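The cooling projection follows from requiring that the shot noise of the trap leakage accumulated over the exposure stay within the read-noise budget; the sketch below illustrates that style of Arrhenius extrapolation with placeholder parameters (the activation energy, reference rate, and resulting temperature are assumptions, not the measured values).

```python
import numpy as np

K_B = 8.617e-5     # eV/K
E_A = 1.1          # eV, assumed activation energy (silicon band-gap scale)
D_REF = 1000.0     # e-/pixel/s at T_REF, assumed trap leakage rate
T_REF = 293.0      # K

def leak_rate(temp_k):
    """Arrhenius scaling of the leakage rate relative to the reference point."""
    return D_REF * np.exp(-E_A / K_B * (1.0 / temp_k - 1.0 / T_REF))

def max_operating_temp(exposure_s, noise_budget_e,
                       temps=np.arange(150.0, 300.0, 0.1)):
    """Warmest temperature at which sqrt(rate * t_exp) stays under the budget."""
    ok = np.sqrt(leak_rate(temps) * exposure_s) <= noise_budget_e
    return temps[ok].max() - 273.15 if ok.any() else None

print(max_operating_temp(exposure_s=1800.0, noise_budget_e=5.0), "degC (toy numbers)")
```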
This paper is a follow-up to the paper presented at SPIE Electronic Imaging Science and Technology in San Jose, 2007,
"Characterization and system modeling of a 5-Mpixel CMOS array."
We expand and refine the test methodologies used in the characterization and selection of CMOS arrays targeting megapixel security camera applications. This paper presents work in the following areas: system gain, gain noise, binning noise, F-number response, system modeling, and temperature effects. Since security cameras must operate under harsh temperature extremes, performance under these conditions must be understood. Characterizations are made for dark current, DSNU, hot pixels, clusters, temporal noise, and spatial noise.
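Several of the listed quantities (temporal noise, spatial noise, DSNU) can be separated from a stack of frames taken under constant conditions; a minimal sketch of that separation, with simulated frames standing in for real captures, is shown below.

```python
import numpy as np

def noise_split(stack):
    """stack: (N, H, W) frames taken under constant illumination and temperature.
    Temporal noise: rms over pixels of each pixel's standard deviation in time.
    Spatial (fixed-pattern) noise: standard deviation over pixels of the
    per-pixel temporal mean (for dark frames this is essentially the DSNU)."""
    temporal = np.sqrt(np.mean(stack.var(axis=0, ddof=1)))
    spatial = stack.mean(axis=0).std(ddof=1)
    return temporal, spatial

# Simulated stack: 2.0 DN temporal noise plus 0.5 DN of fixed-pattern offset.
rng = np.random.default_rng(2)
frames = rng.normal(100.0, 2.0, (32, 64, 64)) + rng.normal(0.0, 0.5, (64, 64))
temporal, spatial = noise_split(frames)
print(f"temporal {temporal:.2f} DN, spatial {spatial:.2f} DN")
```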
Digital single-lens reflex (DSLR) cameras are examined and their dark current behavior is presented. We examine the
influence of varying temperature, exposure time, and gain setting on dark current. Dark current behavior unique to
sensors within such cameras is observed. In particular, heat is trapped within the camera body resulting in higher
internal temperatures and an increase in dark current after successive images. We look at the possibility of correcting for
the dark current, based on previous work done for scientific grade imagers, where hot pixels are used as indicators for
the entire chip's dark current behavior. Standard methods of dark current correction are compared to computed dark
frames. Dark current is a concern for DSLR cameras as optimum conditions for limiting dark current, such as cooling the
imager, are not easily obtained in the typical use of such imagers.
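One way to use hot pixels as indicators, along the lines described above, is to rescale a previously acquired master dark frame so that it matches the hot pixels in the current exposure; the sketch below is a hypothetical illustration of that scaling (simulated data, assumed selection threshold), not the specific correction studied in the paper.

```python
import numpy as np

def scale_master_dark(image, master_dark, hot_mask):
    """Estimate the dark frame for the current shot by rescaling a master dark
    so that it matches the hot pixels (strong dark-current indicators). In a
    scene image one would restrict to hot pixels whose dark signal dominates
    any scene signal at those locations."""
    scale = np.median(image[hot_mask] / np.maximum(master_dark[hot_mask], 1e-6))
    return image - scale * master_dark

rng = np.random.default_rng(3)
master = rng.gamma(2.0, 5.0, (100, 100))            # assumed master dark (DN)
hot = master > np.percentile(master, 99.5)          # assumed hot-pixel selection
warmer = 1.8 * master + rng.normal(0.0, 1.0, master.shape)  # same chip, hotter body
print("median residual:", float(np.median(scale_master_dark(warmer, master, hot))))
```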
There is an emerging interest in vertically-integrated CMOS (VI-CMOS) image sensors. This trend arises from
the difficulty in achieving high SNR, high dynamic range, and high frame rate with planar technologies while
maintaining small pixel sizes, since the photodetector and electronics have to share the same pixel area and
use the same technology. Fabrication methods for VI-CMOS image sensors add new degrees of freedom to
the photodetector design. Having a model that gives a good approximation to the behavior of a device under
different operating conditions is important for device optimization. This work presents a new approach in
photodetector modeling, and uses it to optimize the thickness of the photosensitive layer in VI-CMOS image
sensors. We consider a simplified structure of an a-Si:H photodetector, and develop an analytical solution and a
numerical solution to state equations taken from semiconductor physics, which are shown to be comparable. If
the photosensitive layer is too thin, our model shows that the contact resistances dominate the device and, if it
is too thick, most charge carriers recombine on their way to the contacts. Therefore, an optimum thickness can
be found.
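The qualitative trade-off in the last sentences (too thin: the contacts dominate; too thick: carriers recombine before collection) can be illustrated with a toy one-dimensional model; this is not the authors' analytical or numerical solution, just an assumed absorption-times-collection product whose maximum gives an optimum thickness.

```python
import numpy as np

ALPHA = 5.0e4     # 1/cm, assumed a-Si:H absorption coefficient in the visible
L_COLL = 0.5e-4   # cm, assumed effective collection length before recombination

def relative_response(d_cm):
    """Toy model: fraction of light absorbed times fraction of carriers collected."""
    absorbed = 1.0 - np.exp(-ALPHA * d_cm)
    collected = np.exp(-d_cm / L_COLL)
    return absorbed * collected

d = np.linspace(0.001e-4, 3e-4, 2000)     # 1 nm .. 3 um
print(f"optimum thickness ~ {d[np.argmax(relative_response(d))] * 1e4:.2f} um")
```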
Superconducting single-photon detectors (SSPDs) have emerged in recent years as a promising alternative for fast and sensitive infrared detection in photon-counting mode. In particular, these detectors combine very low dark count rates (below 1 Hz), high speed (above 1 GHz), photon-number resolution, and reasonable quantum efficiency (10% at telecom wavelengths). They have already found applications in quantum cryptography systems and integrated-circuit failure analysis, but could also be used as ultimate sensors in matrix configurations. We show here the optimization of SSPD fabrication and their optical metrology at CEA. The SSPDs are fabricated by patterning an 80 nm wide nanowire in a very thin (4 nm) NbN film on sapphire, forming a pixel several microns in size. A cryogenic, all-fibered optoelectronic system has been developed that allows precise metrology of the optical performance of the SSPDs. When biased near the critical current of the nanowire, we demonstrate a detection quantum efficiency of 8% at 1.55 μm, which is also found to be strongly polarization dependent. Since this quantum efficiency is limited by optical absorption, we propose a prism-coupling-based absorption-enhancing structure that allows 100% quantum efficiency to be reached.
There is a growing need in industrial and scientific research applications for very wide dynamic range, high-sensitivity imaging methods. To fill this need, an advanced back-illuminated avalanche photodiode (APD) design is presented based on crystallographically etched (100) epitaxial silicon on R-plane sapphire (SOS), enabling large single-photon-sensitive, solid-state focal plane arrays with very wide dynamic range. The design will enable reliable and low-cost fabrication. When (100) silicon is properly etched with a KOH:IPA:H2O solution through a thermally grown oxide mask, square-based pyramidal frustum or mesa arrays result, with the four mesa sidewalls of the APD formed by (111) silicon planes that intersect the (100) planes at a crystallographic angle of 54.7 degrees. The APD is fabricated in the silicon mesa. Very wide dynamic range operation at high quantum efficiencies, exceeding 50% for 250 nm < λ < 400 nm and exceeding 80% for 400 nm < λ < 700 nm, is enabled by a carefully optimized p-type boron spatial impurity profile, fabricated by high-temperature diffusion. Monte Carlo modeling confirms that the majority of electron-hole pairs are photogenerated outside of the narrow device multiplication region, resulting in a very low APD excess noise factor.
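The 54.7-degree sidewall angle quoted above is simply the angle between the (111) and (100) silicon planes, arccos(1/√3); a one-line check:

```python
import math
print(math.degrees(math.acos(1 / math.sqrt(3))))   # ~54.7356, the (111)/(100) angle
```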
We present a day-and-night MOS imager based on a single-plate on-chip interference color filter. The filter comprises periodic multiple layers of TiO2 and SiO2, with an intermediate color-selection layer (SiO2) that disturbs the period of the layers, analogous to a "defect" layer in a one-dimensional photonic crystal. A particular advantage of this filter is the flexibility to design the spectral profile of each color. Thus, one unit cell of the present MOS imager is designed to have three multispectral pixels (R+IR, G+IR, B+IR) and one IR-dedicated pixel, which could never be realized with ordinary pigment materials. Daytime color image signals are obtained by subtracting the IR pixel signal, as a reference, from each of the R+IR, G+IR, and B+IR pixel signals. Nighttime black-and-white imaging is simply realized by using the IR components of all the pixels as brightness signals. This enables seamless switching between day and night operation of a camera. Although the subtraction operation usually reduces the dynamic range (DR) and signal-to-noise ratio (SNR), particularly at low color temperatures, we overcome these issues by employing a new design scheme for the color filter comprising double defect layers for each visible pass band and narrow IR pass bands for the common IR components. As a result, degradations in SNR and DR are suppressed even at low color temperatures, enabling daytime imaging over a wide range of color temperatures from 2300 K to 6500 K.
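The day/night signal chain described above reduces to simple per-unit-cell arithmetic; a minimal sketch, assuming the four sub-pixel planes have already been separated into equally sized arrays, is shown below.

```python
import numpy as np

def day_rgb(r_ir, g_ir, b_ir, ir):
    """Daytime color: subtract the dedicated IR pixel, used as a reference,
    from each of the R+IR, G+IR and B+IR planes."""
    return np.stack([r_ir - ir, g_ir - ir, b_ir - ir], axis=-1).clip(min=0)

def night_luma(r_ir, g_ir, b_ir, ir):
    """Nighttime B/W: with negligible visible light, every sub-pixel measures
    essentially IR, so all four planes contribute to the brightness signal."""
    return (r_ir + g_ir + b_ir + ir) / 4.0
```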
A novel technique is proposed for acquiring time-sequential images within the frame-capture period of an ordinary CMOS image sensor using a multi-lens optical system. In addition, this system is applied to high-speed three-dimensional measurement based on a spatial encoding method. In the proposed image capturing system, multiple lenses are arranged rhomboidally over a CMOS image sensor so that multiple images of one object are detected. Because pixel values in the CMOS image sensor are read out sequentially, the multiple images are captured at different times. In the spatial encoding method, several optical patterns are projected onto the measured object. Multiple images, each with a different pattern projected onto the object, can be obtained within one frame period by dividing the captured image and reconstructing images from the rearranged parts. The principle of the proposed method was verified by three-dimensional measurement with CMOS compound-eye image capture. The size of the three-dimensional image was 360 × 270 pixels, the number of distance steps was 64, and the measurement time was 1/15 second.
We present the design and discuss in depth the operating conditions of a two-dimensional (2-D) imaging array of single-photon detectors that provides a total of 1024 pixels, laid out as a 32-row by 32-column array, integrated within a monolithic silicon chip with dimensions of 3.5 mm × 3.5 mm. We employed a standard high-voltage 0.35 μm CMOS fabrication technology, with no need for any custom processing.
Each pixel consists of one Single-Photon Avalanche Diode (SPAD) and compact front-end analog electronics followed by digital processing circuitry. The in-pixel front-end electronics senses the ignition of the avalanche, quenches the detector, provides a pulse, and restores the detector for detecting a subsequent photon. The processing circuitry counts events (both photon-triggered and unwanted "noise" ignitions) within user-selectable integration time slots and stores the count in an in-pixel memory cell, which is read out at 10 ns/pixel. This two-level pipeline architecture allows the current frame to be acquired while the previous one is read out, thus achieving a very high free-running frame rate with negligible inter-frame dead time. Each pixel is therefore a completely independent photon counter. The measured photon detection efficiency (PDE) tops 43% at 5 V excess bias, while the dark count rate (DCR) is below 4 kcps (counts per second) at room temperature. The maximum frame rate depends on the system clock; with a 100 MHz system clock we achieved a free-running rate of 100 kframe/s from all 1024 pixels.
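The quoted frame rate follows directly from the pipelined readout figures; the arithmetic below simply checks that 1024 pixels read at 10 ns/pixel support on the order of 100 kframe/s when readout overlaps the next integration.

```python
pixels = 32 * 32
readout_per_pixel_ns = 10
readout_us = pixels * readout_per_pixel_ns / 1000      # 10.24 us per frame
max_frame_rate_kfps = 1e3 / readout_us                 # readout-limited rate
print(f"{readout_us:.2f} us/frame -> {max_frame_rate_kfps:.0f} kframe/s")
```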
This paper presents a digital image sensor SOC with a total chip area (including dicing tolerances) of 0.34 mm² for endoscopic applications. This extremely small form factor enables integration into endoscopes, guide wires, and locator devices of less than 1 mm outer diameter. The sensor embeds a matrix of 10,000 pixels with a pitch of 3 μm × 3 μm, covered with RGB filters in a Bayer pattern. The sensor operates fully autonomously, controlled by an on-chip ring oscillator and a readout state machine that controls integration, AD conversion, and data transmission, so the sensor requires only 4 pins for power supply and data communication. It provides a frame rate of 40 frames per second over an LVDS serial data link. The endoscopic application requires that the sensor work without any local power decoupling capacitance at the end of up to 2 m of cabling and sustain data communication over the same wire length without deteriorating image quality. This has been achieved by implementing a current-mode successive approximation ADC and current-steering LVDS data transmission. A bandgap circuit with -40 dB PSRR at the data frequency was implemented as the on-chip reference to improve robustness against power supply ringing caused by the high series inductance of the long cables. The B&W version of the sensor provides a conversion gain of 30 DN/nJ/cm² at 550 nm with a read noise in dark of 1.2 DN when operated over a 2 m cable. Using the photon transfer method according to the EMVA1288 standard, the full-well capacity was determined to be 18 ke-. To our knowledge, the presented work is currently the world's smallest fully digital image sensor. The chip was designed along with an aspheric single-surface lens that is assembled directly on the chip without increasing the form factor. The extremely small form factor of the resulting camera provides visualization with much higher spatial resolution than the state of the art in sub-1 mm endoscopic applications, where so far only optical fiber bundles providing 1k-3k image points could be used. In many applications, such as guide wires and locator devices, the small form factor makes visualization possible for the first time.
This paper describes an optical tracking method developed using Wii Remotes. The motion and rotation of an object with markers and an accelerometer are tracked using the optical sensors in two Remotes. Initialization is complicated by the nature of the Wii Remote's sensor: while it can track up to four points, there is no trivial means to uniquely identify an individual marker. To resolve this ambiguity, a brute-force approach is used: all possible correspondences are considered, using the Remotes' and the tracked object's accelerometers as inclination sensors to identify the potentially correct ones. The resulting method is applied to create a largely self-calibrating six-degree-of-freedom input device. We also provide documentation sufficient for others to use the Wii Remote in research.
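The brute-force correspondence step is only outlined above; as a hypothetical illustration (the marker model, cost test, and gravity handling are assumptions, not the paper's exact procedure), the sketch below enumerates all assignments of detected points to markers and keeps those consistent with an inclination-derived vertical ordering.

```python
import itertools
import numpy as np

def plausible_assignments(detections, marker_order_by_height, up_vector):
    """detections: (N, 2) image points from one Wii Remote (N <= 4).
    marker_order_by_height: marker indices sorted from lowest to highest on
    the rigid body. up_vector: projection of 'up' (from the accelerometer)
    into the image plane. Returns the permutations (detection index assigned
    to each marker) whose height ordering matches the expected one."""
    keep = []
    for perm in itertools.permutations(range(len(detections))):
        heights = detections[list(perm)] @ up_vector       # signed height per marker
        if np.all(np.diff(heights[marker_order_by_height]) >= 0):
            keep.append(perm)
    return keep

# Hypothetical example: 3 detected points, camera roughly upright.
pts = np.array([[10.0, 5.0], [12.0, 40.0], [30.0, 22.0]])
print(plausible_assignments(pts, marker_order_by_height=[0, 1, 2],
                            up_vector=np.array([0.0, 1.0])))
```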
The IRMOS (Infrared Multiobject Spectrometer) is a multi-object imaging dispersive spectrometer for
astronomy, with a micromirror array to select desired objects. In standard operation, the mirrors are used to
select multiple compact sources such that their resulting spectra do not overlap on the detector. The IRMOS
can also be operated in a Hadamard mode, in which the spectra are allowed to overlap, but are modulated by
opening the mirrors in many combinations to enable deconvolution of the individual spectra. This mode
enables integral field spectroscopy with no penalty in sensitivity relative to the standard mode. There are
minor penalties in overhead and systematics if there are sky or instrumental drifts. We explain the concept and
discuss the benefits with an example observation of the Orion Trapezium using the 2.1 m telescope at Kitt
Peak National Observatory.
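The Hadamard mode multiplexes many mirror patterns onto the detector and recovers the individual spectra by inverting the known pattern matrix; a minimal sketch using an S-matrix built from a Hadamard matrix (a generic illustration, not the IRMOS mask sequence) is shown below.

```python
import numpy as np
from scipy.linalg import hadamard

def s_matrix(n):
    """S-matrix of order n (n + 1 a power of two): drop the first row/column of
    a Hadamard matrix and map +1 -> 0 (mirror closed), -1 -> 1 (mirror open)."""
    H = hadamard(n + 1)
    return ((1 - H[1:, 1:]) // 2).astype(float)

n = 7                                        # number of mirror slots (assumed)
S = s_matrix(n)
true_flux = np.random.default_rng(4).uniform(0.0, 1.0, n)   # per-slot signal
measured = S @ true_flux + 0.01 * np.random.default_rng(5).normal(size=n)
recovered = np.linalg.solve(S, measured)     # demultiplex the overlapped data
print(np.round(recovered - true_flux, 3))    # small residuals
```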
Authentication using finger vein patterns has substantial advantages over other biometrics: because human vein patterns are hidden inside the skin and tissue, it is hard to forge the vein structure. However, the conventional system using an NIR LED array has two drawbacks. First, direct contact with the LED array raises sanitary concerns. Second, because of the discreteness of the LEDs, the illumination is non-uniform. We propose a non-contact finger vein acquisition system using an NIR laser and a laser line generator lens. The laser line generator lens produces an evenly distributed line of laser light from the focused beam, and the line is aimed along the length of the finger. An NIR camera was used for image acquisition, and 200 index-finger vein images from 20 subjects were collected. The same vein pattern extraction algorithm was used to evaluate the two sets of images. Images acquired with the proposed non-contact system show none of the illumination non-uniformity of the conventional system, and the matching results are comparable. The developed non-contact system can prevent potential cross-contamination of skin diseases and produces uniformly illuminated images; with the benefit of being non-contact, it shows almost equivalent performance to the conventional system.
We demonstrate a static multiplex spectrometer based on a Fabry-Perot interferometric filter for measuring the
mean spectral content of diffuse sources. By replacing the slit of a low-dispersion grating spectrometer with
a Fabry-Perot interferometric filter, we improve the resolving power of the instrument while simultaneously
overcoming the free spectral range limitation of the Fabry-Perot. The resulting instrument is smaller than conventional
spectrometers having the same resolving power. We present experimental results from the spectrometer
using neon lamp, He-Ne laser, and diode laser sources over a wavelength range from 620 nm to 660 nm.