We describe a cloud-based automated-publishing platform that allows third-party developers to embed our software
components into their applications, enabling their users to rapidly create documents for interactive viewing, or
fulfillment via mail or retail printing. We also describe how applications built on this platform can integrate with a
variety of different consumer digital ecosystems, and how we will address the quality and scaling challenges.
Xuemei Zhang, Yuli Gao, C. Brian Atkins, Phil Cheatle, Jun Xiao, Hui Chao, Peng Wu, Daniel Tretter, David Slatter, Andrew Carter, Roland Penny, Chris Willis
The design of a computer-assisted photobook authoring solution continues to be a challenging task, since consumers
want four things from such an application: simplicity, quality, customizability and speed. Our AutoPhotobook solution
preserves all four characteristics, providing high-quality custom photobooks
while keeping complexity and authoring time modest. We leverage both design knowledge and image understanding
algorithms to automate time-consuming tasks like image selection, grouping, cropping and layout. This streamlines the
initial creation phase, so the user is never stuck staring at a blank page wondering where to begin. Our composition
engine then allows users to easily edit the book: adding, swapping or moving objects, exploring different page layouts
and themes, and even dynamically adjusting the aspect ratio of the final book. Our technologies enable even novice
users to easily create aesthetically pleasing photobooks that tell their underlying stories. AutoPhotobook provides
advances over prior solutions in the following areas: automatic image selection and theme-based image grouping;
dynamic page layout including text support; automatic cropping; design-preserving background artwork transformation;
and a simple yet powerful user interface for personalization. In this paper, we present these technologies and illustrate
how they work together to improve the photobook authoring process.
Tone mapping refers to the conversion of luminance values recorded by a digital camera or other acquisition device to the luminance levels available from an output device, such as a monitor or a printer. Tone mapping can improve the appearance of rendered images. Although a variety of algorithms are available, there is little information about the image tone characteristics that produce pleasing images. We devised an experiment in which preferences for images with different tone characteristics were measured. The results indicate that there is a systematic relation between image tone characteristics and perceptual image quality for images containing faces. For these images, a mean face luminance of 46–49 CIELAB L* units and a luminance standard deviation (taken over the whole image) of 18 CIELAB L* units produced the best renderings. This information is relevant for the design of tone-mapping algorithms, particularly since many images taken by digital camera users include faces.
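The preferred tone statistics above can be checked programmatically. The sketch below is an illustration rather than the paper's actual pipeline: it computes CIE 1976 L* from relative luminance and reports the two statistics the experiment relates to preference. The function names and the simple boolean face mask are assumptions.

```python
import numpy as np

def luminance_to_Lstar(Y):
    """CIE 1976 lightness L* from relative luminance Y in [0, 1]."""
    Y = np.asarray(Y, dtype=float)
    eps = 216.0 / 24389.0            # CIE threshold (~0.008856)
    kappa = 24389.0 / 27.0           # ~903.3
    f = np.where(Y > eps, np.cbrt(Y), (kappa * Y + 16.0) / 116.0)
    return 116.0 * f - 16.0

def tone_stats(image_Y, face_mask):
    """The two statistics the experiment relates to preference:
    mean face L* and the L* standard deviation over the whole image."""
    L = luminance_to_Lstar(image_Y)
    return float(L[face_mask].mean()), float(L.std())
```

Under this sketch, a rendering is near the reported optimum when the mean face L* falls in 46–49 and the whole-image standard deviation is near 18. Note that 18% mid-gray (Y = 0.18) maps to L* of about 49.5, just above the preferred face range.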
Many optical inspection systems today can capture surface slope information directly or indirectly. For these systems, it is possible to perform a 3-D surface reconstruction that converts surface slopes to surface heights. Since the slope information obtained in such systems tends to be noisy and sometimes heavily quantized, a noise-tolerant reconstruction method is needed. We used a simple Bayesian reconstruction method to improve noise tolerance, and multi-resolution processing to speed up the calculations. For each resolution level, the surface slopes between pixels are first calculated from the original surface slopes. The height reconstruction for that resolution level is then obtained by solving the linear equations that relate the relative heights of each point to its associated surface slopes. This is done with a Bayesian method, which makes it easy to incorporate prior knowledge about height ranges and noise levels. The reconstructions are done for a small window of pixels at a time at each resolution level to keep the linear systems manageable. The relative height solutions from all resolution levels are then combined to generate the final height map.
This method has been used in optical inspection applications where slope data are quite noisy.
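A minimal 1-D sketch of the height-from-slopes step is shown below. It assumes Gaussian slope noise and a Gaussian prior on the heights, which yields the kind of regularized linear system described above; the prior also pins down the constant height offset that slopes alone cannot determine. This is an illustrative simplification of the paper's windowed, multi-resolution method, not the actual implementation.

```python
import numpy as np

def reconstruct_heights(slopes, noise_var=1.0, prior_var=100.0):
    """MAP height profile from noisy inter-pixel slopes (1-D sketch).

    Model: slopes[i] = h[i+1] - h[i] + noise, noise ~ N(0, noise_var),
    heights ~ N(0, prior_var).  The Gaussian height prior stands in for
    the paper's prior knowledge about height ranges and noise levels.
    """
    n = len(slopes) + 1
    # Finite-difference operator D mapping heights to inter-pixel slopes.
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    # Posterior precision and mean of the Gaussian MAP estimate.
    A = D.T @ D / noise_var + np.eye(n) / prior_var
    b = D.T @ np.asarray(slopes, dtype=float) / noise_var
    return np.linalg.solve(A, b)
```

With low noise variance and a loose prior, the reconstructed height differences reproduce the measured slopes almost exactly; tightening the noise-to-prior ratio trades slope fidelity for robustness, which is the noise-tolerance behavior the abstract describes.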
KEYWORDS: Cameras, Manufacturing, RGB color model, Colorimetry, Digital cameras, Color reproduction, Image quality, Sensors, Visual process modeling, Data analysis
As the fastest-growing consumer electronics device in history, the camera phone has evolved from a toy into a real camera that competes with the compact digital camera in image quality. Because of severe cost and size constraints, one key question remains unanswered for camera phones: how good does the image quality need to be, so that resources can be allocated most efficiently? In this paper, we tried to find the color processing tolerance through a study of 24 digital cameras from six manufacturers under five different light sources. We measured both the inter-brand (across manufacturers) and intra-brand (within manufacturers) mean and standard deviation for white balance and color reproduction. The white balance results showed that most cameras did not follow the complete white balance model. The difference between the captured white patch and the display white point increased as the correlated color temperature (CCT) of the illuminant moved further away from 6500 K. The standard deviation of the red/green and blue/green ratios for the white patch also increased as the illuminant moved further away from 6500 K. The color reproduction results revealed a similar trend for the inter-brand and intra-brand chromatic differences of the color patches. The average inter-brand chromatic difference increased from 3.87 ΔE units for the D65 light (6500 K) to 10.13 ΔE units for the Horizon light (2300 K).
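For readers reproducing the measurements, the two basic quantities in the study, the chromatic difference between patches and the white-patch channel ratios, can be sketched as below. CIE76 ΔE*ab is assumed here for concreteness; the abstract does not specify which ΔE formula was used, and the function names are ours.

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE76 color difference between two CIELAB triples (L*, a*, b*)."""
    return float(np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float)))

def white_balance_ratios(rgb_white):
    """Red/green and blue/green ratios of a captured white patch.
    Both ratios equal 1.0 when the camera maps scene white to display white;
    their spread across cameras is the intra-/inter-brand variation measured."""
    r, g, b = (float(x) for x in rgb_white)
    return r / g, b / g
```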
The commercial success of color sequential displays is limited by the
fact that people perceive multiple color images during pursuit and
saccadic eye movements. We conducted a psychophysical experiment to
quantify visibility of these color artifacts for different
saccadic speeds, display background brightnesses, and target sizes. An InFocus sequential-color projector was placed behind a projection screen to simulate a normal desktop display. Saccadic eye movements were induced by requiring subjects to recognize text targets displayed at two different screen locations in rapid succession. The speed of the saccadic movements was varied by manipulating the distance between the two target locations. A white bar, either with or without a yellow and red color fringe on the right edge, was displayed as subjects moved their eyes for the text recognition task. The two versions of the white bar are not distinguishable when color break-up is present, so performance on this task can be used as a measure of color break-up. The visibility of sequential color break-up decreases with background intensity and with the size of the white target, and increases with saccadic speed.
We conducted a perceptual image preference experiment over the web to find out (1) whether typical computer users have significant variations in their display gamma settings, and (2) if so, whether the gamma settings have a significant perceptual effect on the appearance of images in their web browsers. The digital image renderings used were found to have preferred tone characteristics in a previous lab-controlled experiment. They were rendered with 4 different gamma settings. The subjects were asked to view the images over the web, with their own computer equipment and web browsers, and made pair-wise subjective preference judgements on which rendering they liked best for each image. Each subject's display gamma setting was estimated using a 'gamma estimator' tool, implemented as a Java applet. The results indicated that (1) the users' gamma settings, as estimated in the experiment, spanned a wide range from about 1.8 to about 3.0; and (2) the subjects preferred images that were rendered with a 'correct' gamma value matching their display setting, and disliked images rendered with a gamma value not matching their displays'. This indicates that display gamma estimation is a perceptually significant factor in web image optimization.
This paper describes the design and some preliminary results of a visual study that was conducted over the WWW. The subjects connected to our server over the Internet, and their own computers were controlled with Java software to produce the visual stimuli. By these means we were able to access a large population of subjects at very low cost, and to conduct a large-scale study in a small amount of time. We developed tools and techniques that allowed some degree of calibration of the display and the viewing conditions, so the results obtained from the different subjects could be analyzed together. We found that we could get good estimates of the gamma values and pixel sizes of the subjects' displays. However, we also encountered some problems that may limit the types of experiments that can be conducted over the WWW with the present technology. In particular, we could not consistently control the presentation time of the stimuli, due to inconsistencies between Java implementations on different platforms.
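The display-gamma estimate can be illustrated with the classic half-on dither match: a one-pixel black/white checkerboard emits 50% of the display's maximum luminance, and the solid gray level that visually matches it reveals the display's gamma. This is a sketch of the general technique, not necessarily the exact procedure used by the Java applet described above.

```python
import math

def estimate_gamma(matched_gray, white=255):
    """Display gamma from a half-on dither match (sketch).

    A 1-pixel checkerboard of full black and full white averages to 50%
    of maximum luminance.  If the solid gray level `matched_gray` looks
    equally bright, then (matched_gray / white) ** gamma = 0.5, so
    gamma = log(0.5) / log(matched_gray / white).
    """
    return math.log(0.5) / math.log(matched_gray / white)
```

For example, a subject who matches the dither to gray level 186 on an 8-bit display has an estimated gamma of about 2.2, near the low end of the 1.8 to 3.0 range observed in the experiment.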
Oxide confined VCSELs are being developed at Hewlett-Packard for the next-generation low cost fiber optics communication applications. Compared to the existing 850 nm implant confined VCSELs, the oxide VCSELs have lower operating voltages, higher slope efficiencies, and better modal bandwidth characteristics. Preliminary data on epitaxy and oxidation control uniformity, device performance, and reliability will be discussed.
We describe computational experiments to predict the perceived quality of multilevel halftone images. Our computations were based on a spatial color difference metric, S-CIELAB, an extension of the widely used industry standard CIELAB. CIELAB predicts the discriminability of large uniform color patches; S-CIELAB adds a pre-processing stage that accounts for certain aspects of the spatial sensitivity to different colors. From simulations applied to multilevel halftone images, we found that (a) for grayscale images, L-spacing of the halftone levels resulted in better halftone quality than linear spacing of the levels; and (b) for color images, increasing the number of halftone levels for the magenta ink resulted in the most significant improvement in halftone quality, while increasing the number of halftone levels of the yellow ink resulted in the least improvement.
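The difference between the two grayscale level spacings can be sketched as follows, assuming "L-spacing" means output levels equally spaced in CIE L* lightness and then mapped back to luminance (the function names are ours, not the paper's):

```python
import numpy as np

def linear_levels(n):
    """n halftone output levels equally spaced in luminance (0..1)."""
    return np.linspace(0.0, 1.0, n)

def lstar_levels(n):
    """n levels equally spaced in CIE L* lightness, mapped back to
    luminance via the inverse L* transform.  Levels crowd toward the
    dark end, where lightness perception is most sensitive."""
    L = np.linspace(0.0, 100.0, n)
    f = (L + 16.0) / 116.0
    eps, kappa = 216.0 / 24389.0, 24389.0 / 27.0
    return np.where(f ** 3 > eps, f ** 3, (116.0 * f - 16.0) / kappa)
```

With three levels, for instance, the linear midpoint sits at 50% luminance while the L*-spaced midpoint sits near 18% luminance, allocating more levels to the shadows where quantization steps are most visible.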