This paper presents a perceptual post-filtering coder for digital colour images in the YCrCb colour space. The approach builds on our earlier perceptual coder (PC), which retains most of the Embedded Block Coding with Optimized Truncation (EBCOT) features and is bit-stream compliant with the JPEG2000 standard, and exploits the intra-band and inter-orientation masking properties of the Human Visual System (HVS) to identify, estimate and recover the perceived visual information lost to compression. We use the PC to compress images with some information loss and hence loss of quality. The images are then reconstructed from the compressed bit-stream with the proposed post-filtering technique, which attempts to recover the perceived loss of visual information using an HVS model. Simulation results show that the proposed perceptual post-filtering coder achieves visual performance comparable or superior to that of our PC and to that of the JPEG2000 Verification Model 8.0 coder with both MSE and visual masking.
A preliminary investigation of encoding monochrome ultrasound images with a novel perceptually lossless coder is
presented. Based on the JPEG 2000 coding framework, the proposed coder employs a vision model to identify and
remove visually insignificant/irrelevant information. Current simulation results have shown coding performance
gains over the JPEG-compliant LOCO lossless and JPEG 2000 lossless coders without any perceivable distortion.
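The coefficient-removal step can be pictured as a visibility test on each wavelet coefficient. The following minimal sketch assumes hypothetical per-subband just-noticeable-difference (JND) thresholds in place of the paper's actual vision model; coefficients whose magnitude falls below the threshold are zeroed before entropy coding.

```python
import numpy as np

def remove_invisible_coefficients(subbands, jnd_thresholds):
    """Zero wavelet coefficients whose magnitude lies below the visibility
    threshold of their subband.

    subbands       -- dict mapping a subband label (e.g. 'HL1') to a 2-D
                      array of wavelet coefficients
    jnd_thresholds -- dict mapping the same labels to a scalar JND value
                      (hypothetical numbers; the paper derives these from
                      its vision model)
    """
    filtered = {}
    for label, coeffs in subbands.items():
        t = jnd_thresholds[label]
        # Coefficients deemed invisible are set to zero, which lengthens
        # zero runs and improves the subsequent entropy coding.
        filtered[label] = np.where(np.abs(coeffs) < t, 0.0, coeffs)
    return filtered
```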
A novel Discrete Wavelet Transform (DWT)/Motion Compensation (MC)/Differential Pulse Code Modulation (DPCM) video compression framework is proposed in this paper. Although the Discrete Cosine Transform (DCT)/MC/DPCM framework is the mainstream choice for video coders in industry and international standards, the idea of DWT/MC/DPCM has existed in the literature for more than a decade and is still under investigation. The contribution of this work is twofold. Firstly, Embedded Block Coding with Optimized Truncation (EBCOT) is used here as the compression engine for both intra- and inter-frame coding, providing a good compression ratio and an embedded rate-distortion (R-D) optimization mechanism; this extends the application of EBCOT from still images to video. Secondly, the framework offers a good interface for a Perceptual Distortion Measure (PDM) based on the Human Visual System (HVS), since the Mean Squared Error (MSE) can easily be replaced with the PDM in the R-D optimization. Preliminary results are reported and compared with benchmarks such as MPEG-2 and MPEG-4 version 2. The results demonstrate that, under the specified conditions, the proposed coder outperforms the benchmarks in terms of rate versus distortion.
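The interface between the EBCOT rate-distortion optimization and the PDM can be illustrated by treating the distortion term as a pluggable callback. The sketch below is a generic Lagrangian truncation-point search under that assumption, not the coder's actual implementation; `distortion_fn` may be MSE or an HVS-based perceptual measure.

```python
import numpy as np

def select_truncation_point(candidate_streams, original, decode_fn,
                            distortion_fn, lam):
    """Pick the truncation point minimising the Lagrangian cost
    J = D + lambda * R, where D comes from a pluggable measure.

    candidate_streams -- list of (bitstream, rate_in_bits) truncation options
    decode_fn         -- reconstructs a block from a truncated bitstream
    distortion_fn     -- e.g. mean squared error or a perceptual measure
    lam               -- Lagrange multiplier controlling the R-D trade-off
    """
    best, best_cost = None, float('inf')
    for stream, rate in candidate_streams:
        recon = decode_fn(stream)
        cost = distortion_fn(original, recon) + lam * rate
        if cost < best_cost:
            best, best_cost = (stream, rate), cost
    return best

def mse(a, b):
    # Mean squared error; a perceptual distortion measure (PDM) based on an
    # HVS model could be dropped in here without changing the search.
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.mean((a - b) ** 2))
```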
A new adaptive filter is presented, designed for the restoration of color images corrupted by an arbitrary noise model. The proposed filter is based on order-statistics restoration techniques and performs a linear combination of a number of difference measures to estimate the noise present in each pixel. To reduce color artifacts, the filtering is performed in a color space that decouples the intensity and color information. The choice of color space is arbitrary; however, experimentation has shown that a spherical-coordinate representation of the RGB pixel vectors provides the best results. As the results show, the proposed filter outperforms current state-of-the-art techniques for impulse, additive, and multiplicative noise types. In addition to the proposed filter, a new color performance measure is introduced based on the mean square error; it can, however, be calculated using any error measure that employs differences between two input signals.
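The decoupling of intensity and colour mentioned above corresponds to re-expressing each RGB pixel vector so that its magnitude carries the intensity and its direction carries the chromatic content. The following sketch uses the standard Cartesian-to-spherical formulas and is only an illustration of that representation, not the filter itself.

```python
import numpy as np

def rgb_to_spherical(rgb):
    """Convert an H x W x 3 RGB image to spherical coordinates (rho, theta, phi).

    rho   -- vector magnitude, carrying the intensity information
    theta -- polar angle and phi -- azimuthal angle, together carrying the
             chromatic (directional) information
    """
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    rho = np.sqrt(r**2 + g**2 + b**2)
    theta = np.arccos(np.divide(b, rho, out=np.zeros_like(rho), where=rho > 0))
    phi = np.arctan2(g, r)
    return rho, theta, phi

def spherical_to_rgb(rho, theta, phi):
    """Inverse conversion back to RGB after filtering."""
    r = rho * np.sin(theta) * np.cos(phi)
    g = rho * np.sin(theta) * np.sin(phi)
    b = rho * np.cos(theta)
    return np.stack([r, g, b], axis=-1)
```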
In a general multiple-video-object coder, the more interesting objects, such as a speaker or a moving object, are consistently coded with higher priority. Since the priority of each object may not be fixed over the whole sequence and may vary on a frame-by-frame basis, it must be adjusted in each frame. In this paper, we analyze the independent rate control algorithm and the global algorithm, in which the QP value is controlled by static parameters: object importance or priority, the target PSNR, and the weighted distortion. The priority among the static parameters is analyzed and converted into dynamic parameters according to the visual interest or importance obtained through a camera interface. The target PSNR and the weighted distortion are made proportional to object magnitude, motion, and distortion. We apply these parameters in the weighted distortion control and the priority-based control, leading to an efficient bit-rate distribution. As a result, fewer bits are allocated to video objects of lower importance and more bits to those of higher visual importance. The period needed to reach stable visual quality is reduced to fewer than 15 frames of the coded sequence. With respect to PSNR, the proposed scheme shows quality more than 2 dB higher than the conventional schemes. Thus the coding scheme, interfaced to the human eye, proves to be an efficient video coder for multiple video objects.
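The priority-based bit distribution can be pictured as splitting the frame budget in proportion to a per-object weight. The sketch below uses a hypothetical combination of importance, magnitude, motion and previous distortion as that weight; it illustrates the allocation principle rather than the paper's exact formulation.

```python
def allocate_object_bits(frame_budget, objects):
    """Distribute a frame's bit budget across video objects in proportion
    to a combined weight.

    objects -- list of dicts with hypothetical fields: 'importance' (dynamic
               priority, e.g. from the camera interface), 'size', 'motion'
               and 'distortion' of the previously coded frame
    """
    weights = []
    for obj in objects:
        # Weight grows with visual importance, object magnitude, motion
        # activity and accumulated distortion (illustrative combination).
        w = obj['importance'] * (1.0 + obj['size'] + obj['motion'] + obj['distortion'])
        weights.append(w)
    total = sum(weights) or 1.0
    return [frame_budget * w / total for w in weights]
```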
This paper proposes a novel postprocessing technique for reducing blocking artifacts in low bit-rate transform coding. Since quantization is applied to the transform coefficients during encoding, the proposed approach works in the transform domain. The masking effect of the human visual system is considered, and an adaptive weighting mechanism is integrated into the postfiltering. In low-activity areas, where the blocking artifacts are perceptually more detectable, a large window is used to smooth out the artifacts efficiently. To preserve image details, a small neighborhood and a large central weight are employed for high-activity blocks, where the blocking artifacts are less noticeable due to the masking of the local background. The quantization constraint is finally applied to the postfiltered coefficients. Experimental results show that the proposed technique provides superior performance to other postprocessing methods in both objective and subjective image quality.
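The adaptive weighting mechanism can be sketched as selecting the window size and central weight from a local activity measure. The code below is a simplified spatial-domain illustration with assumed window sizes and weights; the paper applies the weighting to transform coefficients and re-imposes the quantization constraint afterwards.

```python
import numpy as np

def adaptive_postfilter_block(block, activity, activity_threshold=10.0):
    """Smooth a block with a window whose size and centre weight depend on
    local activity (illustrative values only).

    Low-activity block : large uniform window -> strong smoothing, since
                         blocking artifacts are most visible there.
    High-activity block: small window, dominant centre tap -> details kept,
                         masking hides the residual artifacts.
    """
    if activity < activity_threshold:
        size, centre_weight = 5, 1.0
    else:
        size, centre_weight = 3, 4.0

    kernel = np.ones((size, size), dtype=float)
    kernel[size // 2, size // 2] = centre_weight
    kernel /= kernel.sum()

    # Simple same-size convolution via edge padding (no SciPy dependency).
    pad = size // 2
    padded = np.pad(block.astype(float), pad, mode='edge')
    out = np.zeros(block.shape, dtype=float)
    h, w = block.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * kernel)
    return out
```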
The latest digital image coding technique, known as object-based coding, relies mainly on efficient segmentation algorithms. In this paper, an edge detail detection technique is proposed that can segment an image into perceptually meaningful objects. It is a transform-domain technique that exploits perceptual characteristics of the transform coefficients, such as frequency distribution decomposition and edge structure decomposition, and thus detects the perceptually most significant edge details, which are useful for segmentation in object-based coding and compression. An experiment has also been conducted to test its robustness to image compression.
In this paper, a perceptual blocking distortion metric for block-based transform coded digital video is proposed. The metric is based on a spatio-temporal multichannel vision model to calculate the Just Noticeable Distortion (JND) map. The blocking dominant regions are segmented after the spatio-temporal decomposition and the JNDs in these regions are summed up to form an objective measure of the blocking artifact. Subjective and objective tests have been conducted and the results show a strong correlation between the objective blocking rating and the mean opinion score.
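The summation of JNDs over the blocking-dominant regions can be sketched as a Minkowski pooling of the JND-normalised errors. The code below assumes the JND map and region mask are already computed by the vision model and uses a generic pooling exponent rather than the paper's calibrated value.

```python
import numpy as np

def blocking_rating(error_map, jnd_map, blocking_mask, beta=4.0):
    """Pool JND-normalised distortion over blocking-dominant regions.

    error_map     -- per-pixel distortion between reference and coded frame
    jnd_map       -- per-pixel Just Noticeable Distortion from the vision model
    blocking_mask -- boolean map of regions segmented as blocking-dominant
    beta          -- Minkowski pooling exponent (generic choice)
    """
    e = np.abs(error_map[blocking_mask]) / np.maximum(jnd_map[blocking_mask], 1e-12)
    # Errors below one JND contribute little; supra-threshold errors dominate.
    return float(np.sum(e ** beta) ** (1.0 / beta))
```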
In this paper, an efficient interpolation approach is proposed for deinterlacing within a single frame. On the basis of the edge-based line average (ELA) algorithm, two useful measurements are introduced within the analysis window in order to alleviate misleading decisions in determining the direction in which the interpolation is to be made. By efficiently estimating the directional spatial correlations of neighboring pixels, increased interpolation accuracy is achieved. Additionally, the new method has a simple computational structure and is therefore easy to implement. Extensive simulations conducted on different images and video sequences have shown the efficacy of the proposed interpolator, with significant improvement over previous ELA-based algorithms in terms of both quantitative and perceived image quality.
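The baseline ELA step that the two new measurements refine can be stated compactly: for each missing pixel, compare the directional differences between the lines above and below and average along the direction of smallest difference. The sketch below shows only this baseline, not the proposed refinements.

```python
import numpy as np

def ela_interpolate_line(above, below):
    """Edge-based line average for one missing scan line.

    above, below -- the existing lines directly above and below the line to
                    be interpolated (1-D arrays of equal length)
    """
    n = len(above)
    out = np.zeros(n, dtype=float)
    for x in range(n):
        xl, xr = max(x - 1, 0), min(x + 1, n - 1)
        # Directional differences: left diagonal, vertical, right diagonal.
        d_left  = abs(float(above[xl]) - float(below[xr]))
        d_vert  = abs(float(above[x])  - float(below[x]))
        d_right = abs(float(above[xr]) - float(below[xl]))
        best = min(d_left, d_vert, d_right)
        if best == d_vert:
            out[x] = (float(above[x]) + float(below[x])) / 2.0
        elif best == d_left:
            out[x] = (float(above[xl]) + float(below[xr])) / 2.0
        else:
            out[x] = (float(above[xr]) + float(below[xl])) / 2.0
    return out
```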
For MPEG-II and other hybrid MC/DPCM/DCT-based video coding standards, it is very important to reconstruct the predicted frames from the block motion information. In the case of transmission over unreliable channels, error concealment methods are introduced to recover lost or erroneous motion vectors. In this paper, a novel side motion estimation method is proposed to recover lost motion vectors by selecting from a candidate motion vector set. The outer boundary of the lost block is used to perform motion estimation, and the recovered motion vector is the one that minimizes the squared error of the block boundary pixels between two consecutive frames. The method takes advantage of the fact that most blocks and their boundaries share the same motion direction. It relaxes the boundary-pixel gray-level continuity assumption of traditional boundary-match/side-match approaches, so that a better estimate can be achieved. Overlapped block motion compensation is also incorporated into the proposed method to reduce blocking artifacts. By reducing the number of motion vectors in the candidate set, the performance of the proposed algorithm can be further improved.
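The recovery step can be sketched as a search over candidate motion vectors that minimises the boundary-matching error on the outer boundary of the lost block. The code below follows that description with simplified indexing and omits the overlapped block motion compensation stage.

```python
import numpy as np

def recover_motion_vector(cur_frame, ref_frame, x, y, bs, candidates, margin=1):
    """Choose the candidate motion vector minimising the squared error of the
    pixels just outside the lost block between two consecutive frames.

    (x, y)     -- top-left corner of the lost block, bs -- block size
    candidates -- list of (dx, dy) motion vectors (e.g. from neighbouring blocks)
    margin     -- thickness of the outer boundary used for matching
    """
    def outer_boundary(frame, px, py):
        top    = frame[py - margin:py, px:px + bs]
        bottom = frame[py + bs:py + bs + margin, px:px + bs]
        left   = frame[py:py + bs, px - margin:px]
        right  = frame[py:py + bs, px + bs:px + bs + margin]
        return np.concatenate([b.ravel().astype(float)
                               for b in (top, bottom, left, right)])

    current = outer_boundary(cur_frame, x, y)
    best_mv, best_err = None, float('inf')
    for dx, dy in candidates:
        predicted = outer_boundary(ref_frame, x + dx, y + dy)
        err = float(np.sum((current - predicted) ** 2))
        if err < best_err:
            best_mv, best_err = (dx, dy), err
    return best_mv
```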
This paper presents a performance comparison of different loop filtering techniques in a generic hybrid video coding algorithm. This study will compare the performance of the filtering techniques by integrating each of the loop filters separately into an MPEG-1-compliant codec, and coding a number of video sequences at various bit-rates and motion compensation (MC) accuracies. The performance of the filters will be assessed in terms of the energy of the displaced frame difference. Comparisons are conducted between four filtering techniques: (1) the 1:2:1 loop filter described in ITU Recommendation H.261; (2) an MC-accuracy-dependent 3-tap filter, whose tap weights are based upon a first-order Markov model of the source; (3) a spatially adaptive filter for the blocking effect based on the theory of Projections Onto Convex Sets (POCS); and (4) an anisotropic filter for the reduction of the blocking effect. Our results will examine the effect that traditional low-pass loop filters have on MC prediction quality, and compare this to a POCS-based loop filter. The filtering of only the blocking effect will also provide an indication of the contribution that the blocking effect makes to the overall high-frequency distortions that are reduced by the low-pass loop filters.
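Of the four techniques, the H.261-style loop filter is simple enough to state directly: a separable low-pass filter with 1:2:1 tap weights applied to each 8x8 prediction block, with the taps reduced to 0:1:0 at block edges. The sketch below implements that filter for a single block; the rounding details of the Recommendation are omitted.

```python
import numpy as np

def loop_filter_121(block):
    """Apply the separable 1:2:1 low-pass loop filter of ITU Recommendation
    H.261 to one 8x8 prediction block. Edge rows/columns are left unfiltered
    in the corresponding direction, matching the 0:1:0 taps at block borders."""
    b = block.astype(float)
    out = b.copy()
    # Horizontal pass on interior columns.
    out[:, 1:-1] = (b[:, :-2] + 2.0 * b[:, 1:-1] + b[:, 2:]) / 4.0
    # Vertical pass on interior rows, using the horizontally filtered result.
    tmp = out.copy()
    out[1:-1, :] = (tmp[:-2, :] + 2.0 * tmp[1:-1, :] + tmp[2:, :]) / 4.0
    return out
```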
This paper presents a performance evaluation of loop filtering in a generic hybrid video coding algorithm using the projection onto convex sets (POCS) method. This study will be conducted in terms of both objective and subjective image quality metrics as well as compression gain.
This paper surveys the visual distortions introduced by a compression scheme into the reconstruction of a video sequence. Specifically, the paper will concentrate on systems utilizing motion compensation (MC), differential pulse code modulation (DPCM), and the discrete cosine transform (DCT). Such systems are exemplified by the CCITT H.261 and MPEG standards. In addition to the artifacts that have already been widely considered, such as the 'blocking' and 'mosquito' effects, new classifications of artifacts will be presented. A concise characterization and demonstration of each artifact will be provided. This will include the specification of the distribution of the artifact within the reconstructed frames, and its correlation with the local spatial/temporal features within the sequence. Also noted will be the specific causes of the artifacts, in relation to the MC/DPCM/DCT components. Since the human visual system is typically the final judge of the quality of the reconstructed sequence, it is also important to note the level of severity at which each artifact becomes visually noticeable.
This paper presents a new method, called Spatial Dynamic Motion Compensation (SDMC), which applies the concept of global motion to the coding of motion trajectory information, with the aim of reducing side information in digital video compression. Experimental results for the SDMC algorithm presented in this paper show that, when the video sequences possess significant global translational motion, the new method yields significant reductions in the bit rate of the motion information compared with the method used in MPEG-1, while maintaining comparable reconstructed picture quality.
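The underlying idea of exploiting global motion can be sketched as predicting each block motion vector from an estimated global (camera) translation and coding only the residuals, which are small when global motion dominates. The code below is a generic illustration of this differential strategy, not the SDMC algorithm itself; the median-based global estimate is an assumption.

```python
import numpy as np

def global_motion_estimate(motion_vectors):
    """Estimate global translational motion as the component-wise median of
    the block motion vectors (a simple, outlier-robust choice)."""
    return np.median(np.asarray(motion_vectors, dtype=float), axis=0)

def differential_mv_coding(motion_vectors):
    """Replace each block motion vector by its difference from the global
    motion; with dominant global translation most residuals are near zero
    and cost few bits to entropy-code."""
    g = global_motion_estimate(motion_vectors)
    residuals = [tuple(np.asarray(mv, dtype=float) - g) for mv in motion_vectors]
    return g, residuals
```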