Finding an initial solution for the traveling salesman problem quickly and with reasonable quality is needed both by state-of-the-art local search and by branch-and-bound optimal algorithms. Classical heuristics require O(n²) time, and their quality may be insufficient to keep the efficiency of the local search high. In this paper, we present two fast variants that utilize a Delaunay neighborhood graph. The best methods achieve a 2.66% gap for the Dots datasets and a 15.04% gap for the selected TSPLIB datasets. However, these results were obtained with the slower algorithms. The best of the fast methods achieved a 5.76% gap for the Dots datasets and a 19.07% gap for the selected TSPLIB datasets in 307 milliseconds on average.
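As a rough illustration of how a Delaunay neighborhood graph can be used to build an initial tour quickly, the following Python sketch performs a greedy nearest-neighbor walk restricted to Delaunay edges. It is not the paper's algorithm: the two proposed variants are not detailed in the abstract, and the fallback to the globally nearest unvisited city is an assumption.

```python
# Hedged sketch: greedy nearest-neighbor tour restricted to Delaunay edges.
import numpy as np
from scipy.spatial import Delaunay

def delaunay_greedy_tour(points):
    """points: (n, 2) NumPy array; returns a tour as a list of city indices."""
    n = len(points)
    tri = Delaunay(points)
    # Build adjacency lists from the triangulation simplices.
    neighbors = [set() for _ in range(n)]
    for simplex in tri.simplices:
        for i in simplex:
            for j in simplex:
                if i != j:
                    neighbors[i].add(j)

    visited = np.zeros(n, dtype=bool)
    tour = [0]
    visited[0] = True
    for _ in range(n - 1):
        cur = tour[-1]
        # Prefer the nearest unvisited Delaunay neighbor.
        candidates = [j for j in neighbors[cur] if not visited[j]]
        if not candidates:
            # Fallback (assumption): nearest unvisited city overall.
            candidates = list(np.flatnonzero(~visited))
        dists = np.linalg.norm(points[candidates] - points[cur], axis=1)
        nxt = candidates[int(np.argmin(dists))]
        tour.append(nxt)
        visited[nxt] = True
    return tour
```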
k-Means (KM) is well known for its ease of implementation as a clustering technique. It has been applied to color quantization in RGB, YUV, Lab, hyperspectral, and other color spaces, but this leads to fragmented segments because the pixels are clustered only in the color space without considering connectivity. The problem has been attacked by adding connectivity constraints or by using joint color and spatial features (r, g, b, x, y), which prevent fragmented and nonconvex segments. However, this does not take into account the complexity of the shape itself. The Mumford–Shah model has earlier been used to overcome this problem, but with slow and complex mathematical optimization algorithms. We integrate the Mumford–Shah model directly into the KM context and construct a fast and simple implementation of the algorithm. The proposed approach uses the standard KM algorithm with a distance function derived from the Mumford–Shah model, so that it optimizes both the content and the shape of the segments jointly. We demonstrate by experiments that the proposed algorithm provides better results than comparative methods under various error evaluation criteria. The algorithm is applied to 100 images in the Weizmann dataset and to two remote sensing images.
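A minimal sketch of the idea of plugging a Mumford–Shah-style cost into the k-means loop is given below. The shape term here is approximated by counting 4-neighbors whose current label differs from the candidate label, weighted by a parameter lam; the exact distance function used in the paper may differ.

```python
# Hedged sketch: k-means assignment with a Mumford-Shah-style boundary penalty.
import numpy as np

def ms_kmeans(image, k, lam=1.0, iters=20, seed=0):
    """image: H x W x 3 float array; returns labels (H x W) and centroids (k x 3)."""
    h, w, _ = image.shape
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=(h, w))
    for _ in range(iters):
        # Update step: mean color of each segment (reseed empty clusters).
        centroids = np.array([
            image[labels == c].mean(axis=0) if np.any(labels == c)
            else image.reshape(-1, 3)[rng.integers(h * w)]
            for c in range(k)
        ])
        # Assignment step: color error plus boundary-length penalty.
        color_cost = ((image[:, :, None, :] - centroids[None, None, :, :]) ** 2).sum(-1)
        boundary_cost = np.zeros((h, w, k))
        padded = np.pad(labels, 1, mode='edge')
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            neigh = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            # Count 4-neighbors that would disagree with each candidate label.
            boundary_cost += (neigh[:, :, None] != np.arange(k)[None, None, :])
        labels = np.argmin(color_cost + lam * boundary_cost, axis=2)
    return labels, centroids
```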
Color separation and highly optimized context tree modeling of binary layers have provided the best compression results for color map images, which consist of highly complex spatial structures but relatively few colors. We explore whether this kind of approach works on photographic and palette images as well. The main difficulty is that these images can have a much higher number of colors, and it is therefore much more difficult to exploit spatial dependencies via binary layers. The original contributions of this work include: 1. the application of context-tree-based compression (previously designed for map images) to natural and color palette images; 2. the consideration of four different methods for bit-plane separation; and 3. the extension of the two-layer context to a multilayer context for better utilization of the cross-layer correlations. The proposed combination is extensively compared to state-of-the-art lossless image compression methods.
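For illustration, the sketch below shows two common bit-plane separation schemes, plain binary and Gray-code separation; whether these are among the four methods compared in the paper is not stated in the abstract.

```python
# Hedged sketch: splitting an 8-bit image into binary layers.
import numpy as np

def binary_planes(img):
    """Split an 8-bit image into 8 binary layers, most significant plane first."""
    return [(img >> b) & 1 for b in range(7, -1, -1)]

def gray_code_planes(img):
    """Gray-code the pixel values first, then split into binary layers."""
    gray = img ^ (img >> 1)            # binary-reflected Gray code
    return binary_planes(gray)
```

Gray coding is often preferred because adjacent intensity values then differ in only one bit plane, which keeps the binary layers spatially smoother.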
Map images are composed of semantic layers depicted in arbitrary colors. Color separation is often needed to divide the image into layers for storage and processing. The separation can result in severe artifacts because of the overlapping of the layers. In this work, we introduce a technique to restore the original semantic layers after the color separation. The proposed restoration technique improves the compression performance of the reconstructed layers, compared to the corrupted ones, when compressed by lossless algorithms such as International Telecommunication Union (ITU) Group 4 (TIFF G4), Portable Network Graphics (PNG), Joint Bi-level Image Experts Group (JBIG), and the context tree method. The technique also provides good visual quality of the reconstructed image layers and can therefore be applied to selective layer removal/extraction in other map processing applications, e.g., area measurement.
We propose a fast PNN-based O(N log N) time algorithm for multilevel non-parametric thresholding, where N denotes the size of the image histogram. The proposed PNN-based multilevel thresholding algorithm is considerably faster than optimal thresholding. On a set of 8-16 bits per pixel real images, experimental results also reveal that the proposed method provides better quality than the Lloyd-Max quantizer alone. Since the time complexity of the proposed thresholding algorithm is log-linear, it is applicable to real-time image processing applications.
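The following sketch illustrates PNN-style merging of adjacent histogram bins into m classes using the standard pairwise merge cost. It uses a simple quadratic loop for clarity; the proposed algorithm would instead maintain the merge costs in a heap to reach O(N log N).

```python
# Hedged sketch: PNN-style merging of adjacent histogram bins for thresholding.
import numpy as np

def pnn_thresholds(hist, m):
    """hist: 1D array of bin counts; returns m-1 thresholds as bin indices."""
    # Each cluster is a contiguous run of bins: (count, mean, last_bin_index).
    clusters = [(float(c), float(i), i) for i, c in enumerate(hist) if c > 0]
    while len(clusters) > m:
        # Find the adjacent pair whose merge increases the squared error least.
        costs = [
            (ca * cb) / (ca + cb) * (ma - mb) ** 2
            for (ca, ma, _), (cb, mb, _) in zip(clusters, clusters[1:])
        ]
        i = int(np.argmin(costs))
        (ca, ma, _), (cb, mb, ib) = clusters[i], clusters[i + 1]
        merged = (ca + cb, (ca * ma + cb * mb) / (ca + cb), ib)
        clusters[i:i + 2] = [merged]
    # A threshold lies after the last bin of each cluster except the final one.
    return [last for (_, _, last) in clusters[:-1]]
```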
Mobile devices are increasingly capable of locating the user on the globe by using GPS or network-based positioning. The location is presented to the user in a meaningful context such as a map or an aerial image. We study compression methods for reducing the amount of data required by aerial images. We consider the two well-known lossless graphics file formats GIF and PNG, the compression standards JPEG and JPEG2000, a fractal compressor known as FIASCO, and a commercial wavelet-based method developed for aerial images. It is also expected that the devices support fewer than 256 gray levels. It is therefore possible to obtain further reduction by quantizing the images prior to compression, for example to two bits per pixel, and then applying lossless compression methods such as JBIG, GIF, and PNG. For four-color mobile displays, error-diffusion dithered images approximate the original 8-bit color images quite well. The trade-off in dithering is that the lossless compression ratios decrease. A solution to this might be to store the 8-bit images compressed with a decent lossy compressor such as JPEG2000, and to perform quantization and dithering only at the moment when the image is displayed.
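As an illustration of the quantization-and-dithering step, the sketch below applies Floyd-Steinberg error diffusion to reduce an 8-bit grayscale image to four levels (two bits per pixel). The diffusion kernel and the level placement are assumptions, not taken from the paper.

```python
# Hedged sketch: Floyd-Steinberg error diffusion to four gray levels (2 bpp).
import numpy as np

def dither_2bpp(img):
    """img: 2D uint8 grayscale image; returns an image with values {0, 85, 170, 255}."""
    levels = np.array([0, 85, 170, 255], dtype=float)
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = levels[np.argmin(np.abs(levels - old))]
            out[y, x] = new
            err = old - new
            # Distribute the quantization error to unprocessed neighbors.
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out.astype(np.uint8)
```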
KEYWORDS: Data mining, Databases, Mining, Profiling, Internet, Data processing, Personal digital assistants, Java, Global Positioning System, Zoom lenses
Relevance is the key to defining what data is to be extracted from the Internet. Traditionally, relevance has been defined mainly by keywords and user profiles. In this paper we discuss a fairly untouched dimension of relevance: location. Any navigational information sought by a user anywhere on Earth is evidently governed by his location. We believe that task-oriented data mining of the web, amalgamated with location information, is the key to providing relevant information for personal navigation. We explore the existing hurdles and propose novel approaches to tackle them. We also present naive, task-oriented data mining approaches and their implementations in Java for extracting location-based information. Ad hoc pairing of data with coordinates (x, y) is very rare on the web, but if the same coordinates are converted to a logical address (state/city/street), a wide spectrum of location-based information opens up. Hence, given the coordinates (x, y) on the Earth, the scheme points to the logical address of the user. Location-based information can be picked up either from fixed and known service providers (e.g., Yellow Pages) or from any arbitrary website on the Web. Once the web servers providing information relevant to the logical address are located, task-oriented data mining is performed over these sites, keeping in mind what information is interesting to the contemporary user. Finally, a simple data stream is provided to the user with the information scaled to his convenience. The scheme has been implemented for cities in Finland.
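The coordinate-to-logical-address step can be illustrated with a toy lookup over city bounding boxes, as in the sketch below. The city names and extents are illustrative values only, and a real system would query a reverse-geocoding service or a geographic database instead.

```python
# Hedged sketch: mapping (lat, lon) coordinates to a logical address via a
# hypothetical table of city bounding boxes (illustrative values, not real data).
CITY_BOXES = {
    # name: (min_lat, max_lat, min_lon, max_lon)
    "Joensuu": (62.55, 62.65, 29.70, 29.85),
    "Helsinki": (60.10, 60.30, 24.80, 25.10),
}

def logical_address(lat, lon):
    """Return the first city whose bounding box contains the coordinates, or None."""
    for city, (lat0, lat1, lon0, lon1) in CITY_BOXES.items():
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return city
    return None
```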
A method for lossless compression of large binary images is proposed for applications where spatial access to the image is needed. The method utilizes the advantages of (1) variable-size context modeling in the form of context trees, and (2) forward-adaptive statistical compression. New strategies for constructing the context tree are considered, including a fast two-stage bottom-up approach. The proposed technique achieves higher compression rates and allows dense tiling of images down to 50 × 50 pixels without sacrificing compression performance. It enables partial decompression of large images far more efficiently than standard JBIG.
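A simplified view of context modeling for binary images is sketched below: a fixed 10-pixel causal template is used to estimate the ideal code length with adaptive counters. The paper instead grows variable-size contexts into a tree and uses forward-adaptive (two-pass) statistics, so this is only a baseline illustration.

```python
# Hedged sketch: code-length estimation with a fixed 10-pixel causal context.
import numpy as np

# Relative (dy, dx) offsets of the causal template pixels (JBIG-like layout).
TEMPLATE = [(-2, -1), (-2, 0), (-2, 1),
            (-1, -2), (-1, -1), (-1, 0), (-1, 1), (-1, 2),
            (0, -2), (0, -1)]

def ideal_code_length(img):
    """img: 2D 0/1 array; returns the ideal code length in bits under the model."""
    h, w = img.shape
    padded = np.pad(img, 2)
    counts = np.ones((1 << len(TEMPLATE), 2))   # Laplace-smoothed counters
    bits = 0.0
    for y in range(h):
        for x in range(w):
            ctx = 0
            for dy, dx in TEMPLATE:
                ctx = (ctx << 1) | int(padded[y + 2 + dy, x + 2 + dx])
            pixel = int(img[y, x])
            p = counts[ctx, pixel] / counts[ctx].sum()
            bits += -np.log2(p)
            counts[ctx, pixel] += 1             # adaptive update
    return bits
```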
Two statistical context-based filters are proposed for the enhancement of binary document images for compression and recognition. The Simple Context Filter unconditionally changes the uncommon pixels in low-information contexts, whereas the Gain-Loss Filter changes the pixels conditionally, depending on whether the gain in compression outweighs the loss of information. The evaluation methods and results with some traditional filtering methods are presented. The filtering methods alleviate the loss in compression performance caused by digitization noise while preserving the image quality measured as OCR accuracy. The Gain-Loss Filter approximately reaches the compression limit estimated by compressing the noiseless digital original.
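The Simple Context Filter idea can be sketched as follows: gather per-context statistics in one pass, then flip pixels whose value is the minority in a heavily skewed context. The 8-neighbor template and the dominance threshold are illustrative choices, not those of the paper.

```python
# Hedged sketch of a simple context filter for binary document images.
import numpy as np

OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def simple_context_filter(img, dominance=0.99):
    """img: 2D 0/1 array; flip pixels that are rare within a dominant context."""
    h, w = img.shape
    padded = np.pad(img, 1)
    counts = np.zeros((1 << len(OFFSETS), 2))
    ctx_map = np.zeros((h, w), dtype=int)
    # Pass 1: collect per-context pixel-value statistics.
    for y in range(h):
        for x in range(w):
            ctx = 0
            for dy, dx in OFFSETS:
                ctx = (ctx << 1) | int(padded[y + 1 + dy, x + 1 + dx])
            ctx_map[y, x] = ctx
            counts[ctx, int(img[y, x])] += 1
    # Pass 2: flip minority pixels in heavily skewed ("low information") contexts.
    out = img.copy()
    for y in range(h):
        for x in range(w):
            c = counts[ctx_map[y, x]]
            total = c.sum()
            majority = int(np.argmax(c))
            if total > 0 and c[majority] / total >= dominance and out[y, x] != majority:
                out[y, x] = majority
    return out
```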
In the present work we study image quality evaluation from an empirical point of view and want to find out (1) what kind of distortion existing compression algorithms generate; (2) how annoying a human observer finds certain types of distortion; and (3) what the possibilities are for mathematically analyzing the distortion. A set of test images was compressed by several different compression algorithms, and the visual quality of the compressed images was evaluated by a group of human observers. The distortion caused by compression can be classified into blockiness, blurring, jaggedness, ghost figures, and quantization errors. Blockiness is the most annoying type of distortion. For the other types, the amount of distortion is more important than its type. Blurring and quantization errors can be detected by local analysis of the image. Blockiness, jaggedness, and ghost figures are harder to detect since they consist of structural changes in the image.
The distortion caused by image compression is typically determined by a pixelwise measure such as mean square error. However, these types of measures do not capture global artifacts such as blockiness or blurring. In the present paper, an attempt is made to model the distortion by a blockwise measure. A three-parameter distortion function is proposed that measures how well the average intensity, the contrast, and the spatial structure are preserved in the reconstructed image. The distortion of the spatial structure is described by the response of an edge detection operation. The blockwise measure operates with a scrolling block of 3 × 3 pixels, so it can detect distortion beyond the block boundaries.
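A sketch in the spirit of such a blockwise measure is given below: for each 3 × 3 window, the mean intensity, the local contrast, and a Sobel edge response are compared between the original and reconstructed images. The weighting and combination rule are assumptions rather than the paper's exact distortion function.

```python
# Hedged sketch: blockwise distortion combining mean, contrast, and edge terms.
import numpy as np
from scipy import ndimage

def blockwise_distortion(orig, recon, w_mean=1.0, w_contrast=1.0, w_edge=1.0):
    """orig, recon: 2D grayscale images of equal size; returns a scalar distortion."""
    orig = orig.astype(float)
    recon = recon.astype(float)
    k = np.ones((3, 3)) / 9.0
    mean_o = ndimage.convolve(orig, k)
    mean_r = ndimage.convolve(recon, k)
    # Local contrast as the standard deviation inside the 3x3 window.
    var_o = ndimage.convolve(orig ** 2, k) - mean_o ** 2
    var_r = ndimage.convolve(recon ** 2, k) - mean_r ** 2
    contrast_o = np.sqrt(np.maximum(var_o, 0))
    contrast_r = np.sqrt(np.maximum(var_r, 0))
    # Spatial structure via an edge-detection response.
    edge_o = ndimage.generic_gradient_magnitude(orig, ndimage.sobel)
    edge_r = ndimage.generic_gradient_magnitude(recon, ndimage.sobel)
    d = (w_mean * np.abs(mean_o - mean_r)
         + w_contrast * np.abs(contrast_o - contrast_r)
         + w_edge * np.abs(edge_o - edge_r))
    return d.mean()
```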