Variable-Length Signature for Near-Duplicate Image Matching

ABSTRACT:

We propose a variable-length signature for near-duplicate image matching in this paper. An image is represented by a signature whose length varies with the number of patches in the image. A new visual descriptor, viz., the probabilistic center-symmetric local binary pattern, is proposed to characterize the appearance of each image patch. Beyond each individual patch, the spatial relationships among the patches are also captured. To compute the similarity between two images, we utilize the earth mover's distance, which is well suited to handling variable-length signatures. The proposed image signature is evaluated in two different applications, i.e., near-duplicate document image retrieval and near-duplicate natural image detection. The promising experimental results demonstrate the validity and effectiveness of the proposed variable-length signature.

EXISTING SYSTEM:

  • Kim employed the ordinal measures of the discrete cosine transform coefficients to represent an image. Then the L1 norm was utilized for image similarity computation.
  • Liu and Yang built a color difference histogram for an image, which encoded the color and edge orientations of the image in a uniform framework. Subsequently, the similarity of two images was computed in terms of the enhanced Canberra distance.
  • Aksoy and Haralick proposed line-angle-ratio statistics and co-occurrence variances to represent an image, organized into a 28-dimensional feature vector. Different similarity measures were then compared in the image retrieval scenario.
  • Meng et al. first represented an image by a 279D feature vector. For similarity computation, the enhanced Dynamic Partial Function was proposed which adaptively activated a different number of features in a pairwise manner to accommodate the characteristics of each image pair.
  • Chum et al. represented an image based on its color histograms and then employed Locality Sensitive Hashing (LSH) for fast retrieval. For the sake of computational efficiency, some works first embedded the vectorial representations into binary codes.

DISADVANTAGES OF EXISTING SYSTEM:

  • In the bag-of-visual-words model, the spatial layout of the visual words is disregarded entirely, which incurs ambiguity during matching.

PROPOSED SYSTEM:

  • A visual descriptor named Probabilistic Center-Symmetric Local Binary Pattern (PCSLBP) is proposed to depict the patch appearance, and it remains robust in the presence of image distortions (a minimal descriptor sketch follows this list). Beyond each individual patch, we describe the relationships among the patches as well, viz., the distance between every pair of patches in the image. A weight is also assigned to each patch to indicate its contribution in identifying the image.
  • Given the characteristics of all the patches, the image is represented by a signature. The superiority of signatures over vectors in representing images is that the former vary in length across images, better reflecting each image's own characteristics.
  • To compute the similarity between two images, the Earth Mover’s Distance is employed in our work, thanks to its prominent ability in coping with variable-length signatures.
  • Furthermore, it is able to handle the issue of patch extraction instability naturally by allowing many-to-many patch correspondence.
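
As a rough illustration of the descriptor family PCSLBP builds on, the MATLAB sketch below computes a plain center-symmetric LBP histogram for one patch: each interior pixel compares its four center-symmetric neighbor pairs against a threshold, and the resulting 4-bit codes are pooled into a 16-bin histogram. The function name cslbp_hist and the threshold T are illustrative; the probabilistic extension that defines PCSLBP is not reproduced here.

    function h = cslbp_hist(patch, T)
    % Sketch of plain CS-LBP (not the authors' probabilistic PCSLBP).
    p = double(patch);
    n = @(dr, dc) p(2+dr:end-1+dr, 2+dc:end-1+dc);   % shifted neighbor views
    d = {n(-1,-1) - n(1,1), n(-1,0) - n(1,0), ...    % the 4 symmetric pairs
         n(-1,1) - n(1,-1), n(0,1) - n(0,-1)};
    code = zeros(size(d{1}));
    for k = 1:4
        code = code + (d{k} > T) * 2^(k-1);          % one bit per pair
    end
    h = histc(code(:), 0:15);                        % 16-bin code histogram
    h = h / sum(h);                                  % normalize to a distribution
    end

Each patch histogram then becomes one element of the image signature, with the pairwise patch distances and weights attached as described above.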

ADVANTAGES OF PROPOSED SYSTEM:

  • We further justify the proposed patch extraction approach by comparing it with a commonly used image segmentation method, namely, Watershed. The comparisons demonstrate a clear advantage for the proposed approach.
  • To describe patch visual appearance, good robustness to image orientation, illumination, and scale variations is highly desired. In our work, we propose a patch visual appearance descriptor, viz., the Probabilistic Center-Symmetric Local Binary Pattern (PCSLBP), which is an improvement of the Center-Symmetric Local Binary Pattern (CSLBP).

SYSTEM ARCHITECTURE:


SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

  • System : Pentium IV, 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA colour.
  • Mouse :
  • RAM : 512 MB.

SOFTWARE REQUIREMENTS:

  • Operating system : Windows XP/7.
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a

REFERENCE:

Li Liu, Yue Lu, Senior Member, IEEE, and Ching Y. Suen, Fellow, IEEE, “Variable-Length Signature for Near-Duplicate Image Matching”, IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 24, NO. 4, APRIL 2015.

Steganography Using Reversible Texture Synthesis

ABSTRACT:

We propose a novel approach to steganography using reversible texture synthesis. A texture synthesis process resamples a smaller texture image to synthesize a new texture image with a similar local appearance and an arbitrary size. We weave the texture synthesis process into steganography to conceal secret messages. In contrast to using an existing cover image to hide messages, our algorithm conceals the source texture image and embeds secret messages through the process of texture synthesis. This allows us to extract the secret messages and the source texture from a stego synthetic texture. Our approach offers three distinct advantages. First, our scheme offers an embedding capacity that is proportional to the size of the stego texture image. Second, a steganalytic algorithm is not likely to defeat our steganographic approach. Third, the reversible capability inherited from our scheme allows recovery of the source texture. Experimental results have verified that our proposed algorithm can provide a range of embedding capacities, produce visually plausible texture images, and recover the source texture.

EXISTING SYSTEM:

  • Most image steganographic algorithms adopt an existing image as a cover medium. The expense of embedding secret messages into this cover image is the image distortion encountered in the stego image.
  • The most recent work has focused on texture synthesis by example, in which a source texture image is re-sampled using either pixel-based or patch-based algorithms to produce a new synthesized texture image with similar local appearance and arbitrary size.
  • Otori and Kuriyama pioneered the work of combining data coding with pixel-based texture synthesis. Secret messages to be concealed are encoded into colored dotted patterns and they are directly painted on a blank image.

DISADVANTAGES OF EXISTING SYSTEM:

  • The existing system has two drawbacks:
  • First, since the size of the cover image is fixed, embedding more secret messages causes more image distortion. Consequently, a compromise must be reached between the embedding capacity and the image quality, which results in the limited capacity provided by any specific cover image. Recall that image steganalysis is an approach used to detect secret messages hidden in the stego image.
  • A stego image contains some distortion, and regardless of how minute it is, this will interfere with the natural features of the cover image. This leads to the second drawback because it is still possible that an image steganalytic algorithm can defeat the image steganography and thus reveal that a hidden message is being conveyed in a stego image.

PROPOSED SYSTEM:

  • In this paper, we propose a novel approach for steganography using reversible texture synthesis. A texture synthesis process re-samples a small texture image drawn by an artist or captured in a photograph in order to synthesize a new texture image with a similar local appearance and arbitrary size.
  • We weave the texture synthesis process into steganography concealing secret messages as well as the source texture. In particular, in contrast to using an existing cover image to hide messages, our algorithm conceals the source texture image and embeds secret messages through the process of texture synthesis. This allows us to extract the secret messages and the source texture from a stego synthetic texture.
  • The three fundamental differences between our proposed message-oriented texture synthesis and conventional patch-based texture synthesis are described as follows (see the sketch after this list). The first difference is the shape of the overlapped area. During the conventional synthesis process, an L-shaped overlapped area is normally used to determine the similarity of every candidate patch. In contrast, the shape of the overlapped area in our algorithm varies because we have pasted source patches onto the workbench. Consequently, our algorithm needs to provide more flexibility in order to cope with the variable shapes formed by the overlapped area.
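
The message-driven patch selection can be pictured with the sketch below: candidate patches are ranked by their overlap error, and the next few secret bits choose which of the top candidates is pasted. The function name pick_patch and the direct bits-to-rank mapping are assumptions for illustration; the paper's capacity determination and composition steps are more involved.

    function idx = pick_patch(overlapErrors, bits)
    % overlapErrors : overlap mean-squared error of every candidate patch
    % bits          : row vector of secret bits consumed at this step
    [~, order] = sort(overlapErrors, 'ascend');      % best matches first
    rank = bits * (2 .^ (numel(bits)-1:-1:0))' + 1;  % bits -> 1-based rank
    idx = order(rank);                               % candidate patch to paste
    end

Extraction works in reverse: with the recovered source texture, the same candidate ranking is recomputed, and the rank of the patch actually pasted reveals the embedded bits.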

ADVANTAGES OF PROPOSED SYSTEM:

  • Our approach offers three advantages.
  • First, since the texture synthesis can synthesize an arbitrary size of texture images, the embedding capacity which our scheme offers is proportional to the size of the stego texture image.
  • Secondly, a steganalytic algorithm is not likely to defeat this steganographic approach since the stego texture image is composed of a source texture rather than by modifying the existing image contents.
  • Third, the reversible capability inherited from our scheme provides functionality to recover the source texture. Since the recovered source texture is exactly the same as the original source texture, it can be employed to proceed onto the second round of secret messages for steganography if needed.

SYSTEM ARCHITECTURE:


SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

  • System : Pentium IV, 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA colour.
  • Mouse :
  • RAM : 512 MB.

SOFTWARE REQUIREMENTS:

  • Operating system : Windows XP/7.
  • Coding Language : MATLAB
  • Tool : MATLAB R2007b

REFERENCE:

Kuo-Chen Wu and Chung-Ming Wang, Member, IEEE, “Steganography Using Reversible Texture Synthesis”, IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 24, NO. 1, JANUARY 2015.

Robust Representation and Recognition of Facial Emotions Using Extreme Sparse Learning

ABSTRACT:

Recognition of natural emotions from human faces is an interesting topic with a wide range of potential applications, such as human-computer interaction, automated tutoring systems, image and video retrieval, smart environments, and driver warning systems. Traditionally, facial emotion recognition systems have been evaluated on laboratory-controlled data, which is not representative of the environment faced in real-world applications. To robustly recognize facial emotions in real-world natural situations, this paper proposes an approach called extreme sparse learning, which has the ability to jointly learn a dictionary (a set of bases) and a nonlinear classification model. The proposed approach combines the discriminative power of the extreme learning machine with the reconstruction property of sparse representation to enable accurate classification when presented with noisy signals and imperfect data recorded in natural settings. In addition, this paper presents a new local spatio-temporal descriptor that is distinctive and pose-invariant. The proposed framework achieves state-of-the-art recognition accuracy on both acted and spontaneous facial emotion databases.

EXISTING SYSTEM:

  • Techniques that exploit the dynamics of facial emotion include hidden Markov models, dynamic Bayesian networks, geometrical displacement, and dynamic texture descriptors.
  • Recently, several methods have been developed to train a classification oriented dictionary.
  • These methods can be divided into three broad categories.
  • The first category of methods directly forces the dictionary atoms to be discriminative and uses the reconstruction error for the final classification.
  • The second approach makes the sparse coefficients discriminative by incorporating the classification error term into the dictionary learning and indirectly propagates the discrimination power to the overall dictionary.
  • The third category includes methods that apply the discriminative criterion to the coefficients, but the classifier is not necessarily trained along with dictionary learning.

DISADVANTAGES OF EXISTING SYSTEM:

  • To the best of our knowledge, none of the existing methods can learn a non-linear classifier in the context of simultaneous sparse coding and classifier training. Learning such a non-linear classifier is not only an interesting research topic, but also very important in many real-world applications where the observations are probably not linearly separable.

PROPOSED SYSTEM:

  • The objective of the present work is to develop a facial emotion recognition system that is capable of handling variations in facial pose, illumination, and partial occlusion.
  • The proposed system robustly represents the facial emotions using a novel spatio-temporal descriptor based on Optical Flow (OF), which is distinctive and pose-invariant. Robustness to pose variations is achieved by extracting features that depend only on relative movements of different facial regions.
  • To recognize emotions in the presence of self-occlusion and illumination variations, we combine the idea of sparse representation with the Extreme Learning Machine (ELM) to learn a powerful classifier that can handle noisy and imperfect data (a minimal ELM training sketch follows this list).
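
For concreteness, a minimal ELM training sketch is given below, assuming a sigmoid hidden layer and ridge-regularized output weights; the variable names and the regularization weight are illustrative, and the joint sparse-coding objective that defines extreme sparse learning is omitted.

    function [W, b, beta] = elm_train(X, Y, L, lambda)
    % X: N x d features, Y: N x c one-hot labels,
    % L: number of hidden nodes, lambda: ridge regularization weight.
    [N, d] = size(X);
    W = randn(d, L);                                 % random input weights
    b = randn(1, L);                                 % random hidden biases
    H = 1 ./ (1 + exp(-(X * W + repmat(b, N, 1))));  % sigmoid hidden outputs
    beta = (H' * H + lambda * eye(L)) \ (H' * Y);    % closed-form output weights
    end

A test sample is then scored by mapping it through the same random hidden layer and multiplying by beta; the column with the largest response gives the predicted emotion class.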

ADVANTAGES OF PROPOSED SYSTEM:

  • To the best of our knowledge, this is the first attempt in the literature to simultaneously learn the sparse representation of the signal and train a non-linear classifier that is discriminative for the sparse codes.
  • A pose-invariant OF-based spatio-temporal descriptor is proposed, which robustly represents facial emotions even when there are head movements while expressing an emotion. The descriptor characterizes both the intensity and dynamics of facial emotions.
  • Our results clearly demonstrate the robustness of the proposed emotion recognition system, especially in challenging scenarios that involve illumination changes, occlusion, and pose variations.

SYSTEM ARCHITECTURE:


SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

  • System : Pentium IV, 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA colour.
  • Mouse :
  • RAM : 512 MB.

SOFTWARE REQUIREMENTS:

  • Operating system : Windows XP/7.
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a

REFERENCE:

Seyedehsamaneh Shojaeilangari, Wei-Yun Yau, Senior Member, IEEE, Karthik Nandakumar, Member, IEEE, Jun Li, and Eam Khwang Teoh, Member, IEEE, “Robust Representation and Recognition of Facial Emotions Using Extreme Sparse Learning”, IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 24, NO. 7, JULY 2015.

Reversible Image Data Hiding with Contrast Enhancement

ABSTRACT:

In this letter, a novel reversible data hiding (RDH) algorithm is proposed for digital images. Instead of trying to keep the PSNR value high, the proposed algorithm enhances the contrast of a host image to improve its visual quality. The highest two bins in the histogram are selected for data embedding so that histogram equalization can be performed by repeating the process. The side information is embedded along with the message bits into the host image so that the original image is completely recoverable. The proposed algorithm was implemented on two sets of images to demonstrate its efficiency. To the best of our knowledge, it is the first algorithm to achieve image contrast enhancement by RDH. Furthermore, the evaluation results show that the visual quality can be preserved after a considerable number of message bits have been embedded into the contrast-enhanced images, even better than with three specific MATLAB functions used for image contrast enhancement.

EXISTING SYSTEM:

  • Reversible Data Hiding (RDH) has been intensively studied in the signal processing community. Also referred to as invertible or lossless data hiding, RDH embeds a piece of information into a host signal to generate a marked signal, from which the original signal can be exactly recovered after extracting the embedded data.
  • The technique of RDH is useful in some sensitive applications where no permanent change is allowed on the host signal.
  • In the literature, most of the proposed algorithms are for digital images, embedding invisible data or a visible watermark. To evaluate the performance of an RDH algorithm, the hiding rate and the marked image quality are important metrics.

DISADVANTAGES OF EXISTING SYSTEM:

  • There exists a trade-off between them because increasing the hiding rate often causes more distortion in image content.
  • To the best of our knowledge, no existing RDH algorithm performs contrast enhancement so as to improve the visual quality of host images.
  • To measure the distortion, the peak signal-to-noise ratio (PSNR) value of the marked image is often calculated. Generally speaking, direct modification of image histogram provides less embedding capacity.

PROPOSED SYSTEM:

  • In this project, we aim to devise a new RDH algorithm that achieves contrast enhancement instead of just keeping the PSNR value high. In principle, image contrast enhancement can be achieved by histogram equalization.
  • To perform data embedding and contrast enhancement at the same time, the proposed algorithm modifies the histogram of pixel values. First, the two peaks (i.e., the highest two bins) in the histogram are identified. The bins between the peaks are unchanged, while the outer bins are shifted outward so that each of the two peaks can be split into two adjacent bins.
  • To increase the embedding capacity, the highest two bins in the modified histogram can be further chosen to be split, and so on, until a satisfactory contrast enhancement effect is achieved. To avoid overflows and underflows due to histogram modification, the bounding pixel values are pre-processed and a location map is generated to memorize their locations. For the recovery of the original image, the location map is embedded into the host image, together with the message bits and other side information. Blind data extraction and complete recovery of the original image are thus both enabled (a simplified one-round embedding sketch follows this list).
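
The sketch below performs one embedding round on a single peak of a grayscale image: bins to the left of the highest bin are shifted outward, and each peak pixel then carries one bit. Overflow pre-processing, the location map, the second peak, and side-information embedding are all omitted for brevity, and the payload is random demo data.

    I = double(imread('cameraman.tif'));    % built-in MATLAB demo image
    h = histc(I(:), 0:255);
    [~, s] = max(h);
    pL = s - 1;                             % pixel value of the highest bin
    J = I;
    J(I < pL) = I(I < pL) - 1;              % shift the outer (left) bins outward
    idx = find(I == pL);
    bits = randi([0 1], numel(idx), 1);     % demo payload: one bit per peak pixel
    J(idx) = pL - bits;                     % bit 0 keeps pL, bit 1 moves to pL-1

Decoding reads one bit from every pixel valued pL or pL-1 and then undoes the shift, which is what makes the scheme reversible.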

ADVANTAGES OF PROPOSED SYSTEM:

  • Less distortion.
  • Increased embedding capacity.
  • Improved visual quality.
  • The proposed algorithm was applied to two sets of images to demonstrate its efficiency. To the best of our knowledge, it is the first algorithm to achieve image contrast enhancement by RDH.
  • Furthermore, the evaluation results show that the visual quality can be preserved after a considerable number of message bits have been embedded into the contrast-enhanced images, even better than with three specific MATLAB functions used for image contrast enhancement.

SYSTEM ARCHITECTURE:


SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

  • System : Pentium IV, 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA colour.
  • Mouse :
  • RAM : 512 MB.

SOFTWARE REQUIREMENTS:

  • Operating system : Windows XP/7.
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a

REFERENCE:

Hao-Tian Wu, Member, IEEE, Jean-Luc Dugelay, Fellow, IEEE, and Yun-Qing Shi, Fellow, IEEE, “Reversible Image Data Hiding with Contrast Enhancement”, IEEE SIGNAL PROCESSING LETTERS, VOL. 22, NO. 1, JANUARY 2015.

Revealing the Trace of High-Quality JPEG Compression Through Quantization Noise Analysis

ABSTRACT:

Identifying whether an image has been JPEG compressed is an important issue in forensic practice. The state-of-the-art methods fail to identify high-quality compressed images, which are common on the Internet. In this paper, we provide a novel quantization noise-based solution to reveal the traces of JPEG compression. Based on an analysis of the noises in multiple-cycle JPEG compression, we define a quantity called forward quantization noise. We analytically derive that a decompressed JPEG image has a lower variance of forward quantization noise than its uncompressed counterpart. Based on this conclusion, we develop a simple yet very effective detection algorithm to identify decompressed JPEG images. Through extensive experiments on various sources of images, we show that our method outperforms the state-of-the-art methods by a large margin, especially for high-quality compressed images. We also demonstrate that the proposed method is robust to small image sizes and chroma subsampling. The proposed algorithm can be applied in practical applications such as Internet image classification and forgery detection.

EXISTING SYSTEM:

  • Traces of JPEG compression may also be found in the histogram of DCT coefficients. Luo et al. noted that JPEG compression reduces the number of DCT coefficients with an absolute value no larger than one, so there are fewer DCT coefficients in the range [−1, 1] after JPEG compression. A discriminative statistic based on measuring the number of DCT coefficients in the range [−2, 2] is constructed. When the statistic of a test image exceeds a threshold, the image is classified as uncompressed; otherwise, it is identified as having been previously JPEG compressed.
  • Although Luo et al.'s method is considered the current state of the art in terms of identification performance, it has a few shortcomings. First, the analysis only uses the portion of the DCT coefficients that are close to 0, so the information is not optimally utilized. Second, the method requires the quantization step to be no less than 2 to be effective. As a result, it fails on high-quality compressed images, such as those with a quantization table containing mostly quantization steps equal to one.

DISADVANTAGES OF EXISTING SYSTEM:

  • High-quality compressed images cannot be identified.
  • The existing method fails on high-quality compressed images, such as those with a quantization table containing mostly quantization steps equal to one.

PROPOSED SYSTEM:

  • In this paper, we focus on the problem of identifying whether an image currently in uncompressed form is truly uncompressed or has been previously JPEG compressed. Being able to identify such a historical record may help to answer some forensic questions related to the originality and authenticity of an image, such as where the image comes from, whether it is an original, or whether any tampering operation has been performed.
  • We propose a method to reveal the traces of JPEG compression based on analyzing the forward quantization noise, which is obtained by quantizing the block-DCT coefficients with a step of one. A decompressed JPEG image has a lower noise variance than its uncompressed counterpart, an observation that can be derived analytically (a minimal sketch of the statistic follows this list).
  • The main contribution of this work is to address the challenges posed by high-quality compression in JPEG compression identification. Specifically, our method is able to detect the images previously compressed with IJG QF=99 or 100, and Photoshop QF from 90 to 100.
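
A minimal sketch of the statistic is shown below, assuming a grayscale input and JPEG-style 8x8 block DCT; the file name and the decision threshold are illustrative placeholders, not the paper's calibrated values.

    I = double(imread('test_image.png'));             % hypothetical grayscale input
    D = blockproc(I - 128, [8 8], @(b) dct2(b.data)); % 8x8 block DCT, JPEG-style
    n = D - round(D);                                 % forward quantization noise (step 1)
    v = var(n(:));                                    % its variance is the statistic
    wasCompressed = v < 0.06;                         % illustrative threshold

A previously compressed image yields block-DCT coefficients that sit close to integer multiples of the quantization steps, so its forward quantization noise variance is measurably lower.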

ADVANTAGES OF PROPOSED SYSTEM:

  • Experiments show that high-quality compressed images are common on the Internet, and our method is effective at identifying them. Moreover, our method is robust to small image sizes and color sub-sampling in the chrominance channels.
  • The proposed method can be applied to Internet image classification and forgery detection with relatively accurate results.
  • We show that our method outperforms the previous methods by a large margin for high-quality JPEG compressed images which are common on the Internet and present a challenge for identifying their compression history.

SYSTEM ARCHITECTURE:


SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

  • System : Pentium IV, 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA colour.
  • Mouse :
  • RAM : 512 MB.

SOFTWARE REQUIREMENTS:

  • Operating system : Windows XP/7.
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a

REFERENCE:

Bin Li, Member, IEEE, Tian-Tsong Ng, Xiaolong Li, Shunquan Tan, Member, IEEE, and Jiwu Huang, Senior Member, IEEE, “Revealing the Trace of High-Quality JPEG Compression Through Quantization Noise Analysis”, IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 10, NO. 3, MARCH 2015.

Pareto-Depth for Multiple-Query Image Retrieval

ABSTRACT:

Most content-based image retrieval systems consider either one single query, or multiple queries that include the same object or represent the same semantic information. In this paper, we consider the content-based image retrieval problem for multiple query images corresponding to different image semantics. We propose a novel multiple-query information retrieval algorithm that combines the Pareto front method with efficient manifold ranking. We show that our proposed algorithm outperforms state-of-the-art multiple-query retrieval algorithms on real-world image databases. We attribute this performance improvement to concavity properties of the Pareto fronts, and prove a theoretical result that characterizes the asymptotic concavity of the fronts.

EXISTING SYSTEM:

  • Many other multiple query retrieval algorithms are designed specifically for the single-semantic-multiple-query problem, and again tend to find images related to only one, or a few, of the queries.
  • Xu et al. introduced an algorithm called Efficient Manifold Ranking (EMR) which uses an anchor graph to do efficient manifold ranking that can be applied to large-scale datasets.
  • Sharifzadeh and Shahabi introduced Spatial Skyline Queries (SSQ), which is similar to the multiple-query retrieval problem. However, since the EMR dissimilarity is not a metric (it does not satisfy the triangle inequality), the relation between the first Pareto front and the convex hull of the queries, which is exploited by Sharifzadeh and Shahabi, does not hold in our setting.

DISADVANTAGES OF EXISTING SYSTEM:

  • Existing algorithms are designed for the case in which the queries represent the same semantics. In the multiple-query retrieval setting this case is not very interesting, as it can easily be handled by other methods, including linear scalarization.
  • CBIR methods usually suffer from the “curse of dimensionality” and low computational efficiency when using high-dimensional features in large databases.
  • Existing hashing-based systems have a major drawback: the optimization required to obtain accurate hash functions is very time-consuming.

PROPOSED SYSTEM:

  • In this paper, we propose a novel algorithm for multiple query image retrieval that combines the Pareto front method (PFM) with efficient manifold ranking (EMR).
  • The first step in our PFM algorithm is to issue each query individually and rank all samples in the database based on their dissimilarities to the query. Several methods for computing representations of images, like SIFT and HoG, have been proposed in the computer vision literature, and any of these can be used to compute the image dissimilarities.
  • Since it is very computationally intensive to compute the dissimilarities for every sample-query pair in large databases, we use a fast ranking algorithm called Efficient Manifold Ranking (EMR) to compute the ranking without the need to consider all sample-query pairs.
  • The next step in our PFM algorithm is to use the ranking produced by EMR to create Pareto points, each of which corresponds to the dissimilarities between a sample and every query.
  • Sets of Pareto-optimal points, called Pareto fronts, are then computed. The first Pareto front (depth one) is the set of non-dominated points, and it is often called the Skyline in the database community. The second Pareto front (depth two) is obtained by removing the first Pareto front and finding the non-dominated points among the remaining samples. This procedure continues until the computed Pareto fronts contain enough samples to return to the user, or all samples are exhausted. The process of arranging the points into Pareto fronts is called non-dominated sorting (a minimal sketch follows this list).
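
The sketch below implements the basic non-dominated sorting just described: it repeatedly peels the current front of non-dominated points from a matrix of dissimilarities (one row per database item, one column per query), where smaller values are better. The function name is an assumption.

    function fronts = pareto_fronts(P)
    % P: n x q matrix; row i holds item i's dissimilarity to each of q queries.
    remaining = 1:size(P, 1);
    fronts = {};
    while ~isempty(remaining)
        Q = P(remaining, :);
        nd = true(size(Q, 1), 1);
        for i = 1:size(Q, 1)
            % Item i is dominated if some item is <= in every coordinate
            % and strictly < in at least one.
            dom = all(bsxfun(@le, Q, Q(i, :)), 2) & ...
                  any(bsxfun(@lt, Q, Q(i, :)), 2);
            nd(i) = ~any(dom);
        end
        fronts{end + 1} = remaining(nd);  %#ok<AGROW> % indices on this front
        remaining = remaining(~nd);
    end
    end

Items are then returned to the user front by front (depth one first), which is how Pareto depth turns the multi-objective dissimilarities into a single ranking.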

ADVANTAGES OF PROPOSED SYSTEM:

  • In this paper we consider the more challenging problem of finding images that are relevant to multiple queries that represent different image semantics.
  • EMR can efficiently discover the underlying geometry of the given database and significantly reduces the computational time of traditional manifold ranking. Since EMR has been successfully applied to single query image retrieval, it is the natural ranking algorithm to consider for the multiple-query problem.
  • Our method also differs from SSQ and other Skyline research because we use multiple fronts to rank items instead of using only Skyline queries. We also address the problem of combining EMR with the Pareto front method for multiple queries associated with different concepts, resulting in non-convex Pareto fronts. To the best of our knowledge, this problem has not been widely researched.

SYSTEM ARCHITECTURE:


SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

  • System : Pentium IV, 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA colour.
  • Mouse :
  • RAM : 512 MB.

SOFTWARE REQUIREMENTS:

  • Operating system : Windows XP/7.
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a

REFERENCE:

Ko-Jen Hsiao, Jeff Calder, Member, IEEE, and Alfred O. Hero, III, Fellow, IEEE, “Pareto-Depth for Multiple-Query Image Retrieval”, IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 24, NO. 2, FEBRUARY 2015.

Multifocus Image Fusion Based on NSCT and Focused Area Detection

ABSTRACT:

To overcome the difficulties of sub-band coefficient selection in multiscale transform domain-based image fusion and to solve the problem of block effects suffered by spatial domain-based image fusion, this paper presents a novel hybrid multifocus image fusion method. First, the source multifocus images are decomposed using the nonsubsampled contourlet transform (NSCT). The low-frequency sub-band coefficients are fused by the sum-modified-Laplacian-based local visual contrast, whereas the high-frequency sub-band coefficients are fused by the local Log-Gabor energy. The initial fused image is subsequently reconstructed by the inverse NSCT with the fused coefficients. Second, after analyzing the similarity between the previous fused image and the source images, the initial focus area detection map is obtained, from which the decision map is derived by employing a mathematical morphology post-processing technique. Finally, based on the decision map, the final fused image is obtained by selecting the pixels in the focus areas and retaining the pixels on the focus region boundary as their corresponding pixels in the initial fused image. Experimental results demonstrate that the proposed method is better than various existing transform-based fusion methods, including gradient pyramid transform, discrete wavelet transform, and NSCT, as well as a spatial-based method, in terms of both subjective and objective evaluations.

EXISTING SYSTEM:

  • The importance of image fusion in current image processing systems is increasing, primarily because of the increased number and variety of image acquisition techniques. The purpose of image fusion is to combine different images from several sensors or the same sensor at different times to create a new image that will be more accurate and comprehensive and, thus, more suitable for a human operator or other image processing tasks.
  • Currently, image fusion technology has been widely used in digital imaging, remote sensing, biomedical imaging, computer vision, and so on. The multiscale transform (MST) based image fusion method can significantly enhance the visual effect, but in the focus areas of the source image, the clarity of the fused image suffers different degrees of loss. That is because, in the process of multiscale decomposition and reconstruction, improper selection of fusion rules often causes the loss of useful information from the source image.

DISADVANTAGES OF EXISTING SYSTEM:

  • Loss of useful information from the focus areas of the source images.

PROPOSED SYSTEM:

  • This paper proposes a novel image fusion framework for multi-focus images, which relies on the NSCT domain and focused area detection. The process of fusion is divided into two stages: initial fusion and final fusion.
  • In the process of initial fusion, the SML-based local visual contrast rule and the local Log-Gabor energy rule are selected as the fusion schemes for the low- and high-frequency coefficients of the NSCT domain, respectively. For fusing the low-frequency coefficients, the model of SML-based local visual contrast is used: the contrast representations are selected from the low-frequency coefficients and combined into the fused ones (an SML sketch follows this list). The Log-Gabor energy in the NSCT domain is proposed and used to combine the high-frequency coefficients. Its main benefit is that it selects and combines the most prominent edge and texture information contained in the high-frequency coefficients.
  • Based on the initial fused image, morphological opening and closing are employed as post-processing to generate a fusion decision map. According to this decision map, pixels from the source images and the initial fused image are selected to obtain the final fused image.
  • Further, the proposed method provides better performance than current fusion methods whether the source images are clean or noisy.
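
The low-frequency rule can be illustrated with the sum-modified-Laplacian (SML) sketch below; the one-pixel step realized with circshift and the square summation window of half-size r are simplifications of the measure used in the paper.

    function f = sml(C, r)
    % C: low-frequency sub-band; r: half-size of the summation window.
    ml = abs(2*C - circshift(C, [0 1]) - circshift(C, [0 -1])) + ...
         abs(2*C - circshift(C, [1 0]) - circshift(C, [-1 0]));  % modified Laplacian
    f = conv2(ml, ones(2*r + 1), 'same');                        % local sum
    end

At each location, the fused low-frequency coefficient is then taken from whichever source sub-band shows the larger SML-based contrast.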

ADVANTAGES OF PROPOSED SYSTEM:

  • This method, which synthesizes the advantages of both transform-based and spatial-based methods, not only overcomes the defects of MST-based methods but also eliminates the “block effect”.

SYSTEM ARCHITECTURE:


SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

  • System : Pentium IV, 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA colour.
  • Mouse :
  • RAM : 512 MB.

SOFTWARE REQUIREMENTS:

  • Operating system : Windows XP/7.
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a

REFERENCE:

Yong Yang, Member, IEEE, Song Tong, Shuying Huang, and Pan Lin, “Multifocus Image Fusion Based on NSCT and Focused Area Detection”, IEEE SENSORS JOURNAL, VOL. 15, NO. 5, MAY 2015.

Image Super-Resolution Based on Structure-Modulated Sparse Representation

ABSTRACT:

Sparse representation has recently attracted enormous interest in the field of image restoration. Conventional sparsity-based methods enforce sparse coding on small image patches with certain constraints. However, they neglect the characteristics of image structures both within the same scale and across different scales for the image sparse representation. This drawback limits the modeling capability of sparsity-based super-resolution methods, especially for the recovery of observed low-resolution images. In this paper, we propose a joint super-resolution framework of structure-modulated sparse representations to improve the performance of sparsity-based image super-resolution. The proposed algorithm formulates the constrained optimization problem for high-resolution image recovery. The multistep magnification scheme with ridge regression is first used to exploit the multi-scale redundancy for the initial estimation of the high-resolution image. Then, gradient histogram preservation is incorporated as a regularization term in the sparse modeling of the image super-resolution problem. Finally, the numerical solution is provided to solve the super-resolution problem of model parameter estimation and sparse representation. Extensive experiments on image super-resolution are carried out to validate the generality, effectiveness, and robustness of the proposed algorithm. Experimental results demonstrate that our proposed algorithm, which can recover more fine structures and details from an input low-resolution image, outperforms state-of-the-art methods both subjectively and objectively in most cases.

EXISTING SYSTEM:

  • In real-world scenarios, low-resolution (LR) images are generally captured in many imaging applications, such as surveillance video, consumer photographs, remote sensing, magnetic resonance (MR) imaging, and video standard conversion.
  • The resolution of images is limited by the image acquisition devices, the optics, the hardware storage and other constraints in digital imaging systems. However, high-resolution (HR) images or videos are usually desired for subsequent image processing and analysis in most real applications. As an effective way to solve this problem, super-resolution (SR) techniques aim to reconstruct HR images from the observed LR images.
  • The super-resolution reconstruction increases high-frequency components and removes the undesirable effects, e.g., the resolution degradation, blur and noise. Recently, numerous SR methods have appeared to estimate the relationship between the LR and HR image patches with promising results. Some typical methods usually need a large and representative database of the LR and HR image pairs.

DISADVANTAGES OF EXISTING SYSTEM:

  • Existing systems introduce edge halos, blurring, and aliasing artifacts.

PROPOSED SYSTEM:

  • We propose a novel joint framework of the structure-modulated sparse representation (SMSR) for single-image super-resolution. The multi-scale similarity redundancy is investigated and exploited for the initial estimation of the target HR image. The image gradient histogram of an LR input is incorporated as a gradient regularization term of the image sparse representation model. The proposed SMSR algorithm employs the gradient prior and nonlocally centralized sparsity to design the constrained optimization problem for dictionary training and HR image reconstruction. The main contributions of our work can be summarized as follows:
  • The multi-step magnification scheme with ridge regression is proposed to initialize the target HR image for the solution of the image SR problem (a ridge-regression sketch follows this list);
  • The novel sparsity-based super-resolution model is proposed with the combination of multiple image priors on structural self-similarity, the gradient histogram, and nonlocal sparsity;
  • The gradient histogram preservation (GHP) is theoretically derived for image SR reconstruction and also incorporated as the regularization term for the sparse modeling of HR image recovery.
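
The first contribution can be pictured with the ridge-regression sketch below, which learns a linear map from LR patch features to HR patches; the synthetic matrices and the regularization weight are placeholders for real training pairs extracted across magnification steps.

    % Synthetic stand-ins: 500 training pairs, 25-D LR features, 100-D HR patches.
    X = randn(500, 25);
    Y = randn(500, 100);
    lambda = 0.01;                                       % illustrative ridge weight
    W = (X' * X + lambda * eye(size(X, 2))) \ (X' * Y);  % closed-form ridge solution
    xTest = randn(1, 25);                                % one LR patch feature vector
    hrEstimate = xTest * W;                              % its estimated HR patch

Applying such a map repeatedly over small magnification factors yields the initial HR estimate that the sparse model then refines.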

ADVANTAGES OF PROPOSED SYSTEM:

  • Our proposed algorithm can recover more fine structures and details from an input low-resolution image, and it outperforms state-of-the-art methods both subjectively and objectively in most cases.

SYSTEM ARCHITECTURE:


SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

  • System : Pentium IV, 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA colour.
  • Mouse :
  • RAM : 512 MB.

SOFTWARE REQUIREMENTS:

  • Operating system : Windows XP/7.
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a

REFERENCE:

Yongqin Zhang, Member, IEEE, Jiaying Liu, Member, IEEE, Wenhan Yang, and Zongming Guo, Member, IEEE, “Image Super-Resolution Based on Structure-Modulated Sparse Representation”, IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 24, NO. 9, SEPTEMBER 2015.

Image Denoising by Exploring External and Internal Correlations

ABSTRACT:

Single-image denoising suffers from limited data collection within a noisy image. In this paper, we propose a novel image denoising scheme that explores both internal and external correlations with the help of web images. For each noisy patch, we build internal and external data cubes by finding similar patches from the noisy image and web images, respectively. We then propose reducing noise by a two-stage strategy using different filtering approaches. In the first stage, since the noisy patch may lead to inaccurate patch selection, we propose a graph-based optimization method to improve patch matching accuracy in external denoising; the internal denoising is frequency truncation on the internal cubes. By combining the internal and external denoising patches, we obtain a preliminary denoising result. In the second stage, we propose reducing noise by filtering the external and internal cubes, respectively, in the transform domain. In this stage, the preliminary denoising result not only enhances the patch matching accuracy but also provides reliable estimates of the filtering parameters. The final denoised image is obtained by fusing the external and internal filtering results. Experimental results show that our method consistently outperforms state-of-the-art denoising schemes in both subjective and objective quality measurements, e.g., it achieves a >2 dB gain compared with BM3D at a wide range of noise levels.

EXISTING SYSTEM:

  • Over the past few decades, denoising has relied on pixel-level filtering methods, such as Gaussian filtering, bilateral filtering, and total variation regularization, and on patch-based filtering methods, such as non-local means, block-matching and 3D filtering (BM3D), and low-rank regularization.
  • Besides single-image denoising methods, other promising approaches are learning-based, such as fields of experts, maximizing expected patch log likelihood (EPLL), and neural network training.
  • These methods restore the noisy image by integrating natural image priors into the under-constrained restoration problem. Denoising performance was further improved by using landmark and multi-view images as external datasets of correlated images.
  • The use of correlated images has sprung up in many computer vision tasks, including image completion, image compression, sketch-to-photo synthesis, image super-resolution, deblurring, and denoising.

DISADVANTAGES OF EXISTING SYSTEM:

  • Single-image denoising performance drops seriously as the noise level increases.
  • When the noise level is high, patch matching accuracy suffers a significant loss.
  • It is noteworthy that methods such as BM3D utilize the same database for all kinds of noisy images, i.e., no prior on the noisy image scene is used, which can result in annoying artifacts.
  • Systems that obtain correlated images as external datasets, such as images captured by multi-view cameras, explore only the external correlation without exploiting internal correlations.

PROPOSED SYSTEM:

  • In this paper, we extend existing work and propose a system for image denoising that explores both internal and external correlations, together with a graph-based optimization method to improve patch matching accuracy and more effective filtering methods.
  • Our contributions are two-fold. In the first stage, we design different external and internal filtering strategies to remove the noise. In external denoising, a graph-based optimization method is proposed to improve the patch matching accuracy between a noisy patch and clean patches in the external correlated images.
  • In internal denoising, 3-D frequency-domain filtering is performed (a simplified cube-filtering sketch follows this list). The two denoising results are then combined in the frequency domain to produce a preliminary denoised image.
  • In the second stage, we take full advantage of the external and internal correlations: the denoising result of the first stage is used to improve image registration, patch matching, and the estimation of filtering parameters.
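
As a simplified stand-in for the internal cube filtering, the sketch below applies a 2-D DCT within each patch and a 1-D DCT across the stack of similar patches, hard-thresholds the 3-D spectrum, and inverts the transforms. The threshold tau and the hard-thresholding rule are assumptions; the paper's frequency truncation differs in detail.

    function out = cube_denoise(cube, tau)
    % cube: p x p x K stack of similar noisy patches; tau: hard threshold.
    [p, ~, K] = size(cube);
    T = zeros(size(cube));
    for k = 1:K
        T(:, :, k) = dct2(cube(:, :, k));          % 2-D DCT within each patch
    end
    T = permute(T, [3 1 2]);                       % bring the patch axis first
    T = reshape(dct(reshape(T, K, [])), K, p, p);  % 1-D DCT across the patches
    T(abs(T) < tau) = 0;                           % suppress small, noisy coefficients
    T = reshape(idct(reshape(T, K, [])), K, p, p); % invert the across-patch DCT
    T = permute(T, [2 3 1]);
    out = zeros(size(cube));
    for k = 1:K
        out(:, :, k) = idct2(T(:, :, k));          % back to the pixel domain
    end
    end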

ADVANTAGES OF PROPOSED SYSTEM:

  • Our system can exploit correlated images captured under different settings, such as focal length, viewpoint, and resolution.
  • Our scheme could well handle noisy patches that have no matched patches in the external dataset.

SYSTEM ARCHITECTURE:


SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

  • System : Pentium IV, 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA colour.
  • Mouse :
  • RAM : 512 MB.

SOFTWARE REQUIREMENTS:

  • Operating system : Windows XP/7.
  • Coding Language : MATLAB
  • Tool : MATLAB R2007b

REFERENCE:

Huanjing Yue, Xiaoyan Sun, Senior Member, IEEE, Jingyu Yang, Member, IEEE, and Feng Wu, Fellow, IEEE, “Image Denoising by Exploring External and Internal Correlations”, IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 24, NO. 6, JUNE 2015.

Fractal Analysis for Reduced Reference Image Quality Assessment

ABSTRACT:

In this paper, multifractal analysis is adapted to reduced-reference image quality assessment (RR-IQA). A novel RR-IQA approach is proposed, which measures the difference in spatial arrangement between the reference image and the distorted image in terms of the spatial regularity measured by fractal dimension. An image is first expressed in the Log-Gabor domain. Then, fractal dimensions are computed on each Log-Gabor sub-band and concatenated as a feature vector. Finally, the extracted features are pooled into the quality score of the distorted image using the L1 distance. Compared with existing approaches, the proposed method measures image quality from the perspective of the spatial distribution of image patterns. The proposed method was evaluated on seven public benchmark data sets. Experimental results have demonstrated the excellent performance of the proposed method in comparison with state-of-the-art approaches.

EXISTING SYSTEM:

  • To quantify how much an image is affected by degradation, a metric is required to evaluate how good an image is, in a visual sense, to the human visual system (HVS). This leads to the research on image quality assessment (IQA).
  • As the ultimate solution, subjective IQA has its advantages in reliability and consistency with humans, because quality is directly quantified by observers. Nevertheless, the practicality of subjective IQA is very limited because it is expensive and time-consuming.

DISADVANTAGES OF EXISTING SYSTEM:

  • Histogram-based methods cannot correctly reflect the degree of distortion.

PROPOSED SYSTEM:

  • The proposed method is built on the Log-Gabor representation and fractal analysis. The former aims to create a complete basis for visual perception, while the latter can encode spatial information in the form of the geometrical distribution of visual data.
  • Specifically, the proposed RR-IQA feature, called the spectrum of spatial regularity (SSR), characterizes the spatial distribution of image structures based on fractal analysis. The spatial-frequency components of the image are first extracted by Log-Gabor filtering. Then the fractal dimension is used to measure the spatial regularity of the arrangements in each Log-Gabor sub-band (a box-counting sketch follows this list). Finally, all the computed fractal dimensions are collected as a feature vector. By using fractal analysis, which has a strong correlation with the HVS, the image structures are well encoded and the difference in their spatial arrangements between images can be well characterized.
  • Unlike previous work based on fractal analysis, in our method fractal analysis operates on the visual perceptive space instead of the image space, due to the fact that the HVS is a limited-bandwidth system that is sensitive to specific spatial frequencies. Our approach was evaluated on seven public IQA benchmark databases using five evaluation criteria. The competitive results achieved demonstrate that our method performs on par with state-of-the-art approaches.
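
As mentioned in the list above, the regularity measure can be illustrated with a basic box-counting estimate of fractal dimension on a binarized sub-band. Real implementations typically use differential box counting on gray-scale responses; the function name and the power-of-two size assumption are simplifications.

    function d = boxcount_fd(B)
    % B: non-empty square binary map whose side length is a power of two.
    s = size(B, 1);
    boxSizes = []; boxCounts = [];
    while s >= 1
        n = 0;
        for i = 1:s:size(B, 1)
            for j = 1:s:size(B, 2)
                n = n + any(any(B(i:i+s-1, j:j+s-1)));  % count non-empty boxes
            end
        end
        boxSizes(end + 1) = s;  boxCounts(end + 1) = n;  %#ok<AGROW>
        s = s / 2;
    end
    c = polyfit(log(1 ./ boxSizes), log(boxCounts), 1); % slope of the log-log fit
    d = c(1);                                           % estimated fractal dimension
    end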

ADVANTAGES OF PROPOSED SYSTEM:

  • A useful RR-IQA metric is expected to achieve higher prediction accuracy while using less information about the reference image.
  • Competitive performance among state-of-the-art approaches, consistent performance across different types of distortion, a high ratio of accuracy over data rate, and moderate, acceptable computational cost.

SYSTEM ARCHITECTURE:


SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

  • System : Pentium IV, 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15-inch VGA colour.
  • Mouse :
  • RAM : 512 MB.

SOFTWARE REQUIREMENTS:

  • Operating system : Windows XP/7.
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a

REFERENCE:

Yong Xu, Delei Liu, Yuhui Quan, and Patrick Le Callet, “Fractal Analysis for Reduced Reference Image Quality Assessment”, IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 24, NO. 7, JULY 2015.