Fast and Effective Image Copy-Move Forgery Detection via Hierarchical Feature Point Matching

ABSTRACT:

Copy-move forgery is one of the most commonly used manipulations for tampering with digital images. Keypoint-based detection methods have been reported to be very effective in revealing copy-move evidence, due to their robustness against various attacks, such as large-scale geometric transformations. However, these methods fail in cases where copy-move forgeries involve only small or smooth regions, where the number of keypoints is very limited. To tackle this challenge, we propose a fast and effective copy-move forgery detection algorithm through hierarchical feature point matching. We first show that a sufficient number of keypoints can be generated, even in small or smooth regions, by lowering the contrast threshold and rescaling the input image. We then develop a novel hierarchical matching strategy to solve the keypoint matching problem over a massive number of keypoints. To reduce the false alarm rate and accurately localize the tampered regions, we further propose a novel iterative localization technique that exploits the robustness properties (including the dominant orientation and the scale information) and the color information of each keypoint. Extensive experimental results demonstrate the superior performance of the proposed scheme in terms of both efficiency and accuracy.
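
As a rough illustration of the hierarchical idea (a sketch, not the paper's exact algorithm), the code below groups keypoint descriptors by scale and runs a 2-NN ratio test only within each group, so the quadratic matching cost is paid per group rather than over the full keypoint set. All names and parameters here are illustrative:

```python
import numpy as np

def hierarchical_match(descriptors, scales, scale_bins=3, ratio=0.8):
    """Group keypoints by scale, then run a 2-NN ratio test only within
    each group, so the quadratic matching cost is paid per group rather
    than over the entire keypoint set."""
    order = np.argsort(scales)
    matches = []
    for group in np.array_split(order, scale_bins):
        if len(group) < 3:   # need at least two neighbours for the ratio test
            continue
        d = descriptors[group]
        dist = np.linalg.norm(d[:, None, :] - d[None, :, :], axis=2)
        np.fill_diagonal(dist, np.inf)
        for i in range(len(group)):
            nn = np.argsort(dist[i])[:2]
            if dist[i, nn[0]] < ratio * dist[i, nn[1]]:
                matches.append((int(group[i]), int(group[nn[0]])))
    return matches
```

A matched pair of keypoints with near-identical descriptors is the raw evidence of a copy-move; the paper's localization stage then filters such pairs further.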

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Yuanman Li, Student Member, IEEE and Jiantao Zhou, Member, IEEE, “Fast and Effective Image Copy-Move Forgery Detection via Hierarchical Feature Point Matching”, IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2019.

Convolution Structure Sparse Coding for Fusion of Panchromatic and Multispectral Images

ABSTRACT:

Recently, sparse coding-based image fusion methods have been developed extensively. Although most of them can produce competitive fusion results, three issues need to be addressed: 1) these methods divide the image into overlapped patches and process them independently, ignoring the consistency of pixels in overlapped patches; 2) the partition strategy results in the loss of spatial structures for the entire image; and 3) the correlation among the bands of the multispectral (MS) image is ignored. In this paper, we propose a novel image fusion method based on convolution structure sparse coding (CSSC) to deal with these issues. First, the proposed method combines convolution sparse coding with the degradation relationship of MS and panchromatic (PAN) images to establish a restoration model. Then, CSSC is elaborated to depict the correlation in the MS bands by introducing structural sparsity. Finally, feature maps over the constructed high-spatial-resolution (HR) and low-spatial-resolution (LR) filters are computed by alternating optimization to reconstruct the fused images. In addition, a joint HR/LR filter learning framework is described in detail to ensure the consistency and compatibility of the HR/LR filters. Owing to the direct convolution on the entire image, the proposed CSSC fusion method avoids partitioning the image, and thus efficiently exploits the global correlation and preserves the spatial structures in the image. Experimental results on QuickBird and GeoEye-1 satellite images show that the proposed method produces better results, by both visual and numerical evaluation, than several well-known fusion methods.
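
To make the patch-free idea concrete, here is a minimal 1-D convolutional sparse coding sketch solved by ISTA, under the assumption of a single known filter (the paper works on 2-D images with learned HR/LR filter banks and structural sparsity; none of that is reproduced here):

```python
import numpy as np

def csc_ista(signal, filters, lam=0.01, step=0.2, iters=300):
    """Toy 1-D convolutional sparse coding via ISTA: find sparse feature
    maps z_k such that sum_k h_k * z_k approximates the whole signal at
    once, with no division into overlapping patches."""
    zs = [np.zeros(len(signal)) for _ in filters]
    for _ in range(iters):
        recon = sum(np.convolve(z, h, mode='same') for z, h in zip(zs, filters))
        resid = recon - signal
        for k, h in enumerate(filters):
            # correlation with the filter is the adjoint of the convolution
            grad = np.convolve(resid, h[::-1], mode='same')
            z = zs[k] - step * grad
            # soft-thresholding enforces sparsity of the feature map
            zs[k] = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return zs
```

Because the convolution acts on the entire signal, neighbouring coefficients share the reconstruction consistently, which is exactly the property the patch-based methods lose.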

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Kai Zhang, Min Wang, Shuyuan Yang, Senior Member, IEEE, and Licheng Jiao, Fellow, IEEE, “Convolution Structure Sparse Coding for Fusion of Panchromatic and Multispectral Images”, IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2019.

Characterizing and evaluating adversarial examples for Offline Handwritten Signature Verification

ABSTRACT:

The phenomenon of adversarial examples is attracting increasing interest from the machine learning community, due to its significant impact on the security of machine learning systems. Adversarial examples are similar (in a perceptual notion of similarity) to samples from the data distribution, yet they “fool” a machine learning classifier. For computer vision applications, these are images with carefully crafted but almost imperceptible changes that are misclassified. In this work, we characterize this phenomenon under an existing taxonomy of threats to biometric systems, in particular identifying new attacks against offline handwritten signature verification systems. We conducted an extensive set of experiments on four widely used datasets: MCYT-75, CEDAR, GPDS-160 and the Brazilian PUC-PR, considering both a CNN-based system and a system using a handcrafted feature extractor (CLBP). We found that attacks that aim to get a genuine signature rejected are easy to generate, even in a limited-knowledge scenario where the attacker has access neither to the trained classifier nor to the signatures used for training. Attacks that get a forgery accepted are harder to produce, and often require a higher level of noise, in most cases no longer “imperceptible”, in contrast to previous findings in object recognition. We also evaluated the impact of two countermeasures on the success rate of the attacks and on the amount of noise required to generate successful attacks.

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Luiz G. Hafemann, Robert Sabourin, Member, IEEE, and Luiz S. Oliveira, “Characterizing and evaluating adversarial examples for Offline Handwritten Signature Verification”, IEEE 2019.

Perceptual Video Hashing for Content Identification and Authentication

ABSTRACT:

Perceptual hashing has been broadly used in the literature to identify similar contents for video copy detection. It has also been adopted to detect malicious manipulations for video authentication. However, targeting both applications with a single system using the same hash would be highly desirable, as this saves storage space and reduces computational complexity. This paper proposes a perceptual video hashing system for content identification and authentication. The objective is to design a hash extraction technique that can withstand signal processing operations on the one hand and detect malicious attacks on the other. The proposed system relies on a new signal calibration technique for extracting the hash using the discrete cosine transform (DCT) and the discrete sine transform (DST). This consists of determining the number of samples, called the normalizing shift, by which a digital signal must be shifted so that the shifted version matches a certain pattern according to the DCT/DST coefficients. The rationale for the calibration idea is that the normalizing shift resists signal processing operations while remaining sensitive to local tampering (i.e., replacing a small portion of the signal with a different one). While the same hash serves both applications, two different similarity measures are proposed for video identification and authentication, respectively. Through intensive experiments with various types of video distortions and manipulations, the proposed system is shown to outperform related state-of-the-art video hashing techniques in both identification and authentication, with the advantageous ability to locate tampered regions.
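
A toy version of the calibration idea can be written down directly; here the "pattern" is simply the maximum of a chosen DCT-II coefficient over all circular shifts (the paper's actual matching pattern and DST counterpart are not reproduced):

```python
import numpy as np

def dct2_coeff(x, k):
    """k-th DCT-II coefficient of a 1-D signal (unnormalized)."""
    n = len(x)
    return float(np.sum(x * np.cos(np.pi * k * (2 * np.arange(n) + 1) / (2 * n))))

def normalizing_shift(x, k=1):
    """Toy version of the calibration idea: the circular shift that
    maximizes the k-th DCT coefficient.  This shift moves predictably
    when the whole signal is shifted, but changes when a portion of the
    signal is replaced."""
    return int(np.argmax([dct2_coeff(np.roll(x, s), k) for s in range(len(x))]))
```

The predictable behaviour under global shifts is what makes the quantity usable as a robust hash component, while local tampering perturbs the argmax.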

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Fouad Khelifi, Member, IEEE, and Ahmed Bouridane, Senior Member, IEEE, “Perceptual Video Hashing for Content Identification and Authentication”, IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2019.

A PatchMatch-based Dense-field Algorithm for Video Copy-Move Detection and Localization

ABSTRACT:

We propose a new algorithm for the reliable detection and localization of video copy-move forgeries. Discovering well-crafted video copy-moves can be very difficult, especially when a uniform background is copied to occlude foreground objects. To reliably detect both additive and occlusive copy-moves, we use a dense-field approach with invariant features that guarantee robustness to several post-processing operations. To limit complexity, a suitable video-oriented version of PatchMatch is used, with a multi-resolution search strategy and a focus on volumes of interest. Performance assessment relies on a new dataset, designed ad hoc, with realistic copy-moves and a wide variety of challenging situations. Experimental results show that the proposed method detects and localizes video copy-moves with good accuracy, even in adverse conditions.
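
The dense-field principle can be shown on a single tiny grayscale frame: compute, for every patch, the offset to its best-matching other patch, and look for an offset shared by many patches. The sketch below uses a brute-force search; PatchMatch's contribution is replacing exactly this O(n²) loop with randomized propagation (not implemented here):

```python
import numpy as np

def dense_offset_field(img, p=3, min_shift=2):
    """Exhaustive dense-field matcher on a tiny grayscale frame: for each
    p x p patch, find its best-matching patch at least min_shift away."""
    h, w = img.shape
    coords = [(y, x) for y in range(h - p + 1) for x in range(w - p + 1)]
    offsets = {}
    for (y, x) in coords:
        ref = img[y:y + p, x:x + p]
        best, best_off = np.inf, None
        for (v, u) in coords:
            if abs(v - y) + abs(u - x) < min_shift:   # skip (near-)self matches
                continue
            d = np.sum((img[v:v + p, u:u + p] - ref) ** 2)
            if d < best:
                best, best_off = d, (v - y, u - x)
        offsets[(y, x)] = best_off
    return offsets

def dominant_offset(offsets):
    """A copy-move shows up as one offset shared by many patches."""
    vals, counts = np.unique(list(offsets.values()), axis=0, return_counts=True)
    return tuple(vals[np.argmax(counts)])
```

In a video the same idea runs over space-time volumes, which is why the paper needs the multi-resolution, volume-of-interest machinery to keep the cost manageable.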

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Luca D’Amiano, Davide Cozzolino, Giovanni Poggi, and Luisa Verdoliva, “A PatchMatch-based Dense-field Algorithm for Video Copy-Move Detection and Localization”, IEEE 2019.

A Machine Vision Technique for Grading of Harvested Mangoes based on Maturity and Quality

ABSTRACT:

In the agricultural and food industries, proper grading of fruits is very important for increasing profitability. In this paper, a scheme is proposed for automated grading of mango (Mangifera indica L.) according to maturity level, in terms of actual days to rot, and quality attributes such as size, shape and surface defects. The proposed scheme uses intelligent machine-vision-based techniques to grade mangoes into four categories, determined on the basis of market distance and market value. In this system, video images are captured by a CCD (charge-coupled device) camera placed above a conveyor belt carrying mangoes; several image processing techniques are then applied to extract features that are sensitive to maturity and quality. Support Vector Regression (SVR) is employed to predict maturity in terms of actual days to rot, and a Multi-Attribute Decision Making (MADM) system is adopted to estimate quality from the quality attributes. Finally, a fuzzy incremental learning algorithm is used for grading based on maturity and quality. The grading accuracy achieved by the proposed system is nearly 87%, and its repeatability is found to be 100%.
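
The MADM quality step is, at its core, a weighted aggregation of normalized attributes. The sketch below shows that shape with purely illustrative weights and grade cut-offs (the paper's actual attribute scaling and decision rules are not reproduced):

```python
def madm_grade(attrs, weights, cuts=(0.8, 0.6, 0.4)):
    """Weighted-sum MADM sketch: quality attributes (e.g. size, shape
    regularity, defect-free surface fraction), each pre-scaled to [0, 1],
    are fused into one score and mapped to a grade (1 = best; grades
    beyond the last cut-off fall into the worst category)."""
    score = sum(a * w for a, w in zip(attrs, weights))
    for grade, cut in enumerate(cuts):
        if score >= cut:
            return grade + 1
    return len(cuts) + 1
```

In the full system this quality grade is combined with the SVR maturity estimate by the fuzzy incremental learning stage to produce the final four-way grading.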

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Chandra Sekhar Nandi, Bipan Tudu, and Chiranjib Koley, Member, IEEE, “A Machine Vision Technique for Grading of Harvested Mangoes based on Maturity and Quality”, IEEE 2019.

LECARM: Low-Light Image Enhancement using Camera Response Model

ABSTRACT:

Low-light image enhancement algorithms can improve the visual quality of low-light images and support the extraction of valuable information for some computer vision techniques. However, existing techniques inevitably introduce color and lightness distortions when enhancing the images. To reduce these distortions, we propose a novel enhancement framework using the response characteristics of cameras. First, we discuss how to determine a reasonable camera response model and its parameters. Then, we use illumination estimation techniques to estimate the exposure ratio for each pixel. Finally, the selected camera response model is used to adjust each pixel to the desired exposure according to the estimated exposure ratio map. Experiments show that our method obtains enhancement results with fewer color and lightness distortions compared with several state-of-the-art methods.
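
The three steps can be sketched end-to-end with a toy power-law response f(E) = E^gamma standing in for the fitted camera response model, and a crude per-pixel max-RGB illumination estimate standing in for the paper's estimator (both are assumptions of this sketch):

```python
import numpy as np

def enhance(img, gamma=0.8, eps=1e-3):
    """Sketch of the framework: estimate illumination, derive a per-pixel
    exposure ratio k, then re-expose each pixel using the response model
    identity f(k * E) = k**gamma * f(E) for f(E) = E**gamma."""
    illum = np.clip(img.max(axis=2), eps, 1.0)   # crude illumination estimate
    k = 1.0 / illum                              # exposure ratio map
    return np.clip(img * (k[..., None] ** gamma), 0.0, 1.0)
```

Because the adjustment follows the response curve rather than an arbitrary tone mapping, dark pixels are brightened while already well-exposed pixels are left essentially untouched.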

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Yurui Ren, Student Member, IEEE, Zhenqiang Ying, Student Member, IEEE, Thomas H. Li, and Ge Li, Member, IEEE, “LECARM: Low-Light Image Enhancement using Camera Response Model”, IEEE 2019.

Contrast in Haze Removal: Configurable Contrast Enhancement Model Based on Dark Channel Prior

ABSTRACT:

Conventional haze-removal methods are designed to adjust the contrast and saturation, and in so doing enhance the quality of the reconstructed image. Unfortunately, the removal of haze in this manner can shift the luminance away from its ideal value. In other words, haze removal involves a tradeoff between luminance and contrast. We reformulated the problem of haze removal as a luminance reconstruction scheme, in which an energy term is used to achieve a favorable tradeoff between luminance and contrast. The proposed method bases the luminance values for the reconstructed image on statistical analysis of haze-free images, thereby achieving contrast values superior to those obtained using other methods for a given brightness level. We also developed a novel module for the estimation of atmospheric light using the color constancy method. This module was shown to outperform existing methods, particularly when noise is taken into account. The proposed framework requires only 0.55 seconds to process a 1-megapixel image. Experimental results demonstrate that the proposed haze-removal framework conforms to our theory of contrast.
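
The dark channel prior underlying the model is easy to state in code: the per-pixel channel minimum, minimum-filtered over a local window, is near zero for haze-free scenes and is lifted by haze. The sketch below computes only this prior, not the paper's luminance-reconstruction energy or atmospheric-light module:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Dark channel of an RGB image in [0, 1]: per-pixel channel minimum
    followed by a local minimum filter over patch x patch windows."""
    mins = img.min(axis=2)
    r = patch // 2
    padded = np.pad(mins, r, mode='edge')
    out = np.empty_like(mins)
    for y in range(mins.shape[0]):
        for x in range(mins.shape[1]):
            out[y, x] = padded[y:y + patch, x:x + patch].min()
    return out
```

The gap between the observed dark channel and zero is what dark-channel methods attribute to haze, which is the starting point for the transmission and luminance estimates discussed above.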

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Ping-Juei Liu, Shi-Jinn Horng, Jzau-Sheng Lin, and Tianrui Li, “Contrast in Haze Removal: Configurable Contrast Enhancement Model Based on Dark Channel Prior”, IEEE 2019.

An Adaptive Method for Image Dynamic Range Adjustment

ABSTRACT:

In this paper, we relate the operation of image dynamic range adjustment to the following two tasks: (1) for a high dynamic range (HDR) image, its dynamic range is mapped to the available dynamic range of display devices, and (2) for a low dynamic range (LDR) image, its intensity distribution is extended to adequately utilize the full dynamic range of display devices. The common goal of both tasks is to preserve or even enhance the details and improve the visibility of scenes when matched to the available dynamic range of a display device. In this study, we propose an efficient method for image dynamic range adjustment with three adaptive steps. Firstly, according to the histogram of the luminance map separated from the given RGB image, two suitable gamma functions are adaptively selected to separately adjust the luminance of the dark and bright components. Secondly, an adaptive fusion strategy is proposed to combine the two adjusted luminance maps, balancing the enhancement of details in different regions. Thirdly, an adaptive luminance-dependent color restoration method is designed to combine the fused luminance map with the original color components, yielding more consistent color saturation between the images before and after dynamic range adjustment. Extensive experiments show that the proposed method can efficiently compress the dynamic range of HDR scenes with good contrast, clear details and high structural fidelity to the original image appearance. In addition, the proposed method also performs well when used to enhance LDR nighttime images, and greatly facilitates object (car) detection in nighttime traffic scenes.
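
The first two steps can be illustrated with fixed gammas and a trivial luminance-dependent fusion weight; the actual method selects both gammas adaptively from the histogram and uses a more elaborate fusion, so everything below is an assumption of this sketch:

```python
import numpy as np

def dual_gamma_adjust(lum, g_dark=0.5, g_bright=1.5):
    """Two-branch sketch: one gamma curve lifts the dark component,
    another restrains the bright component, and a luminance-dependent
    weight fuses the two adjusted luminance maps."""
    lifted = lum ** g_dark        # brightens shadows
    restrained = lum ** g_bright  # compresses highlights
    w = lum                       # weight toward the bright branch where lum is high
    return (1 - w) * lifted + w * restrained
```

The fused curve raises shadows, tames highlights, and keeps the endpoints fixed, which is the qualitative behaviour the adaptive fusion step is after.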

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Kai-Fu Yang, Hui Li, Hulin Kuang, Chao-Yi Li, and Yong-Jie Li, Senior Member, IEEE, “An Adaptive Method for Image Dynamic Range Adjustment”, IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2019.

A Fast Image Contrast Enhancement Algorithm Using Entropy-Preserving Mapping Prior

ABSTRACT:

Contrast enhancement is a crucial image processing step for image quality control. However, images enhanced by conventional contrast enhancement methods can have negative effects on the performance of image quality control. The most commonly observed effects are over- and under-enhancement effects on images, which cause significant loss of fine textures in images. This study developed a new contrast enhancement algorithm based on an entropy-preserving mapping prior that improves on conventional contrast enhancement methods. By creating a closed-form solution for enhancing the image contrast under this novel prior and learning the coefficients of the solution using an unsupervised learning strategy, an image’s contrast and texture can be effectively recovered. The experimental results verify that our proposed method clearly outperforms the existing state-of-the-art methods in terms of both quantitative estimation and qualitative human visual inspection.
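
A toy stand-in for the entropy-preserving prior: evaluate a few candidate gamma mappings and keep the one whose output histogram preserves the input's entropy, which is one way to avoid the over-/under-enhancement the abstract describes. The paper instead derives a closed-form mapping with learned coefficients; the candidate set here is purely illustrative:

```python
import numpy as np

def entropy(vals, bins=16):
    """Shannon entropy (bits) of a histogram over [0, 1]."""
    hist, _ = np.histogram(vals, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_preserving_gamma(img, candidates=(0.5, 0.8, 1.0, 1.25, 2.0)):
    """Among candidate gamma mappings, choose the one whose output
    histogram keeps the entropy closest to the input's."""
    h0 = entropy(img)
    best = min(candidates, key=lambda g: abs(entropy(img ** g) - h0))
    return best, img ** best
```

Entropy loss in the output histogram is precisely the signature of crushed textures, so selecting for entropy preservation keeps fine detail while still allowing contrast change.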

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core
  • Hard Disk : 120 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows 7
  • Coding Language : MATLAB
  • Tool : MATLAB R2013a / 2018

REFERENCE:

Bo-Hao Chen, Member, IEEE, Yu-Ling Wu, and Ling-Feng Shi, “A Fast Image Contrast Enhancement Algorithm Using Entropy-Preserving Mapping Prior”, IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2019.