A Complete Processing Chain for Shadow Detection and Reconstruction in VHR Images


ABSTRACT

To improve image quality, shadows should be removed from images. The presence of shadows in very high resolution (VHR) images can represent a serious obstacle for their full exploitation. This paper proposes to face this problem as a whole through a complete processing chain, which relies on various advanced image processing and pattern recognition tools. The first key point of the chain is that shadow areas are not only detected but also classified, to allow their customized compensation. The detection and classification tasks are implemented by means of the state-of-the-art support vector machine (SVM) approach.
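As a hedged illustration of this detection/classification idea, the following minimal Python sketch trains an SVM on toy pixel features. The use of scikit-learn's SVC and plain RGB values is an assumption for illustration only; the paper's actual feature extraction, hierarchical scheme, and implementation are not reproduced here.

```python
# Illustrative sketch: SVM-based shadow detection on pixel features.
# RGB intensities are a placeholder feature set, not the paper's features.
import numpy as np
from sklearn.svm import SVC

# Toy training data: rows are pixels, columns are RGB values in [0, 1].
X_train = np.array([[0.05, 0.06, 0.08],   # dark pixel (shadow)
                    [0.10, 0.09, 0.12],   # dark pixel (shadow)
                    [0.70, 0.65, 0.60],   # bright pixel (non-shadow)
                    [0.80, 0.75, 0.72]])  # bright pixel (non-shadow)
y_train = np.array([1, 1, 0, 0])          # 1 = shadow, 0 = non-shadow

clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)

# Classify every pixel of an image of shape (H, W, 3).
image = np.random.rand(4, 4, 3)
shadow_mask = clf.predict(image.reshape(-1, 3)).reshape(4, 4)
print(shadow_mask)
```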


EXISTING SYSTEM

The presence of shadows in very high resolution (VHR) images can represent a serious obstacle for their full exploitation. Existing compensation techniques, such as gamma correction, histogram matching, and linear correlation [16], detect shadow areas without classifying them, so the compensation cannot be customized to the surface hidden in shadow.

DISADVANTAGE

  • High spatial resolution also entails drawbacks, such as the unwanted presence of shadows, particularly in urban areas, where large changes in surface elevation cast correspondingly long shadows.
  • No image enhancement is performed.

PROPOSED SYSTEM

In the proposed approach, an alternative method solves both the detection and the reconstruction of shadow areas. Shadow detection is performed through a hierarchical supervised classification scheme, while the proposed reconstruction relies on a linear correlation function, which exploits the information returned by the classification. The whole processing chain also includes two important capabilities: 1) a rejection mechanism to limit reconstruction errors as much as possible and 2) explicit handling of shadow borders. Earlier compensation techniques include 1) gamma correction, 2) histogram matching, and 3) linear correlation [16]. In [2], the authors assume that the surface texture does not radically change when it is shaded. Accordingly, to remove shadows, they perform a contextual texture analysis between a shadow segment and its neighbours. Once the kind of surface under the shadow is known, a local gamma transformation restores the shadow area.
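The linear-correlation compensation can be illustrated as follows. This is a minimal sketch assuming the common form of the correction, in which shadow pixel statistics are mapped onto those of sunlit pixels of the same surface class; the exact function used in the paper may differ.

```python
# Minimal sketch of linear-correlation compensation: shadow pixels are
# rescaled so their mean/std match the corresponding sunlit class (per band).
import numpy as np

def linear_correlation_correct(shadow_pixels, sunlit_pixels):
    """Map shadow pixel values onto the statistics of sunlit pixels
    of the same surface class."""
    mu_s, sigma_s = shadow_pixels.mean(), shadow_pixels.std()
    mu_n, sigma_n = sunlit_pixels.mean(), sunlit_pixels.std()
    return (sigma_n / sigma_s) * (shadow_pixels - mu_s) + mu_n

shadow = np.array([20.0, 25.0, 22.0, 30.0])      # dark road pixels (toy)
sunlit = np.array([120.0, 140.0, 130.0, 135.0])  # sunlit road pixels (toy)
print(linear_correlation_correct(shadow, sunlit))
```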

ADVANTAGE

  • A quality-check mechanism is integrated to reduce subsequent mis-reconstruction problems.
  • Image enhancement is performed.

MODULES

> Read and write image

   We use the imread and imwrite functions to read an image from the user and write the result back. The input image is obtained from the user and assigned to a variable; the user can either type the image name or browse and select a file.
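A minimal equivalent of this step in Python with OpenCV is sketched below (the project itself uses MATLAB's imread/imwrite; the file names here are placeholders):

```python
# Sketch of the read/write step using OpenCV; "input.jpg" and "output.jpg"
# are placeholder file names.
import cv2

image = cv2.imread("input.jpg")   # returns None if the file is missing
if image is None:
    raise FileNotFoundError("input.jpg not found")
print("loaded image of shape", image.shape)
cv2.imwrite("output.jpg", image)  # write the (possibly processed) image
```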

> Apply morphological image processing

   We apply morphological operations to the image, essentially to clear noise; this step belongs to post-processing. The transition between shadow and non-shadow areas can raise problems such as boundary ambiguity, colour inconstancy, and illumination variation, so these effects are estimated. In difficult regions we also estimate the light sources. Border reconstruction is then performed based on the shadow values, and this process is repeated iteratively.
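A hedged sketch of such a morphological clean-up on a binary shadow mask, using OpenCV's opening and closing (the kernel size is an illustrative assumption):

```python
# Sketch of morphological clean-up on a binary shadow mask:
# opening removes small noisy blobs, closing fills small holes.
import numpy as np
import cv2

mask = (np.random.rand(64, 64) > 0.5).astype(np.uint8) * 255  # toy mask
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)     # drop speckles
cleaned = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)  # fill holes
```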

> Remove Shadow

   Shadows in an image can be divided into two types: cast shadows and self shadows. We delete the values of the cast shadow, while the self shadow is part of the object itself, so before removing a shadow we must know its type.

> Image Reconstruction

   After removing the cast shadow, we have to recover the original image. The image is reconstructed from the matrix values by applying adaptive morphological filters.
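A minimal sketch of this reconstruction step is given below. The gain/offset compensation and the border smoothing only approximate the project's adaptive morphological filtering; all parameters and the function name are illustrative assumptions.

```python
# Sketch: compensate pixels inside the cast-shadow mask, then smooth the
# shadow border to hide the transition. The border band is found with a
# dilation minus an erosion of the mask (an assumed approximation).
import numpy as np
import cv2

def reconstruct(image, shadow_mask, gain, offset):
    out = image.astype(np.float32)
    out[shadow_mask > 0] = out[shadow_mask > 0] * gain + offset  # compensate
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    border = cv2.dilate(shadow_mask, kernel) - cv2.erode(shadow_mask, kernel)
    blurred = cv2.GaussianBlur(out, (5, 5), 0)
    out[border > 0] = blurred[border > 0]  # smooth only the border band
    return np.clip(out, 0, 255).astype(np.uint8)
```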

Online Intrusion Alert Aggregation with Generative Data Stream Modeling


ABSTRACT:

Alert aggregation is an important subtask of intrusion detection. The goal is to identify and to cluster different alerts—produced by low-level intrusion detection systems, firewalls, etc.—belonging to a specific attack instance which has been initiated by an attacker at a certain point in time. Thus, meta-alerts can be generated for the clusters that contain all the relevant information whereas the amount of data (i.e., alerts) can be reduced substantially. Meta-alerts may then be the basis for reporting to security experts or for communication within a distributed intrusion detection system. We propose a novel technique for online alert aggregation which is based on a dynamic, probabilistic model of the current attack situation. Basically, it can be regarded as a data stream version of a maximum likelihood approach for the estimation of the model parameters. With three benchmark data sets, we demonstrate that it is possible to achieve reduction rates of up to 99.96 percent while the number of missing meta-alerts is extremely low. In addition, meta-alerts are generated with a delay of typically only a few seconds after observing the first alert belonging to a new attack instance.


OUR CONTRIBUTION:

The authors proposed methods for aggregating many kinds of intrusion alerts. As our contribution, we make the system more efficient at identifying intrusion alerts, and we also extend this work by sending the alerts as messages to the network administrator who governs the network or intrusion detection system.

EXISTING SYSTEM 

  • Most existing IDS are optimized to detect attacks with high accuracy. However, they still have various disadvantages that have been outlined in a number of publications and a lot of work has been done to analyze IDS in order to direct future research.
  • Besides others, one drawback is the large amount of alerts produced.
  • Alerts are given only in system logs.
  • Existing IDS do not have a general framework that can be customized by adding domain-specific knowledge as per the specific requirements of users or network administrators.

PROPOSED SYSTEM

  • Online Intrusion Alert Aggregation with Generative Data Stream Modeling is a generative modeling approach using probabilistic methods. Assuming that attack instances can be regarded as random processes “producing” alerts, we aim at modeling these processes using approximate maximum likelihood parameter estimation techniques. Thus, the beginning as well as the completion of attack instances can be detected (a toy sketch of this idea is given after this list).
  • It is a data stream approach, i.e., each observed alert is processed only a few times. Thus, it can be applied online and under harsh timing constraints.
  • In the proposed scheme of Online Intrusion Alert Aggregation with Generative Data Stream Modeling, we extend our idea by sending intrusion alerts to the mobile. This makes the process easier and more convenient.

 

  • Online Intrusion Alert Aggregation with Generative Data Stream Modeling does not degrade system performance, as individual layers are independent and are trained with only a small number of features, thereby resulting in an efficient system.
  • Online Intrusion Alert Aggregation with Generative Data Stream Modeling is easily customizable and the number of layers can be adjusted depending upon the requirements of the target network. Our framework is not restrictive in using a single method to detect attacks. Different methods can be seamlessly integrated in our framework to build effective intrusion detectors.
  • Our framework has the advantage that the type of attack can be inferred directly from the layer at which it is detected. As a result, specific intrusion response mechanisms can be activated for different attacks.
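As a toy sketch of the generative data-stream idea referenced in the list above: each attack instance is modeled as a Gaussian "producer" of alert feature vectors, and each incoming alert is processed exactly once, either assigned to the most likely instance or spawning a new one. The class name, threshold, and update rule are illustrative assumptions, not the paper's exact estimator.

```python
# Toy sketch of online alert aggregation with a generative stream model.
import numpy as np

class OnlineAlertAggregator:
    def __init__(self, new_cluster_threshold=1e-4):
        self.clusters = []  # each: dict with count n, mean, var
        self.threshold = new_cluster_threshold

    def _likelihood(self, c, x):
        # Diagonal-Gaussian density of alert x under cluster c.
        var = c["var"] + 1e-6
        return np.prod(np.exp(-0.5 * (x - c["mean"]) ** 2 / var)
                       / np.sqrt(2 * np.pi * var))

    def observe(self, x):
        """Process one alert (a feature vector) exactly once."""
        x = np.asarray(x, dtype=float)
        if self.clusters:
            scores = [self._likelihood(c, x) for c in self.clusters]
            best = int(np.argmax(scores))
            if scores[best] >= self.threshold:
                c = self.clusters[best]
                c["n"] += 1
                delta = x - c["mean"]
                c["mean"] += delta / c["n"]  # online mean update
                c["var"] += (delta * (x - c["mean"]) - c["var"]) / c["n"]
                return best
        # No cluster is likely enough: a new attack instance begins.
        self.clusters.append({"n": 1, "mean": x.copy(),
                              "var": np.ones_like(x)})
        return len(self.clusters) - 1
```

For example, `OnlineAlertAggregator().observe([3.1, 0.2])` creates the first cluster and returns its index 0; later alerts near that point are merged into the same meta-alert.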
HARDWARE REQUIREMENTS

  • System : Pentium IV 2.4 GHz
  • Hard Disk : 40 GB
  • Monitor : 15'' VGA colour
  • Mouse : Logitech
  • RAM : 256 MB
  • Keyboard : 110 keys enhanced

SOFTWARE REQUIREMENTS

  • Operating system : Windows XP Professional
  • Front End : JAVA, RMI, JDBC, Swing
  • Tool : Eclipse 3.3

MODULES 

  • Server
  • Client
  • DARPA DataSet
  • Mobile
  • Attack Simulation

Server

            Server module is the main module for this project. This module acts as the Intrusion Detection System. It consists of four layers, viz. the sensor layer (which detects the user/client etc.), the detection layer, the alert processing layer, and the reaction layer. In addition, there is also a Message Log, where all the alerts and messages are stored for reference. This Message Log can also be saved as a log file for future reference in any network environment.

Client

            Client module is developed for testing the Intrusion Detection System. In this module the client can enter only with a valid user name and password. If an intruder enters with guessed passwords, an alert is given to the server and the intruder is blocked. Even a valid user with the correct user name and password is limited in usage: for example, if the valid user logs in a repeated number of times, the client is blocked and an alert is sent to the admin. For process-level intrusion, each client is given specific processes only. For example, a client may have permission only for process P1; if the client tries to perform more than these processes, the client is blocked and an alert is given by the Intrusion Detection System. The client can also send data: whenever data is sent, the Intrusion Detection System checks the file, and if the file is too large the transfer is restricted; otherwise the data is sent.
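A hedged sketch of the client-side checks just described follows; the limits (3 attempts, 1 MB) and the helper names (`check_credentials`, `alert_server`) are illustrative assumptions, not the project's Java code.

```python
# Sketch of the client-side intrusion rules: repeated logins are blocked
# after a limit, and oversized transfers are refused.
MAX_LOGIN_ATTEMPTS = 3       # assumed limit
MAX_FILE_SIZE = 1_000_000    # assumed limit, in bytes

attempts = {}

def try_login(user, password, check_credentials, alert_server):
    attempts[user] = attempts.get(user, 0) + 1
    if attempts[user] > MAX_LOGIN_ATTEMPTS:
        alert_server(f"client {user} blocked: repeated logins")
        return False
    if not check_credentials(user, password):
        alert_server(f"intrusion alert: bad password for {user}")
        return False
    return True

def send_data(user, payload, alert_server):
    if len(payload) > MAX_FILE_SIZE:
        alert_server(f"transfer by {user} restricted: file too large")
        return False
    return True  # the data would be sent here
```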

DARPA Dataset 

This module is integrated in the Server module. This is an offline type of testing for intrusions. In this module, the DARPA data set is used to check the technique of Online Intrusion Alert Aggregation with Generative Data Stream Modeling. The DARPA data set is downloaded and separated according to each layer. An instance of the DARPA data set is then selected using the open-file dialog box, and whenever a data set is chosen, the Intrusion Detection System works based on the conditions specified.

Mobile 

            This module is developed using J2ME. The traditional system uses the message log for storing the alerts. In this system, the system admin or user can also get the alerts on their mobile: whenever an alert message is received in the message log of the server, the mobile receives the alert message too.

Attack Simulation 

            In this module, an attack simulation is built so that we can test the system ourselves. Attacks are classified and simulated here. Whenever an attack is launched, the Intrusion Detection System must be capable of detecting it, so our system is designed to detect such attacks. For example, if an IP trace attack is launched, the Intrusion Detection System must detect it and kill or block the process.

ALGORITHM FOR THE PROPOSED IDS 

Misuse and Anomaly Detection Algorithm:

Step 1: Select the ‘n’ layers needed for the whole IDS.

Step 2: Build Sensor Layer to detect Network and Host Systems.

Step 3: Build Detection Layer based on Misuse and Anomaly detection technique.

Step 4: Classify various types of alerts (for example, an alert for a system-level intrusion or a process-level intrusion).

Step 5: Code the system for detecting various types of attacks and alerts for respective attacks.

Step 6: Integrate the system with Mobile device to get alerts from the proposed IDS.

Step 7: Specify for each type of alert which category it falls under, so that the user can easily recognize the attack type.

Step 8: Build Reaction layer with various options so that administrator/user can have various options to select or react on any type of intrusion.

Step 9: Test the system using Attack Simulation module, by sending different attacks to the proposed IDS.

Step 10: Build a log file, so that all the reports generated can be saved for future references.
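A minimal Python skeleton tying Steps 1-10 together is sketched below; the layer contents, thresholds, and the `notify_mobile` hook are illustrative assumptions rather than the project's Java implementation.

```python
# Skeleton of the layered IDS outlined in Steps 1-10.
def sensor_layer(event):
    # Step 2: observe network/host events.
    return {"source": event.get("ip"), "raw": event}

def detection_layer(obs):
    # Step 3: misuse detection (known signatures) and anomaly detection
    # (unusually large transfers); signatures and the size limit are toys.
    if obs["raw"].get("signature") in {"ip-trace", "port-scan"}:
        return {"type": "system-level", "obs": obs}   # Steps 4 and 7
    if obs["raw"].get("bytes", 0) > 1_000_000:
        return {"type": "process-level", "obs": obs}
    return None

def reaction_layer(alert, log, notify_mobile):
    # Steps 8-10: react, push to the mobile device, and keep a log.
    log.append(alert)
    notify_mobile(alert)

def run_ids(events, notify_mobile=print):
    log = []
    for event in events:
        alert = detection_layer(sensor_layer(event))
        if alert is not None:
            reaction_layer(alert, log, notify_mobile)
    return log

# Step 9: a simulated attack exercises the chain end to end.
run_ids([{"ip": "10.0.0.5", "signature": "ip-trace"}])
```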
REFERENCES:
Alexander Hofmann and Bernhard Sick, “Online Intrusion Alert Aggregation with Generative Data Stream Modeling”, IEEE Transactions on Dependable and Secure Computing, Vol. 8, No. 2, March – April 2011.

A New Scalable Hybrid Routing Protocol for VANETs


ABSTRACT:

Vehicular ad hoc networks (VANETs) are highly mobile wireless networks that are designed to support vehicular safety, traffic monitoring, and other commercial applications. Within VANETs, vehicle mobility will cause the communication links between vehicles to frequently be broken. Such link failures require a direct response from the routing protocols, leading to a potentially excessive increase in the routing overhead and degradation in network scalability. In this paper, we propose a new hybrid location-based routing protocol that is particularly designed to address this issue. Our new protocol combines features of reactive routing with location-based geographic routing in a manner that efficiently uses all the location information available. The protocol is designed to gracefully exit to reactive routing as the location information degrades. We show through analysis and simulation that our protocol is scalable and has an optimal overhead, even in the presence of high location errors. Our protocol provides an enhanced yet pragmatic location-enabled solution that can be deployed in all VANET-type environments.

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • Ram : 1 GB

SOFTWARE REQUIREMENTS:

  • Operating system : Windows XP/UBUNTU.
  • Implementation : NS2
  • NS2 Version : 2.28
  • Front End : OTCL (Object Oriented Tool Command  Language)
  • Tool : Cygwin (To simulate in Windows OS)

REFERENCE:

Mohammad Al-Rabayah and Robert Malaney, Member, IEEE, “A New Scalable Hybrid Routing Protocol for VANETs”, IEEE Transactions on Vehicular Technology, Vol. 61, No. 6, July 2012.

A Cluster Allocation and Routing Algorithm based on Node Density for Extending the Lifetime of Wireless Sensor Networks


ABSTRACT:

The electricity of sensor nodes in wireless sensor networks is very limited, so it is an important research topic to deploy the sensor nodes and cooperate with an efficient routing algorithm for extending the network lifetime. In the related research, the LEACH routing algorithm randomly selects cluster heads in each round to form a cluster network, which may cause additional power consumption and inability to maintain the optimal routes for data transmission. The cluster allocation and routing algorithm proposed in this study is based on the cluster architecture of LEACH, and the objective is to produce clusters with more sensor nodes to balance the energy consumption of cluster heads. For indirect-transmission routing algorithms, the sensor nodes near the base station may consume more energy due to a larger amount of data transmission. Therefore, this study proposed to increase the node density near the base station during deployment to compensate for the requirement of high energy consumption. The experimental results show that the proposed algorithm based on node density distribution can efficiently increase the lifetime of wireless sensor networks.
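As a hedged illustration of the LEACH-style round that the proposed algorithm builds on, the following Python sketch randomly elects cluster heads and assigns the remaining nodes to the nearest head. The election probability p and the field size are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch of LEACH-style round-based cluster-head selection.
import random, math

def select_cluster_heads(nodes, p=0.1):
    # Each node becomes a cluster head with probability p this round.
    heads = [n for n in nodes if random.random() < p]
    return heads or [random.choice(nodes)]  # ensure at least one head

def assign_to_clusters(nodes, heads):
    # Every node joins the nearest cluster head.
    clusters = {id(h): [] for h in heads}
    for n in nodes:
        nearest = min(heads, key=lambda h: math.dist(n, h))
        clusters[id(nearest)].append(n)
    return clusters

nodes = [(random.random() * 100, random.random() * 100) for _ in range(50)]
heads = select_cluster_heads(nodes)
clusters = assign_to_clusters(nodes, heads)
```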

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • Ram : 1 GB

SOFTWARE REQUIREMENTS:

  • Operating system : Windows XP/UBUNTU.
  • Implementation : NS2
  • NS2 Version : 2.28
  • Front End : OTCL (Object Oriented Tool Command  Language)
  • Tool : Cygwin (To simulate in Windows OS)

REFERENCE:

Bo-Si Lee, Hao-Wei Lin, Wernhuar Tarng, “A Cluster Allocation and Routing Algorithm based on Node Density for Extending the Lifetime of Wireless Sensor Networks”, IEEE, 2012.

 

A Block-Based Pass-Parallel SPIHT Algorithm


ABSTRACT:

Set-partitioning in hierarchical trees (SPIHT) is a widely used compression algorithm for wavelet-transformed images. One of its main drawbacks is a slow processing speed due to its dynamic processing order that depends on the image contents. To overcome this drawback, this paper presents a modified SPIHT algorithm called block-based pass-parallel SPIHT (BPS). BPS decomposes a wavelet-transformed image into 4 × 4 blocks and simultaneously encodes all the bits in a bit-plane of a 4 × 4 block. To exploit parallelism, BPS reorganizes the three passes of the original SPIHT algorithm and then encodes/decodes the reorganized three passes in a parallel and pipelined manner. The precalculation of the stream length of each pass enables the parallel and pipelined execution of these three passes by not only an encoder but also a decoder. The modification of the processing order slightly degrades the compression efficiency. Experimental results show that the peak signal-to-noise ratio loss by BPS is between approximately 0.23 and 0.59 dB when compared to the original SPIHT algorithm. Both an encoder and a decoder are implemented in hardware that can process 120 million samples per second at an operating clock frequency of 100 MHz. This processing speed allows a video of size 1920 × 1080 in the 4:2:2 format to be processed at a rate of 30 frames/s. The gate count of the hardware is about 43.9K.

PROJECT OUTPUT VIDEO: (Click the below link to see the project output video):

EXISTING SYSTEM:

Set-partitioning in hierarchical trees (SPIHT) is a widely used compression algorithm for wavelet-transformed images. The original SPIHT algorithm processes wavelet coefficients in a dynamic order that depends on the values of the coefficients. Thus, it is not easy to process multiple coefficients in parallel, and consequently it is difficult to improve the throughput of the original SPIHT. Its main drawback is therefore a slow processing speed due to this content-dependent processing order.

PROPOSED SYSTEM:

To overcome this drawback, this paper presents a modified SPIHT algorithm called block-based pass-parallel SPIHT (BPS). BPS decomposes a wavelet-transformed image into 4 × 4 blocks and simultaneously encodes all the bits in a bit-plane of a 4 × 4 block.
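A minimal Python sketch of the block decomposition BPS starts from is given below: the coefficient image is split into 4 × 4 blocks and one bit-plane of each block is extracted, so that its 16 bits could be handled simultaneously. The toy 8 × 8 array and the 8-bit value range are assumptions for illustration.

```python
# Sketch: split a (toy) wavelet-coefficient image into 4x4 blocks and
# extract one bit-plane per block.
import numpy as np

coeffs = np.random.randint(0, 256, size=(8, 8))  # toy coefficient image

def bitplane_of_block(image, row, col, plane):
    """Return the `plane`-th bit of every coefficient in one 4x4 block."""
    block = image[row:row + 4, col:col + 4]
    return (block >> plane) & 1

# All 4x4 blocks of the most significant bit-plane (plane 7 for 8-bit data).
planes = [bitplane_of_block(coeffs, r, c, 7)
          for r in range(0, 8, 4) for c in range(0, 8, 4)]
```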

HARDWARE REQUIREMENTS

  • Processor : Any Processor above 500 MHz.
  • Ram : 128 MB.
  • Hard Disk : 10 GB.
  • Compact Disk : 650 MB.
  • Input device : Standard Keyboard and Mouse.
  • Output device : VGA and High Resolution Monitor.

SOFTWARE REQUIREMENTS

  • Operating System :  Windows XP.
  • Coding Language :  MATLAB

REFERENCE:

Yongseok Jin, Member, IEEE, and Hyuk-Jae Lee, “A Block-Based Pass-Parallel SPIHT Algorithm”, IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 22, NO. 7, JULY 2012.