Secure Data Aggregation in Wireless Sensor Networks: Filtering out the Attacker’s Impact

ABSTRACT:

Wireless sensor networks (WSNs) are increasingly used in many applications, such as volcano and fire monitoring, urban sensing, and perimeter surveillance. In a large WSN, in-network data aggregation (i.e., combining partial results at intermediate nodes during message routing) significantly reduces the amount of communication overhead and energy consumption. The research community proposed a loss-resilient aggregation framework called synopsis diffusion, which uses duplicate insensitive algorithms on top of multipath routing schemes to accurately compute aggregates (e.g., predicate count or sum). However, this aggregation framework does not address the problem of false sub-aggregate values contributed by compromised nodes. This attack may cause large errors in the aggregate computed at the base station, which is the root node in the aggregation hierarchy. In this paper, we make the synopsis diffusion approach secure against the above attack launched by compromised nodes. In particular, we present an algorithm to enable the base station to securely compute predicate count or sum even in the presence of such an attack. Our attack-resilient computation algorithm computes the true aggregate by filtering out the contributions of compromised nodes in the aggregation hierarchy. Extensive analysis and simulation study show that our algorithm outperforms other existing approaches.

PROJECT OUTPUT VIDEO: (Click the link below to see the project output video):

EXISTING SYSTEM:

  • To address the communication loss problem of tree-based algorithms, an aggregation framework called synopsis diffusion was designed, which computes Count and Sum using a ring topology. Very similar algorithms have been proposed independently. These works use duplicate-insensitive algorithms for computing aggregates, building on an algorithm for counting distinct elements in a multi-set.
  • Several secure aggregation algorithms have been proposed assuming that the BS is the only aggregator node in the network. These works did not consider in-network aggregation. Only recently has the research community paid attention to the security issues of hierarchical aggregation.

DISADVANTAGES OF EXISTING SYSTEM:

  • A sensor node is limited in terms of computation capability and energy reserves.
  • The method is prohibitively expensive in terms of communication overhead.
  • The possibility of node compromise introduces more challenges because most of the existing in-network aggregation algorithms have no provisions for security.
  • A compromised node might attempt to thwart the aggregation process by launching several attacks, such as eavesdropping, jamming, message dropping, message fabrication, and so on.

PROPOSED SYSTEM:

  • This paper focuses on a subclass of these attacks in which the adversary aims to cause the BS to derive an incorrect aggregate. By relaying a false sub-aggregate to the parent node, a compromised node may contribute a large amount of error to the aggregate.
  • In this paper, we design an algorithm to securely compute aggregates, such as Count and Sum, despite the falsified sub-aggregate attack. In particular, our algorithm, which we call the attack-resilient computation algorithm, consists of two phases.
  • The main idea is as follows: (i) In the first phase, the BS derives a preliminary estimate of the aggregate based on minimal authentication information received from the nodes. (ii) In the second phase, the BS demands more authentication information from only a subset of nodes, where this subset is determined by the estimate from the first phase. At the end of the second phase, the BS can (locally) filter out the false contributions of the compromised nodes from the aggregate.
  • The key observation which we exploit to minimize the communication overhead is that to verify the correctness of the final synopsis (representing the aggregate of the whole network) the BS does not need to receive authentication messages from all of the nodes.
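
As an illustrative sketch of the two phases (the project itself is implemented in NS2, and the exact second-phase rule differs in the paper), the first-phase estimate can be read off a Flajolet-Martin-style Count synopsis; the bit cutoff used below to pick the second-phase subset is a hypothetical simplification:

```python
import math

PHI = 0.77351  # standard Flajolet-Martin correction constant

def estimate_count(synopsis_bits):
    """Phase 1: estimate the aggregate from the OR-ed Count synopsis.

    synopsis_bits: set of bit positions set to '1'.
    The estimate is 2**r / PHI, where r is the lowest unset bit position.
    """
    r = 0
    while r in synopsis_bits:
        r += 1
    return (2 ** r) / PHI

def second_phase_bits(synopsis_bits, estimate, margin=2):
    """Phase 2 (hypothetical rule): bits well below log2(estimate) are set
    by many nodes and need no further authentication; only the remaining
    '1' bits are queried for additional MACs."""
    cutoff = max(0, int(math.log2(max(estimate, 1.0))) - margin)
    return {b for b in synopsis_bits if b >= cutoff}
```

For example, a synopsis with bits {0, 1, 2} set yields an estimate of about 10.3, and only the higher '1' bits are re-queried in the second phase.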

ADVANTAGES OF PROPOSED SYSTEM:

  • We make the synopsis diffusion approach secure against the falsified sub-aggregate attack.
  • Our algorithm outperforms other existing approaches in several metrics, such as communication overhead. We note that the communication overhead of our algorithm might be higher if the assumption that compromised nodes are uniformly distributed does not hold.

SYSTEM ARCHITECTURE:

MODULES:

  • Setting up Network Model
  • Falsifying the local value
  • Computing Sum Despite Attacks
  • Performance Analysis

MODULES DESCRIPTION:

Setting up Network Model

Our first module sets up the network model. We consider a large-scale, homogeneous sensor network consisting of resource-constrained sensor nodes. Analogous to previous distributed detection approaches, we assume that an identity-based public-key cryptography facility is available in the sensor network. Prior to deployment, each legitimate node is allocated a unique ID and a corresponding private key by a trusted third party. The public key of a node is its ID, which is the essence of an identity-based cryptosystem. Consequently, no node can lie to others about its identity. Moreover, anyone is able to verify messages signed by a node using the identity-based key. The source nodes in our problem formulation serve as storage points which cache the data gathered by other nodes and periodically transmit it to the sink in response to user queries. Such a network architecture is consistent with the design of storage-centric sensor networks.

Falsifying the local value:

A compromised node C can falsify its own sensor reading with the goal of influencing the aggregate value. We assume that if a node is compromised, all the information it holds will be compromised. We conservatively consider that all malicious nodes can collude or can be under the control of a single attacker. We use a Byzantine fault model, where the adversary can inject any message through the compromised nodes. Compromised nodes may behave in arbitrarily malicious ways, which means that the sub-aggregate of a compromised node can be arbitrarily generated. However, we assume that the attacker does not launch DoS attacks, e.g., the multi-hop flooding attacks with the goal of making the whole system unavailable.

Computing Sum Despite Attacks:

In this module, we develop an attack-resilient protocol which enables the BS to compute the aggregate despite the presence of the attack. We observe that, in general, the BS can verify the final synopsis if it receives one valid MAC for each ‘1’ bit in the synopsis. In fact, to verify a particular ‘1’ bit, say bit i, the BS does not need to receive authentication messages from all of the nodes which contribute to bit i. As an example, more than half of the nodes are likely to contribute to the leftmost bit of the synopsis, while to verify this bit, the BS needs to receive a MAC from only one of these nodes.
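
A minimal sketch of this observation, with hypothetical per-node symmetric keys shared with the BS (the paper's actual key setup and synopsis generation differ in detail):

```python
import hashlib
import hmac
import random

# Hypothetical symmetric keys each node shares with the base station.
KEYS = {nid: hashlib.sha256(b"node-key-%d" % nid).digest() for nid in range(100)}

def node_bit(nid, width=16):
    """Each node contributes one bit, chosen geometrically as in
    Flajolet-Martin Count synopses (deterministic here for the sketch)."""
    rnd = random.Random(nid)
    i = 0
    while i < width - 1 and rnd.random() < 0.5:
        i += 1
    return i

def make_mac(nid, bit):
    return hmac.new(KEYS[nid], b"bit:%d" % bit, hashlib.sha256).digest()

# One MAC per '1' bit suffices: keep only the first contributor seen.
received = {}
for nid in range(100):
    b = node_bit(nid)
    received.setdefault(b, (nid, make_mac(nid, b)))

def bs_verify(received):
    """BS accepts a '1' bit iff at least one valid MAC vouches for it."""
    return {bit for bit, (nid, tag) in received.items()
            if hmac.compare_digest(tag, make_mac(nid, bit))}
```

Each '1' bit thus costs the BS a single MAC verification, regardless of how many nodes contributed to it.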

Performance Analysis

For the proposed system protocol, we use the following specific measurements to evaluate its performance:

  • Deviation of Estimate
  • Number of (Unique) MACs
  • Average Number of Bits Sent per Node

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium IV 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15’’ VGA Colour.
  • Mouse :
  • RAM : 512 MB.

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows XP/7/LINUX.
  • Implementation : NS2
  • NS2 Version : 2.28
  • Front End : OTCL (Object Oriented Tool Command Language)
  • Tool : Cygwin (To simulate in Windows OS)

REFERENCE:

Sankardas Roy, Member, IEEE, Mauro Conti, Member, IEEE, Sanjeev Setia, and Sushil Jajodia, Fellow, IEEE, “Secure Data Aggregation in Wireless Sensor Networks: Filtering out the Attacker’s Impact”, IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, VOL. 9, NO. 4, APRIL 2014.

A Model Approach to the Estimation of Peer-to-Peer Traffic Matrices

ABSTRACT:

Peer-to-Peer (P2P) applications have witnessed an increasing popularity in recent years, which brings new challenges to network management and traffic engineering (TE). As basic input information, P2P traffic matrices are of significant importance for TE. Because of the excessively high cost of direct measurement, many studies aim to model and estimate general traffic matrices, but few focus on P2P traffic matrices. In this paper, we propose a model to estimate P2P traffic matrices in operational networks. Important factors are considered, including the number of peers, the localization ratio of P2P traffic, and the network distance. Here, the distance can be measured with AS hop counts or geographic distance. To validate our model, we evaluate its performance using traffic traces collected from both the real P2P video-on-demand (VoD) and file-sharing applications. Evaluation results show that the proposed model outperforms the other two typical models for the estimation of the general traffic matrices in several metrics, including spatial and temporal estimation errors, stability in the cases of oscillating and dynamic flows, and estimation bias. To the best of our knowledge, this is the first research on P2P traffic matrices estimation. P2P traffic matrices, derived from the model, can be applied to P2P traffic optimization and other TE fields.

PROJECT OUTPUT VIDEO: (Click the link below to see the project output video):

EXISTING SYSTEM:

Researchers have proposed a variety of methods and models in recent years to make estimation more convenient and precise. Both the methods and the models are well summarized in the literature. These works mainly focus on estimating matrices for general traffic, regardless of the type of traffic carried over the network.

DISADVANTAGES OF EXISTING SYSTEM:

The large volume of P2P traffic significantly increases the load on the Internet, making networks more vulnerable to congestion and failure and bringing new challenges to the efficiency and fairness of networks.

Existing models designed for general traffic (e.g., the gravity model) fail to capture the features of P2P traffic, leading to undesirable estimation errors for P2P traffic.

PROPOSED SYSTEM:

In this paper, we propose a model to estimate P2P traffic matrices based on a close analysis of the traffic characteristics in P2P systems. To capture the critical properties of the P2P traffic, we take the following physically meaningful factors into consideration. Firstly, the number of peers is considered because, intuitively, networks with more peers might have larger volumes of P2P traffic. Another factor is the traffic localization ratio, which covers the internally exchanged portion of P2P traffic. Last but not least, the distance between different networks is also considered, which can precisely reflect the peer selection strategy of the concerned system.

Using real P2P traffic datasets derived from a P2P video on-demand (VoD) system and a P2P file-sharing application, we explore how parameters in the P2P model affect the estimation accuracy. To the best of our knowledge, this is the first work that deals with the estimation of P2P traffic matrices. Therefore, we also evaluate the estimation accuracy of our model through a comparison with two typical models proposed for general traffic matrices, namely the gravity model and the independent connection (IC) model. Evaluation results show that the newly proposed P2P model outperforms the other two models in several metrics, including spatial and temporal estimation errors, stability in the cases of oscillating and dynamic flows and estimation bias.
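
The three factors can be combined into a simple gravity-style sketch; the functional form below (traffic share linear in peer counts, power-law distance decay, and a single localization ratio alpha) is an illustrative assumption, not the paper's exact model:

```python
def p2p_traffic_matrix(peers, dist, total_traffic, alpha=0.6, beta=1.0):
    """Estimate a P2P traffic matrix T[i][j] between N networks.

    peers[i]   -- number of peers in network i
    dist[i][j] -- AS-hop (or geographic) distance, dist[i][j] > 0 for i != j
    alpha      -- localization ratio: fraction of traffic kept internal
    beta       -- distance-decay exponent of the peer-selection strategy
    """
    n = len(peers)
    total_peers = sum(peers)
    T = [[0.0] * n for _ in range(n)]
    for i in range(n):
        out_i = total_traffic * peers[i] / total_peers  # traffic share of i
        T[i][i] = alpha * out_i                          # internal portion
        # Split the external portion gravity-style: more peers, less distance.
        w = [peers[j] / dist[i][j] ** beta if j != i else 0.0 for j in range(n)]
        s = sum(w)
        for j in range(n):
            if j != i and s > 0:
                T[i][j] = (1 - alpha) * out_i * w[j] / s
    return T
```

Setting beta = 0 removes the distance preference, and alpha = 0 reduces the model to a plain gravity split.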

ADVANTAGES OF PROPOSED SYSTEM:

We argue that a model designed especially for estimating P2P traffic is needed and greatly useful. Existing models designed for general traffic (e.g., the gravity model ) fail to capture the features of P2P traffic, leading to undesirable estimation errors for P2P traffic.

SYSTEM ARCHITECTURE:

BLOCK DIAGRAM:

MODULES:

  • Neighbor selection
  • Data request and Data Transmission
  • Traffic matrices

MODULES DESCRIPTION:

Neighbor selection

In the neighbor selection phase, a peer newly joining the system registers with a centralized server named the tracker and retrieves a partial list of peers in the same swarm, which is a group of peers interested in the same file. In the mainstream implementation of trackers, peers in the list are selected randomly without any bias. Recently, however, many researchers have focused on improving locality in this phase, preferring to select neighbors closer to the requester, as in P4P. The network distance is either measured by the peers themselves or provided by ISP-operated services.
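
A sketch of such a locality-biased tracker, using Efraimidis-Spirakis weighted sampling without replacement; the inverse-distance weighting is an illustrative assumption (real trackers and P4P use their own policies):

```python
import random

def tracker_select(requester, peers, dist, k=5, locality_bias=2.0):
    """Return up to k neighbors for `requester`, biased toward nearby peers.

    dist[a][b] -- network distance between peers a and b
    locality_bias = 0 recovers unbiased random selection.
    """
    others = [p for p in peers if p != requester]
    # Efraimidis-Spirakis: draw key u**(1/w) per peer, keep the k largest.
    def key(p):
        w = 1.0 / (1.0 + dist[requester][p]) ** locality_bias
        return random.random() ** (1.0 / w)
    return sorted(others, key=key, reverse=True)[:k]
```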

Data request and Data transmission

In the data request phase, the downloading peer sends data requests to its neighbors on the list. According to the default setting in BitTorrent, a peer can concurrently upload data to at most 4 downloading peers, and will reject all received requests when its upload slots are full. Leechers prefer to respond to data requests from peers who have uploaded to them before, while free-riders reject the majority of received data requests. Connections are set up between a host and each of its neighbors who have accepted its data requests, and then the data transmission phase begins.
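
The default policy described above can be sketched as a tit-for-tat slot assignment; the 4-slot limit comes from the text, while the ranking rule is a simplification of BitTorrent's actual choking algorithm:

```python
def assign_upload_slots(requests, bytes_received_from, max_slots=4):
    """Accept at most max_slots data requests, preferring requesters
    that have uploaded to us before; the rest are rejected.

    requests            -- peer ids asking for data
    bytes_received_from -- {peer id: bytes that peer uploaded to us}
    """
    ranked = sorted(requests,
                    key=lambda p: bytes_received_from.get(p, 0),
                    reverse=True)
    accepted = ranked[:max_slots]
    rejected = ranked[max_slots:]
    return accepted, rejected
```

A free-rider would simply return everything as rejected once its (zero) slots are filled.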

Traffic matrices

We define basic P2P traffic matrices as traffic matrices reflecting traffic volumes among individual peers. The basic P2P traffic matrices are difficult to estimate, because individual peers dynamically join and leave the P2P system. To simplify the analysis, we assume that peers remain stable within a certain time interval t, and build up a probability model for basic P2P traffic matrices.

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium IV 2.4 GHz.
  • Hard Disk : 40 GB.
  • Floppy Drive : 1.44 MB.
  • Monitor : 15’’ VGA Colour.
  • Mouse :
  • RAM : 512 MB.

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows XP/7/LINUX.
  • Implementation : NS2
  • NS2 Version : 2.28
  • Front End : OTCL (Object Oriented Tool Command Language)
  • Tool : Cygwin (To simulate in Windows OS)

REFERENCE:

Ke Xu, Senior Member, IEEE, Meng Shen, Yong Cui, Member, IEEE, Mingjiang Ye, and Yifeng Zhong, “A Model Approach to the Estimation of Peer-to-Peer Traffic Matrices”, IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 25, NO. 5, MAY 2014.

Hardware implementation of Elliptic Curve Digital Signature Algorithm (ECDSA) on Koblitz Curves

ABSTRACT:

This paper presents a hardware implementation of the Elliptic Curve Digital Signature Algorithm (ECDSA) over Koblitz subfield curves with a 163-bit key length. We designed ECDSA with the purpose of improving performance and security, respectively, by using elliptic curve point multiplication on Koblitz curves to compute the public key and a key stream generator, “W7”, to generate the private key. The different blocks of ECDSA are implemented on a reconfigurable hardware platform (Xilinx xc6vlx760-2ff1760). We used the hardware description language VHDL (VHSIC Hardware Description Language) for component validation. The design requires 0.2 ms, 0.8 ms and 0.4 ms with 7%, 13% and 5% of the device's Slice LUT resources for key generation, signature generation and signature verification, respectively. The proposed ECDSA implementation is suitable for applications that need low-bandwidth communication and low-storage, low-computation environments. In particular, our implementation is suitable for smart cards and wireless devices.
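
As a conceptual illustration of the ECDSA sign/verify flow (the actual design works in hardware over the 163-bit Koblitz curve with the W7 streame cipher generator), here is a toy software ECDSA over a deliberately tiny prime-field curve; the curve parameters are arbitrary and offer no security:

```python
import hashlib
import random
from math import gcd

# Toy curve y^2 = x^3 + a*x + b over GF(p) -- illustration only, NOT secure.
p, a, b = 97, 2, 3
INF = None  # point at infinity

def add(P, Q):
    """Elliptic-curve point addition."""
    if P is INF: return Q
    if Q is INF: return P
    if P[0] == Q[0] and (P[1] + Q[1]) % p == 0:
        return INF
    if P == Q:
        lam = (3 * P[0] * P[0] + a) * pow(2 * P[1], -1, p) % p
    else:
        lam = (Q[1] - P[1]) * pow(Q[0] - P[0], -1, p) % p
    x = (lam * lam - P[0] - Q[0]) % p
    return (x, (lam * (P[0] - x) - P[1]) % p)

def mul(k, P):
    """Double-and-add scalar multiplication."""
    R = INF
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def order(P):
    k, Q = 1, P
    while Q is not INF:
        Q = add(Q, P)
        k += 1
    return k

# Enumerate curve points and pick a generator of maximal order.
points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - (x ** 3 + a * x + b)) % p == 0]
G = max(points, key=order)
n = order(G)

def H(msg):
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(d, msg):
    """ECDSA signature (r, s) under private key d."""
    while True:
        k = random.randrange(1, n)
        R = mul(k, G)
        if R is INF or R[0] % n == 0 or gcd(k, n) != 1:
            continue
        r = R[0] % n
        s = pow(k, -1, n) * (H(msg) + r * d) % n
        if s and gcd(s, n) == 1:  # keep s invertible mod n (n may be composite here)
            return r, s

def verify(Q, msg, sig):
    """Verify signature sig on msg under public key Q = d*G."""
    r, s = sig
    w = pow(s, -1, n)
    u1, u2 = H(msg) * w % n, r * w % n
    R = add(mul(u1, G), mul(u2, Q))
    return R is not INF and R[0] % n == r
```

In the hardware design, `mul` is the Koblitz-curve point multiplication block and the private key comes from the W7 generator instead of `random`.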

PROJECT OUTPUT VIDEO: (Click the link below to see the project output video):

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • Ram : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows XP/UBUNTU.
  • Implementation : NS2
  • NS2 Version : 2.28
  • Front End : OTCL (Object Oriented Tool Command  Language)
  • Tool : Cygwin (To simulate in Windows OS)

REFERENCE:

Ghanmy Nabil, Khlif Naziha, Fourati Lamia, Kamoun Lotfi, “Hardware implementation of Elliptic Curve Digital Signature Algorithm (ECDSA) on Koblitz Curves”, IEEE 2013.

ProHet: A Probabilistic Routing Protocol with Assured Delivery Rate in Wireless Heterogeneous Sensor Networks

ABSTRACT:

Due to differing application requirements, sensors with different capacities are deployed. Designing efficient, reliable and scalable routing protocols in such wireless heterogeneous sensor networks (WHSNs) with intermittent asymmetric links is a challenging task. In this paper, we propose ProHet: a distributed probabilistic routing protocol for WHSNs that utilizes asymmetric links to reach an assured delivery rate with low overhead. The ProHet protocol first produces a bidirectional routing abstraction by finding a reverse path for every asymmetric link. Then, it uses a probabilistic strategy to choose forwarding nodes based on historical statistics using local information. Analysis shows that ProHet can achieve an assured delivery rate ρ if ρ is set within its upper bound. Extensive simulations are conducted to verify its efficiency.
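
The probabilistic forwarding strategy can be sketched as follows; the Laplace-smoothed success estimate and greedy neighbor ordering are illustrative assumptions (the paper derives the exact selection rule and the upper bound on ρ):

```python
def choose_forwarders(neighbor_stats, rho):
    """Greedily pick forwarding neighbors until the estimated delivery
    probability reaches the assured rate rho (or neighbors run out).

    neighbor_stats -- {neighbor id: (delivered, attempted)} local history
    Returns (chosen neighbors, estimated delivery probability).
    """
    # Laplace-smoothed per-neighbor delivery estimate from local history.
    est = {nb: (d + 1) / (t + 2) for nb, (d, t) in neighbor_stats.items()}
    chosen, fail = [], 1.0
    for nb, prob in sorted(est.items(), key=lambda kv: -kv[1]):
        chosen.append(nb)
        fail *= 1.0 - prob  # delivery fails only if every copy fails
        if 1.0 - fail >= rho:
            break
    return chosen, 1.0 - fail
```

Because each extra forwarder multiplies the residual failure probability, a small set of reliable neighbors usually suffices to meet ρ.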

PROJECT OUTPUT VIDEO: (Click the link below to see the project output video):

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • Ram : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows XP/UBUNTU.
  • Implementation : NS2
  • NS2 Version : 2.28
  • Front End : OTCL (Object Oriented Tool Command  Language)
  • Tool : Cygwin (To simulate in Windows OS)

REFERENCE:

Xiao Chen, Zanxun Dai, Wenzhong Li, Yuefei Hu, Jie Wu, Hongchi Shi, and Sanglu Lu, “ProHet: A Probabilistic Routing Protocol with Assured Delivery Rate in Wireless Heterogeneous Sensor Networks”, IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 12, NO. 4, APRIL 2013.

A Highly Scalable Key Pre-Distribution Scheme for Wireless Sensor Networks

ABSTRACT:

Given the sensitivity of the potential WSN applications and because of resource limitations, key management emerges as a challenging issue for WSNs. One of the main concerns when designing a key management scheme is the network scalability. Indeed, the protocol should support a large number of nodes to enable a large-scale deployment of the network. In this paper, we propose a new scalable key management scheme for WSNs which provides good secure connectivity coverage. For this purpose, we make use of the unital design theory. We show that the basic mapping from unitals to key pre-distribution allows us to achieve high network scalability. Nonetheless, this naive mapping does not guarantee a high key sharing probability. Therefore, we propose an enhanced unital-based key pre-distribution scheme providing high network scalability and a good key sharing probability, approximately lower bounded by 1 − 1/e ≈ 0.632. We conduct approximate analysis and simulations and compare our solution to existing methods for different criteria such as storage overhead, network scalability, network connectivity, average secure path length and network resiliency. Our results show that the proposed approach enhances the network scalability while providing high secure connectivity coverage and overall improved performance. Moreover, for an equal network size, our solution significantly reduces the storage overhead compared to existing solutions.

PROJECT OUTPUT VIDEO: (Click the link below to see the project output video):

EXISTING SYSTEM:

Wireless sensor networks (WSNs) are increasingly used in critical applications within several fields, including the military, medical and industrial sectors. Given the sensitivity of these applications, sophisticated security services are required. Key management is a cornerstone for many security services, such as confidentiality and authentication, which are required to secure communications in WSNs. The establishment of secure links between nodes is thus a challenging problem in WSNs. Because of resource limitations, symmetric key establishment is one of the most suitable paradigms for securing exchanges in WSNs. On the other hand, because of the lack of infrastructure in WSNs, there is usually no trusted third party which can attribute pairwise secret keys to neighboring nodes, which is why most existing solutions are based on key pre-distribution.

DISADVANTAGES OF EXISTING SYSTEM:

A host of research work has dealt with the symmetric key pre-distribution issue for WSNs, and many solutions have been proposed. The existing systems suffer from several disadvantages: the design of key rings (blocks of keys) is strongly related to the network size, so these solutions either suffer from low scalability (in the number of supported nodes) or degrade other performance metrics, including secure connectivity, storage overhead and resiliency, in the case of large networks.

PROPOSED SYSTEM:

In this proposed system, our aim is to tackle the scalability issue without degrading the other network performance metrics. For this purpose, we target the design of a scheme which ensures good secure coverage of large-scale networks with a low key storage overhead and good network resiliency. To this end, we make use of the unital design theory for efficient WSN key pre-distribution.

ADVANTAGES OF PROPOSED SYSTEM:

The advantages of the proposed system are as follows:

  • We propose a naive mapping from unital design to key pre-distribution and show through analytical analysis that it allows us to achieve high scalability.
  • We propose an enhanced unital-based key pre-distribution scheme that maintains a good key sharing probability while enhancing the network scalability.
  • We analyze and compare our new approach against the main existing schemes with respect to different criteria: storage overhead, energy consumption, network scalability, secure connectivity coverage, average secure path length and network resiliency.

SYSTEM ARCHITECTURE:

BLOCK DIAGRAM:


MODULES:

  1. Node Deployment
  2. Key Generation
  3. Key Pre-distribution Technique
  4. Secure Transmission with Energy

MODULES DESCRIPTION:

Node Deployment

The first module is node deployment, where the nodes are deployed by specifying the number of nodes in the network. Each node is deployed with a unique ID (identity) number so that the nodes can be differentiated, and each node is also assigned an energy level.

Key Generation

After the node deployment module, the key generation module is developed. Here, the number of nodes and the number of blocks are specified so that the keys can be generated. The keys are symmetric, and each key is displayed in the text area of its node.

Key Pre-distribution Technique:

In this module, we generate the blocks of an order-m unital design, where each block corresponds to a key set. We then pre-load each node with t completely disjoint blocks, where t is a protocol parameter that we discuss later in this section. In Lemma 1, we demonstrate the condition for the existence of such t completely disjoint blocks among the unital blocks. In the basic approach, each node is pre-loaded with only one unital block, and we proved that each two nodes share at most one key. Contrary to this, pre-loading each node with t disjoint unital blocks means that each two nodes share between zero and t² keys, since their blocks form t × t pairs and each two unital blocks share at most one element. After the deployment step, each two neighbors exchange the identifiers of their keys in order to determine the common keys. This approach enhances the network resiliency, since attackers have to compromise more overlapping keys to break a secure link. Otherwise, when neighbors do not share any key, they should find a secure path composed of successive secure links.
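
A simplified sketch of the pre-loading and shared-key discovery steps; the block pool here is random rather than a true unital design (constructing unitals is beyond this sketch), so only the disjoint-blocks and key-overlap mechanics are illustrated:

```python
import random

def preload_rings(num_nodes, blocks, t):
    """Give each node t pairwise-disjoint key blocks; its ring is their union.

    blocks -- pool of candidate key blocks (frozensets of key ids)
    """
    rings = {}
    for nid in range(num_nodes):
        pool = blocks[:]
        random.shuffle(pool)
        chosen = []
        for blk in pool:
            # Keep a block only if it is disjoint from all blocks chosen so far.
            if all(not (blk & c) for c in chosen):
                chosen.append(blk)
                if len(chosen) == t:
                    break
        assert len(chosen) == t, "pool too small to find t disjoint blocks"
        rings[nid] = frozenset().union(*chosen)
    return rings

def shared_keys(rings, u, v):
    """Keys two neighbors discover after exchanging key identifiers."""
    return rings[u] & rings[v]
```

With blocks of size m, each ring holds exactly t·m keys, since the chosen blocks are disjoint by construction.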

Secure Transmission with Energy

In this module, the node distance is configured and the nodes are displayed along with their neighbor information. The node nearest to the sender is selected, and its energy level is first checked to verify that secure transmission is possible. The data is then uploaded and sent to the destination node, where the key is verified and the data is received.

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

  • Processor – Pentium IV
  • Speed – 1 GHz
  • RAM – 256 MB (min)
  • Hard Disk – 20 GB
  • Key Board – Standard Windows Keyboard
  • Mouse – Two or Three Button Mouse
  • Monitor – SVGA

SOFTWARE CONFIGURATION:-

  • Operating System : Windows XP
  • Programming Language : NS2
  • Tool : CYGWIN

REFERENCE:

Walid Bechkit, Yacine Challal, Abdelmadjid Bouabdallah, and Vahid Tarokh, “A Highly Scalable Key Pre-Distribution Scheme for Wireless Sensor Networks”, IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 12, NO. 2, FEBRUARY 2013.

Resource Allocation for QoS Support in Wireless Mesh Networks

ABSTRACT:

Many next-generation applications (such as video flows) are likely to have associated minimum data rate requirements in order to ensure satisfactory quality as perceived by end-users. In this paper, we develop a framework to address the problem of maximizing the aggregate utility of traffic flows in a multi-hop wireless network, with constraints imposed both by self-interference and by minimum rate requirements. The parameters that are tuned in order to maximize the utility are (i) the transmission powers of individual nodes and (ii) the channels assigned to the different communication links. Our framework is based on a cross-decomposition technique that takes both inter-flow interference and self-interference into account. The output of our framework is a schedule that dictates which links are to be activated in each slot and the parameters associated with each of those links. If the minimum rate constraint cannot be satisfied for all of the flows, the framework intelligently rejects a subset of the flows and recomputes a schedule for the remaining flows. We also design an admission control module that determines whether new flows can be admitted without violating the rate requirements of the existing flows in the network. We provide numerical results to demonstrate the efficacy of our framework.

PROJECT OUTPUT VIDEO: (Click the link below to see the project output video):

EXISTING SYSTEM:

The problem of resource allocation and congestion control in wired networks has received a lot of attention. In their seminal work, Kelly et al. modeled the problem of flow control as an optimization problem where the objective is to maximize the aggregate utility of elastic traffic sources subject to capacity constraints on the links that compose the network. Inspired by Kelly's work, there has been follow-up work in which TCP congestion control is modeled as a convex optimization problem, the objective being the maximization of an aggregate user utility; in these efforts, distributed primal-dual solutions to the problem are proposed.

DISADVANTAGES OF EXISTING SYSTEM:

In contrast with wireline networks, the capacity of a wireless link depends not on all other flows in the network but on the flows that use links on the same channel (and that are close enough), as well as on external interference. The dependencies between flows are regulated by the protocols at both the link and transport layers. However, these prior efforts do not consider the provision of quality of service in terms of supporting minimum rates for the flows that share the network. More importantly, QoS needs to be provided under conditions of self-interference, where the packets of a flow interfere with other packets that belong to the same flow along a multi-hop path.

PROPOSED SYSTEM:

In this paper, we propose a framework for maximizing the aggregate utility of traffic sources while adhering to the capacity constraints of each link and the minimum rate requirements imposed by each of the sources. The framework takes into account the self-interference of flows and assigns (a) channels (b) transmission power levels and (c) time slots to each link such that the above objective is achieved. It dictates the rates at which each traffic source will send packets such that the minimum rate requirements of all coexisting flows are met. If the minimum rate requirements of all the flows cannot be met, the framework rejects a subset of flows (based on fairness considerations) and recomputes the schedule and allocates resources to each of the remaining flows.

ADVANTAGES OF PROPOSED SYSTEM:

  • The framework maximizes the aggregate utility of flows taking into account constraints that arise due to self-interference (wireless channel imposed constraints) and minimum rate requirements of sources (QoS requirements).
  • If a solution is not feasible, the framework selectively drops a few of the sources and redistributes the resources among the others in a way that their QoS requirements are met.
  • The proposed framework readily leads to a simple and effective admission control mechanism.
  • We demonstrate the efficacy of our approach with numerical results. We also theoretically compute performance bounds with our network, as compared with an optimal strategy.

SYSTEM ARCHITECTURE:

MODULES:

  • Creating System Model
  • Channel Assignment
  • Resource allocation
  • Admission control module

MODULES DESCRIPTION:

Creating System Model

We consider a pre-planned WMN consisting of a set of stationary wireless nodes (routers) connected by a set L of unidirectional links. Some of the nodes are assumed to have the ability to perform functions of the gateway, and one of them is selected to act as the gateway to the Internet. Each node is equipped with a single network interface card (NIC) and is associated with one of C orthogonal (non-overlapping) channels for transmitting or receiving. A sender-receiver pair can communicate with each other only if both of them are tuned to the same channel. In this work dynamic channel switching is assumed to be possible with the NIC. Nodes operate in a half-duplex manner so that at any given time a node can either transmit or receive (but not both). In addition, it is assumed that the network operates in a time-slotted mode; time is divided into slots of equal duration.

Channel Assignment

The proposed algorithm allocates channels in a way that (a) self-interference is avoided and (b) co-channel interference levels among links that use the same channel are kept as low as possible. With our algorithm, links with higher costs are given higher priority in channel assignment over links with lower costs, because links with higher costs suffer from higher levels of congestion and are thus harder to schedule. The proposed channel assignment algorithm starts by sorting links in descending order of their link costs. Channels are then assigned to the links in that order. The proposed algorithm avoids self-interference by not assigning a channel to any link whose incident links have already been assigned channels. In other words, a link is eligible for activation only if it has no active neighbor links. In order to alleviate the effects of co-channel interference, the channel that is assigned to a link is selected based on the sum of link gains between all the interfering senders using the same channel and the receiver of the link. This sum is calculated for each of the channels, and the channel with the least associated value is selected for the link.
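
The steps above can be sketched directly; link costs, channel gains and the channel count are assumed inputs (the actual framework derives them from the decomposition):

```python
def assign_channels(links, cost, gain, num_channels):
    """Greedy channel assignment avoiding self-interference.

    links        -- list of (sender, receiver) node pairs
    cost[l]      -- congestion cost of link l (higher = assigned first)
    gain[(u, v)] -- channel gain from node u to node v
    Returns {link index: channel} for the links activated this slot.
    """
    assigned = {}
    active_nodes = set()
    # Higher-cost links get priority in channel assignment.
    for l in sorted(range(len(links)), key=lambda l: -cost[l]):
        s, r = links[l]
        if s in active_nodes or r in active_nodes:
            continue  # an incident link is already active: self-interference
        # Choose the channel with the least summed interference gain
        # from already-assigned co-channel senders to this receiver.
        def interference(c):
            return sum(gain.get((links[m][0], r), 0.0)
                       for m, cm in assigned.items() if cm == c)
        assigned[l] = min(range(num_channels), key=interference)
        active_nodes.update((s, r))
    return assigned
```

For instance, when the two highest-cost links share no nodes, the algorithm activates both on the channels with the least mutual interference and skips any link incident to them.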

Resource Allocation:

The main objective of this module is to allocate resources to the different connections such that the minimum rate requirement of each connection is met. The proposed approach requires the transport layer (end-to-end rate allocation) and the physical layer (channel and power schedule) to be aligned. Coordination between the two layers can be implemented on different timescales: end-to-end rate allocation (through TCP/AQM) on the fast timescale and incremental channel and power updates on the slow timescale. Most common TCP/AQM variants can be interpreted as distributed methods for solving the network flow optimization problem, which determines the end-to-end rates under fixed link capacities. Based on an initial schedule (a simple TDMA link schedule for the first L slots), we run the TCP/AQM scheme until convergence (this may require the schedule to be applied repeatedly). After rate convergence, each node reports the link prices associated with its incoming and outgoing links to the gateway, where the proposed resource allocation scheme is applied. On receiving the link prices from all nodes, the gateway finds the channels and transmit powers by applying the proposed resource allocation scheme; it then augments the schedule. The procedure is repeated with this revised schedule.
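The fast-timescale rate allocation can be illustrated with a small dual-decomposition sketch (hypothetical log-utility flows in Python, not the paper's exact TCP/AQM variant): each source sets its rate from the sum of link prices along its path, and each link raises its price when the offered load exceeds its capacity under the current schedule.

```python
def tcp_aqm_rates(routes, capacity, weights, steps=2000, gamma=0.01):
    """Dual-decomposition sketch of TCP/AQM rate allocation under a fixed
    link schedule. routes: flow -> list of links on its path;
    capacity: link -> rate achievable under the current schedule;
    weights: flow -> utility weight (log-utility model)."""
    prices = {l: 1.0 for l in capacity}
    rates = {}
    for _ in range(steps):
        # source update: log-utility flows react to the sum of path prices
        rates = {f: weights[f] / max(sum(prices[l] for l in path), 1e-9)
                 for f, path in routes.items()}
        # link (AQM) update: raise the price of congested links
        for l in capacity:
            load = sum(r for f, r in rates.items() if l in routes[f])
            prices[l] = max(prices[l] + gamma * (load - capacity[l]), 0.0)
    return rates
```

For a single flow on one link, the allocated rate converges to the link capacity, mirroring the "run TCP/AQM until convergence, then report link prices to the gateway" step described above.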

Admission control module

An admission control strategy is essential to provide protection to the sources that are currently being serviced. In other words, the QoS of existing flows in terms of a minimum rate (being currently provided) cannot be compromised in order to accommodate new incoming flows. Our resource allocation framework can be easily adapted to support admission control.

SYSTEM CONFIGURATION:-

HARDWARE REQUIREMENTS:-

  • Processor – Pentium IV
  • Speed – 1 GHz
  • RAM – 256 MB
  • Hard Disk – 20 GB
  • Keyboard – Standard Windows Keyboard
  • Mouse – Two or Three Button Mouse
  • Monitor – SVGA

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows XP
  • Simulator : NS2
  • Tool : CYGWIN

REFERENCE:

Tae-Suk Kim, Yong Yang, Jennifer C. Hou, and Srikanth V. Krishnamurthy, “Resource Allocation for QoS Support in Wireless Mesh Networks,” IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2013.

On the Node Clone Detection in Wireless Sensor Networks in NS2

On the Node Clone Detection in Wireless Sensor Networks

ABSTRACT:

Wireless sensor networks are vulnerable to the node clone attack, and several distributed protocols have been proposed to detect it. However, they require assumptions too strong to be practical for large-scale, randomly deployed sensor networks. In this paper, we propose two novel node clone detection protocols with different tradeoffs on network conditions and performance. The first one is based on a distributed hash table (DHT), by which a fully decentralized, key-based caching and checking system is constructed to catch cloned nodes effectively. The protocol's performance on efficient storage consumption and high security level is theoretically deduced through a probability model, and the resulting equations, with necessary adjustments for real application, are supported by the simulations. Although the DHT-based protocol incurs a communication cost similar to previous approaches, it may be considered a little high for some scenarios. To address this concern, our second distributed detection protocol, named randomly directed exploration, presents good communication performance for dense sensor networks through a probabilistic directed forwarding technique along with random initial direction and border determination. The simulation results uphold the protocol design and show its efficiency in communication overhead and satisfactory detection probability.


EXISTING SYSTEM:

WIRELESS sensor networks (WSNs) have gained a great deal of attention in the past decade due to their wide range of application areas and formidable design challenges. In general, wireless sensor networks consist of hundreds to thousands of low-cost, resource-constrained, distributed sensor nodes, which are usually scattered randomly across the surveillance area and work unattended. If the operating environment is hostile, security mechanisms against adversaries should be taken into consideration. Among the many physical attacks on sensor networks, the node clone is a serious and dangerous one. Because of production expense limitations, sensor nodes generally lack tamper-resistant hardware components; thus, an adversary can capture a few nodes, extract their code and all secret credentials, and use those materials to clone many nodes out of off-the-shelf sensor hardware. Those cloned nodes, which seem legitimate, can freely join the sensor network and then significantly enlarge the adversary's capacity to manipulate the network maliciously.

DISADVANTAGES OF EXISTING SYSTEM:

  • Among many physical attacks to sensor networks, the node clone is a serious and dangerous one.
  • The existing system suffers from insufficient storage consumption performance and a low security level.

PROPOSED SYSTEM:

In this paper, we present two novel, practical node clone detection protocols with different tradeoffs on network conditions and performance.

The first proposal is based on a distributed hash table (DHT), by which a fully decentralized, key-based caching and checking system is constructed to catch cloned nodes. The protocol's performance on memory consumption and a critical security metric is theoretically deduced through a probability model, and the resulting equations, with necessary adjustments for real application, are supported by the simulations. In accordance with our analysis, the comprehensive simulation results show that the DHT-based protocol can detect node clones with a high security level and holds strong resistance against an adversary's attacks.

Our second protocol, named randomly directed exploration, is intended to provide highly efficient communication performance with adequate detection probability for dense sensor networks. In the protocol, nodes initially send claiming messages containing a neighbor-list along with a maximum hop limit to randomly selected neighbors; the subsequent message transmission is then regulated by a probabilistic directed technique to approximately maintain a line property through the network while incurring sufficient randomness for better performance on communication and resilience against an adversary. In addition, a border determination mechanism is employed to further reduce the communication payload. During forwarding, intermediate nodes explore claiming messages for node clone detection. By design, this protocol consumes almost minimal memory, and the simulations show that it outperforms all other detection protocols in terms of communication cost, while the detection probability is satisfactory.

ADVANTAGES OF PROPOSED SYSTEM:

  • The DHT-based protocol can detect node clones with a high security level and holds strong resistance against an adversary's attacks.
  • Randomly directed exploration provides highly efficient communication performance with adequate detection probability for dense sensor networks.

SYSTEM ARCHITECTURE:

BLOCK DIAGRAM:

Techniques and Protocols Used:

  1. Distributed hash table (DHT)
  2. Randomly directed exploration

Distributed hash table (DHT):

The DHT-based protocol constructs a fully decentralized, key-based caching and checking system to catch cloned nodes. The protocol's performance on memory consumption and a critical security metric is theoretically deduced through a probability model, and the resulting equations, with necessary adjustments for real application, are supported by the simulations. In accordance with our analysis, the comprehensive simulation results show that the DHT-based protocol can detect node clones with a high security level and holds strong resistance against an adversary's attacks.

Randomly directed exploration:

This protocol is intended to provide highly efficient communication performance with adequate detection probability for dense sensor networks. Nodes initially send claiming messages containing a neighbor-list along with a maximum hop limit to randomly selected neighbors; the subsequent message transmission is then regulated by a probabilistic directed technique to approximately maintain a line property through the network while incurring sufficient randomness for better performance on communication and resilience against an adversary. In addition, a border determination mechanism is employed to further reduce the communication payload. During forwarding, intermediate nodes explore claiming messages for node clone detection. By design, this protocol consumes almost minimal memory, and the simulations show that it outperforms all other detection protocols in terms of communication cost, while the detection probability is satisfactory.
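The forwarding rule can be illustrated on a simple grid topology (a Python sketch; the cosine-based weighting and the four-neighbor grid are our own simplifications, not the paper's exact rule):

```python
import math
import random

def directed_walk(nodes, start, ttl, seed=1):
    """Randomly directed exploration sketch: a claiming message picks a
    random initial direction, then at each hop probabilistically favors the
    unvisited neighbor best aligned with that direction, stopping at the
    network border (no forward neighbor) or when the hop limit expires."""
    rng = random.Random(seed)
    node_set = set(nodes)
    theta = rng.uniform(0, 2 * math.pi)          # random initial direction
    path = [start]
    cur = start
    for _ in range(ttl):
        x, y = cur
        nbrs = [(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                if (x + dx, y + dy) in node_set and (x + dx, y + dy) not in path]
        if not nbrs:                             # border (or dead end): stop
            break
        # weight each neighbor by how well it continues the line
        def align(n):
            ang = math.atan2(n[1] - y, n[0] - x)
            return math.cos(ang - theta) + 1.01  # strictly positive weight
        cur = rng.choices(nbrs, weights=[align(n) for n in nbrs])[0]
        path.append(cur)
    return path
```

Each intermediate node on the returned path would inspect the claiming message for conflicting identity claims, as described above.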

MODULES:

  • Setting up Network Model
  • Initialization Process
  • Claiming Neighbor’s information
  • Processing Claiming Message
  • Sink Module
  • Performance Analysis

MODULES DESCRIPTION:

Setting up Network Model

Our first module sets up the network model. We consider a large-scale, homogeneous sensor network consisting of resource-constrained sensor nodes. Analogous to previous distributed detection approaches, we assume that an identity-based public-key cryptography facility is available in the sensor network. Prior to deployment, each legitimate node is allocated a unique ID and a corresponding private key by a trusted third party. The public key of a node is its ID, which is the essence of an identity-based cryptosystem. Consequently, no node can lie to others about its identity. Moreover, anyone is able to verify messages signed by a node using its identity-based key. The source nodes in our problem formulation serve as storage points that cache the data gathered by other nodes and periodically transmit it to the sink in response to user queries. Such a network architecture is consistent with the design of storage-centric sensor networks.

Initialization Process:

To activate all nodes to start a new round of node clone detection, the initiator uses a broadcast authentication scheme to release an action message including a monotonically increasing nonce, a random round seed, and an action time. The nonce is intended to prevent adversaries from launching a DoS attack by repeatedly broadcasting action messages.
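The freshness check can be sketched as follows (an illustrative Python sketch in which an HMAC stands in for the paper's broadcast authentication scheme; the message layout and field names are our own assumptions):

```python
import hashlib
import hmac

def verify_action_message(msg, last_nonce, key):
    """Accept an action message only if its nonce is strictly greater than
    the last accepted nonce AND its authentication tag is valid; a stale
    nonce indicates a replayed message (DoS by re-broadcast).
    msg: (nonce, round_seed, action_time, tag)."""
    nonce, seed, action_time, tag = msg
    body = f"{nonce}|{seed}|{action_time}".encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    if nonce <= last_nonce:            # replayed or stale message
        return False, last_nonce
    if not hmac.compare_digest(tag, expected):
        return False, last_nonce       # forged tag
    return True, nonce                 # accept and update the stored nonce
```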

Claiming neighbor’s information:

Upon receiving an action message, a node verifies whether the message nonce is greater than the last nonce and whether the message signature is valid. If both checks pass, the node updates the nonce and stores the seed. At the designated action time, the node operates as an observer that generates a claiming message for each neighbor (examinee) and transmits the message through the overlay network with respect to the claiming probability. Nodes could start transmitting claiming messages at the same time, but the resulting heavy traffic may cause serious interference and degrade the network capacity. To relieve this problem, we may specify a sending period, during which nodes randomly pick a transmission time for every claiming message.

Processing claiming messages:

A claiming message will be forwarded to its destination node via several Chord intermediate nodes. Only those nodes in the overlay network layer (i.e., the source node, Chord intermediate nodes, and the destination node) need to process a message, whereas other nodes along the path simply route the message to temporary targets. Algorithm 1 for handling a message is the kernel of our DHT-based detection protocol. If the algorithm returns NIL, then the message has arrived at its destination. Otherwise, the message will be subsequently forwarded to the next node with the ID that is returned.
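The handling logic can be sketched as follows (a simplified Python sketch with plain successor routing in place of the paper's full Chord finger-table Algorithm 1; the ring size and key mapping are illustrative):

```python
import hashlib

def key_of(examinee_id, seed):
    """Map an examinee ID (plus the round seed) to a point on the ring."""
    digest = hashlib.sha256(f"{seed}:{examinee_id}".encode()).hexdigest()
    return int(digest, 16) % 2 ** 16

def handle_claim(node_id, ring, claim, cache, seed):
    """Forward a claiming message toward the successor of its key; at the
    destination, a cached claim for the same examinee ID at a different
    location reveals a clone. Returns the next-hop ID, a detection marker,
    or None (NIL) when the message has arrived without conflict."""
    examinee, location = claim
    key = key_of(examinee, seed)
    successor = min((n for n in ring if n >= key), default=min(ring))
    if node_id != successor:
        return successor               # next hop on the overlay
    prev = cache.get(examinee)
    if prev is not None and prev != location:
        return "CLONE-DETECTED"        # same ID claimed at two locations
    cache[examinee] = location         # cache the claim as a witness record
    return None                        # NIL: destination reached, no conflict
```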

Sink Module:

The sink is the point of contact for users of the sensor network. Each time the sink receives a question from a user, it first translates the question into multiple queries and then disseminates the queries to the corresponding mobile relays, which process the queries based on their data and return the query results to the sink. The sink unifies the query results from multiple storage nodes into the final answer and sends it back to the user.

Performance Analysis

For the DHT-based detection protocol, we use the following specific measurements to evaluate its performance:

  • Average number of transmitted messages, representing the protocol’s communication cost;
  • Average size of node cache tables, standing for the protocol’s storage consumption;
  • Average number of witnesses, serving as the protocol’s security level because the detection protocol is deterministic and symmetric.

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

  • Processor – Pentium IV
  • Speed – 1 GHz
  • RAM – 256 MB (min)
  • Hard Disk – 20 GB
  • Keyboard – Standard Windows Keyboard
  • Mouse – Two or Three Button Mouse
  • Monitor – SVGA

 

SOFTWARE CONFIGURATION:-

  • Operating System : Windows XP
  • Simulator : NS2
  • Tool : CYGWIN

REFERENCE:

Zhijun Li and Guang Gong, “On the Node Clone Detection in Wireless Sensor Networks,” IEEE/ACM TRANSACTIONS ON NETWORKING, 2013.

Detection and Localization of Multiple Spoofing Attackers in Wireless Networks

Detection and Localization of Multiple Spoofing Attackers in Wireless Networks

ABSTRACT:

Wireless spoofing attacks are easy to launch and can significantly impact the performance of networks. Although the identity of a node can be verified through cryptographic authentication, conventional security approaches are not always desirable because of their overhead requirements. In this paper, we propose to use spatial information, a physical property associated with each node that is hard to falsify and not reliant on cryptography, as the basis for 1) detecting spoofing attacks; 2) determining the number of attackers when multiple adversaries masquerade as the same node identity; and 3) localizing multiple adversaries. We propose to use the spatial correlation of received signal strength (RSS) readings from wireless nodes to detect spoofing attacks. We then formulate the problem of determining the number of attackers as a multiclass detection problem. Cluster-based mechanisms are developed to determine the number of attackers. When training data are available, we explore using the Support Vector Machines (SVM) method to further improve the accuracy of determining the number of attackers. In addition, we developed an integrated detection and localization system that can localize the positions of multiple attackers. We evaluated our techniques through two test beds using both an 802.11 (WiFi) network and an 802.15.4 (ZigBee) network in two real office buildings. Our experimental results show that our proposed methods can achieve over 90 percent Hit Rate and Precision when determining the number of attackers. Our localization results using a representative set of algorithms provide strong evidence of high accuracy in localizing multiple adversaries.


EXISTING SYSTEM:

In spite of existing 802.11 security techniques, including Wired Equivalent Privacy (WEP), WiFi Protected Access (WPA), and 802.11i (WPA2), such methods can only protect data frames; an attacker can still spoof management or control frames to cause significant impact on networks. Spoofing attacks can further facilitate a variety of traffic injection attacks, such as attacks on access control lists, rogue access point (AP) attacks, and eventually Denial-of-Service (DoS) attacks. A broad survey of possible spoofing attacks can be found in the literature. Moreover, in a large-scale network, multiple adversaries may masquerade as the same identity and collaborate to quickly launch malicious attacks such as network resource utilization and denial-of-service attacks. Therefore, it is important to 1) detect the presence of spoofing attacks, 2) determine the number of attackers, and 3) localize multiple adversaries and eliminate them. Most existing approaches to address potential spoofing attacks employ cryptographic schemes. However, the application of cryptographic schemes requires reliable key distribution, management, and maintenance mechanisms. It is not always desirable to apply these cryptographic methods because of their infrastructural, computational, and management overhead. Further, cryptographic methods are susceptible to node compromise, which is a serious concern as most wireless nodes are easily accessible, allowing their memory to be easily scanned.

DISADVANTAGES OF EXISTING SYSTEM:

  • Among various types of attacks, identity-based spoofing attacks are especially easy to launch and can cause significant damage to network performance.
  • For instance, in an 802.11 network, it is easy for an attacker to gather useful MAC address information during passive monitoring and then modify its MAC address by simply issuing an ifconfig command to masquerade as another device.
  • Cryptographic defenses require reliable key distribution, management, and maintenance mechanisms, imposing infrastructural, computational, and management overhead.
  • Cryptographic methods are susceptible to node compromise, since most wireless nodes are easily accessible and their memory can be scanned.

PROPOSED SYSTEM:

In this work, we propose to use received signal strength (RSS)-based spatial correlation, a physical property associated with each wireless node that is hard to falsify and not reliant on cryptography, as the basis for detecting spoofing attacks. Since we are concerned with attackers whose locations differ from those of legitimate wireless nodes, utilizing spatial information to address spoofing attacks has the unique power not only to identify the presence of these attacks but also to localize adversaries. An added advantage of employing spatial correlation to detect spoofing attacks is that it does not require any additional cost or modification to the wireless devices themselves. We focus on static nodes in this work, which are common in spoofing scenarios; we address spoofing detection in mobile environments in our other work. Faria and Cheriton proposed the use of matching rules of signal prints for spoofing detection; Sheng et al. modeled the RSS readings using a Gaussian mixture model; and Chen et al. used RSS and K-means cluster analysis to detect spoofing attacks. However, none of these approaches can determine the number of attackers when multiple adversaries use the same identity to launch attacks, which is the basis for further localizing multiple adversaries after attack detection. Although Chen et al. studied how to localize adversaries, their approach can only handle the case of a single spoofing attacker and cannot localize the attacker if the adversary uses different transmission power levels.

  • When training data are available, we explore using the Support Vector Machines (SVM) method to further improve the accuracy of determining the number of attackers.
  • Localization results using a representative set of algorithms provide strong evidence of high accuracy in localizing multiple adversaries.
  • The proposed system exploits the spatial correlation of received signal strength (RSS) readings through a cluster-based strategy.
  • RSS is a physical property associated with each wireless device that is hard to falsify and not reliant on cryptography, and it serves as the basis for detecting spoofing attacks in wireless networks.

ADVANTAGES OF PROPOSED SYSTEM:

  • GADE: a generalized attack detection model that can both detect spoofing attacks and determine the number of adversaries, using cluster analysis methods grounded on RSS-based spatial correlations among normal devices and adversaries.
  • IDOL: an integrated detection and localization system that can both detect attacks and find the positions of multiple adversaries, even when the adversaries vary their transmission power levels.

 MODULES:

  • Network configuration
  • Generalized attack detection model
  • Integrated detection and localization framework
  • Performance evaluation

MODULES DESCRIPTION:

Network Configuration

The nodes are created and placed in the simulation environment and can move from one location to another; the setdest command is used to give movement to a node. The random waypoint mobility model is used in our simulation. The nodes use omni-directional antennas to send and receive data. Signals are propagated from one location to another using the Two-Ray Ground propagation model. A priority queue is maintained between any two nodes as the interface queue.

 Generalized Attack Detection Model

          The Generalized Attack Detection Model consists of two phases: attack detection, which detects the presence of an attack, and number determination, which determines the number of adversaries.

          The challenge in spoofing detection is to devise strategies that use the uniqueness of spatial information without using location directly, since the attackers' positions are unknown. The RSS property is closely correlated with location in physical space and is readily available in existing wireless networks. Although affected by random noise, environmental bias, and multipath effects, the RSS measured at a set of landmarks is closely related to the transmitter's physical location and is governed by the distance to the landmarks. The RSS readings at the same physical location are similar, whereas the RSS readings at different locations in physical space are distinctive. Thus, the RSS readings exhibit strong spatial correlation characteristics.
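A minimal sketch of this idea in Python (an illustrative 2-means over RSS vectors, not the paper's exact cluster analysis or test statistic; the 5 dB threshold is an assumption):

```python
import math

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mean(pts):
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def kmeans2(points, iters=20):
    """Minimal 2-means over RSS vectors (one vector per received frame,
    one entry per landmark); seeded with the two most distant readings."""
    c1 = points[0]
    c2 = max(points, key=lambda p: dist(p, c1))
    for _ in range(iters):
        g1 = [p for p in points if dist(p, c1) <= dist(p, c2)]
        g2 = [p for p in points if dist(p, c1) > dist(p, c2)]
        if g1:
            c1 = mean(g1)
        if g2:
            c2 = mean(g2)
    return c1, c2

def spoofing_detected(rss_stream, threshold=5.0):
    """If the two cluster centers of one identity's RSS readings are far
    apart, the frames likely came from two distinct physical locations,
    suggesting a spoofing attack."""
    c1, c2 = kmeans2(rss_stream)
    return dist(c1, c2) > threshold
```

A single transmitter yields one tight cluster, while an attacker at a different location splits the stream into two well-separated clusters.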

 Integrated Detection and Localization Framework

          In this module, we present an integrated system that detects spoofing attacks, determines the number of attackers, and localizes multiple adversaries.

          Traditional localization approaches use the averaged RSS of each node identity as input to estimate the position of a node. However, in wireless spoofing attacks, the RSS stream of a node identity may be mixed with RSS readings of both the original node and spoofing nodes at different physical locations. The traditional method of averaging RSS readings cannot differentiate readings from different locations and is thus not feasible for localizing adversaries.

          Different from traditional localization approaches, our integrated detection and localization system utilizes the RSS medoids returned from SILENCE as inputs to localization algorithms to estimate the positions of adversaries. The returned positions from our system include the location estimates of the original node and the attackers in physical space. The system also handles adversaries that use different transmission power levels: an adversary may vary its transmission power when performing spoofing attacks so that the localization system cannot estimate its location accurately.

Performance Evaluation

The performance of the proposed scheme is evaluated by plotting graphs. The parameters used to evaluate the performance are as follows:

  • False positive Rate
  • Spoofing Detection rate
  • Throughput

 These parameter values are recorded in the trace file during the simulation using the record procedure. The trace file is then processed with Xgraph to produce the output graphs.

SYSTEM CONFIGURATION:-

HARDWARE REQUIREMENTS:-

  • Processor – Pentium III
  • Speed – 1 GHz
  • RAM – 256 MB (min)
  • Hard Disk – 20 GB
  • Floppy Drive – 1.44 MB
  • Keyboard – Standard Windows Keyboard
  • Mouse – Two or Three Button Mouse
  • Monitor – SVGA

 

SOFTWARE REQUIREMENTS:-

  • Operating System : Windows XP
  • Simulator : NS2
  • Tool : CYGWIN

REFERENCE:

Jie Yang, Yingying (Jennifer) Chen, Wade Trappe, and Jerry Cheng, “Detection and Localization of Multiple Spoofing Attackers in Wireless Networks,” IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 24, NO. 1, JANUARY 2013.

Vampire Attacks: Draining Life from Wireless Ad Hoc Sensor Networks

Vampire Attacks: Draining Life from Wireless Ad Hoc Sensor Networks

ABSTRACT:

Ad hoc low-power wireless networks are an exciting research direction in sensing and pervasive computing. Prior security work in this area has focused primarily on denial of communication at the routing or medium access control levels. This paper explores resource depletion attacks at the routing protocol layer, which permanently disable networks by quickly draining nodes' battery power. These "Vampire" attacks are not specific to any particular protocol, but rather rely on the properties of many popular classes of routing protocols. We find that all examined protocols are susceptible to Vampire attacks, which are devastating, difficult to detect, and easy to carry out using as few as one malicious insider sending only protocol-compliant messages. In the worst case, a single Vampire can increase network-wide energy usage by a factor of O(N), where N is the number of network nodes. We discuss methods to mitigate these types of attacks, including a new proof-of-concept protocol that provably bounds the damage caused by Vampires during the packet forwarding phase.


EXISTING SYSTEM:

Existing work on secure routing attempts to ensure that adversaries cannot cause path discovery to return an invalid network path, but Vampires do not disrupt or alter discovered paths, instead using existing valid network paths and protocol compliant messages. Protocols that maximize power efficiency are also inappropriate, since they rely on cooperative node behavior and cannot optimize out malicious action.

DISADVANTAGES OF EXISTING SYSTEM:

  • Power outages at affected nodes
  • Loss of information due to environmental disasters
  • Lost productivity
  • Vulnerability to various DoS attacks
  • Low security level
  • Existing defenses do not address attacks that affect long-term availability.

PROPOSED SYSTEM:

This paper makes three primary contributions. First, we thoroughly evaluate the vulnerabilities of existing protocols to routing layer battery depletion attacks. We observe that security measures to prevent Vampire attacks are orthogonal to those used to protect routing infrastructure, and so existing secure routing protocols such as Ariadne, SAODV and SEAD do not protect against Vampire attacks. Existing work on secure routing attempts to ensure that adversaries cannot cause path discovery to return an invalid network path, but Vampires do not disrupt or alter discovered paths, instead using existing valid network paths and protocol-compliant messages. Protocols that maximize power efficiency are also inappropriate, since they rely on cooperative node behavior and cannot optimize out malicious action. Second, we show simulation results quantifying the performance of several representative protocols in the presence of a single Vampire (insider adversary). Third, we modify an existing sensor network routing protocol to provably bound the damage from Vampire attacks during packet forwarding.

 In the proposed system, we show simulation results quantifying the performance of several representative protocols in the presence of a single Vampire. We then modify an existing sensor network routing protocol to provably bound the damage from Vampire attacks during packet forwarding.

ADVANTAGES OF PROPOSED SYSTEM:

  • Protects the network from Vampire attacks
  • High security level
  • Conserves node battery power

SYSTEM ARCHITECTURE:

PROBLEMS IDENTIFIED AND CONFIRMED

                  If a Vampire attack exists in the network, it affects one node and drains its full energy, driving that node to a dead state; the attack then concentrates on the next node, and so on, until every node in the network is dead.

                  The Vampire attack thus permanently disables or destroys the network.

OBJECTIVE AND SCOPE OF THE PROJECT

                  Our proposed project concentrates on securing the network from this malicious attack. Our implementation results in the efficient detection and elimination of the Vampire attack from the network. To detect and eliminate the Vampire attack, we implement an intrusion detection system based on energy-level constraints.

                    Our simulation results show an improved network authentication rate and efficient detection of malicious nodes, so that our proposed system forms a secure network with a high throughput rate.

ASSUMPTIONS, CONSTRAINTS AND LIMITATIONS

           To show the performance metrics, we place 30 to 50 sensor nodes in the network; let the number of sensor nodes be N.

           Routing is then performed between the sensor nodes; let the data packets be 512 bytes and the initial energy level of each node be 10 joules.

           A wireless channel type is used for data routing among the N nodes.

           Routing is done through ad hoc routing protocols such as AODV, DSR, or DSDV.

           Performance metrics such as throughput, packet delivery ratio, and delay are used to evaluate the network.

           MAC 802.11 and omni-directional antennas are used for data communication and to cover the transmission range.

 PROPOSED METHOD

                                      The proposed system concentrates on secure data transmission in the presence of adversary nodes in the sensor network. To build a secure network, the network should be free of adversary nodes. We therefore propose node position verification together with an energy-based intrusion detection system (IDS). A node whose energy level exceeds the threshold observed for normal nodes is assumed to be a malicious node carrying out a Vampire attack. The proposed IDS calculates the threshold value and the energy level of malicious nodes, and the node position verification technique detects malicious nodes efficiently; detected nodes are eliminated from the network, which increases network performance and throughput.

MODULES:

  • Network Configuration Setting
  • Data Routing
  • Vampire Attack
  • Backtracking Technique
  • Intrusion Detection System
  • Malicious Node Elimination
  • Graph Evaluation

MODULES DESCRIPTION:

NETWORK CONFIGURATION SETTING

           The mobile nodes are designed and configured dynamically and deployed across the network. The nodes are placed according to X, Y, Z dimensions such that each node has a direct transmission range to all other nodes.

DATA ROUTING

              The source and destination are set at a large distance apart; the source transmits data packets to the destination through intermediate hop nodes using UDP (User Datagram Protocol), with a protocol such as PLGP acting as the ad hoc routing protocol.

VAMPIRE ATTACK

             The malicious node enters the network and affects one of the intermediate nodes by sending false packets, draining that node's energy until its level drops to 0 joules. Data transmission is thereby affected, and the path between source and destination fails; as a result, the source retransmits the data to the destination along another path. If the Vampire attack continues, it will disable the whole network.
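This behavior can be mimicked with a toy Python simulation (the 10 J initial energy follows the assumptions above; the per-packet cost, round count, and node count are illustrative assumptions):

```python
def vampire_drain(n_nodes, init_energy=10.0, tx_cost=0.5, rounds=100):
    """Toy simulation of the module above: each round, the malicious node
    floods one victim with false packets; once the victim's energy reaches
    0 J it is dead, and the attack concentrates on the next node."""
    energy = [init_energy] * n_nodes
    dead_order = []
    target = 0
    for _ in range(rounds):
        if target >= n_nodes:
            break                      # every node is dead: network disabled
        energy[target] -= tx_cost      # victim wastes energy on false packets
        if energy[target] <= 0:
            dead_order.append(target)
            target += 1                # attack moves to the next node
    return energy, dead_order
```

With three nodes and these parameters, the whole network is dead within 60 rounds, matching the progression described above.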

BACKTRACKING TECHNIQUE

               The backtracking technique is used to identify legitimate nodes on a particular path; a node accepts data only after the backtracking technique has been executed. When the source transmits data to its next neighbour node, that node verifies the source's identity through the backtracking process. Through this technique, data is transmitted securely even in the presence of vampire nodes.
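
One possible reading of this verification step can be sketched as follows (an assumed design for illustration, not the project's exact protocol; the `forwarded_log` record and packet ids are hypothetical):

```python
# Illustrative backtracking check: before accepting a packet, a node walks
# the recorded route backwards and asks every earlier hop to confirm that
# it actually relayed the packet.

class Node:
    def __init__(self, name):
        self.name = name
        self.forwarded_log = set()   # packet ids this node has relayed

    def relay(self, packet_id):
        self.forwarded_log.add(packet_id)

    def confirms(self, packet_id):
        return packet_id in self.forwarded_log

def backtrack_verify(path, packet_id):
    """Accept the packet only if every upstream hop confirms relaying it."""
    return all(node.confirms(packet_id) for node in path)

a, b, c = Node("source"), Node("hop1"), Node("hop2")
for n in (a, b):                  # the packet really travelled source -> hop1
    n.relay("pkt-7")
ok = backtrack_verify([a, b], "pkt-7")    # every hop on the path confirms
bad = backtrack_verify([a, c], "pkt-7")   # hop2 never saw the packet
```
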

INTRUSION DETECTION SYSTEM

                  The energy-constraint IDS is used to detect malicious nodes in the network. For this purpose, the energy level of every node is computed after each data-iteration round. Most nodes have an average energy level within a certain range, whereas, due to the nature of vampire nodes, a malicious node has an abnormal energy level, around three times the average. By this technique, the malicious nodes can be identified easily.
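
The detection rule above can be sketched in Python (an illustrative model of the described threshold test; the 3x factor comes from the description, while the per-node consumption values are assumed samples):

```python
# Energy-based IDS sketch: after each data round, compute the average
# per-node energy consumption and flag any node whose consumption exceeds
# three times that average.

def detect_malicious(consumption, factor=3.0):
    """Return node ids whose consumption exceeds factor * average."""
    if not consumption:
        return []
    avg = sum(consumption.values()) / len(consumption)
    return [node for node, used in consumption.items() if used > factor * avg]

# Normal nodes spend about 1 J per round; the vampire-affected node spends
# far more (average here is 2.6 J, so the threshold is 7.8 J).
rounds = {"n1": 1.0, "n2": 1.1, "n3": 0.9, "n4": 1.0, "n5": 9.0}
flagged = detect_malicious(rounds)   # -> ['n5']
```
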

MALICIOUS NODE ELIMINATION 

                 After the IDS process, the malicious nodes are detected. The trusted authority (TA) informs all nodes in the network and eliminates the malicious nodes from the network. By eliminating the malicious nodes, we can form a secure network.

GRAPH EVALUATION

               The performance of the existing and proposed approaches is examined and compared through graphical analysis.

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

  • Processor – Pentium IV
  • Speed – 1 GHz
  • RAM – 256 MB (min)
  • Hard Disk – 20 GB
  • Keyboard – Standard Windows Keyboard
  • Mouse – Two or Three Button Mouse
  • Monitor – SVGA

SOFTWARE CONFIGURATION:-

  • Operating System : Windows XP/LINUX
  • Simulator : NS2
  • Tool : Cygwin

REFERENCE:

Eugene Y. Vasserman and Nicholas Hopper, "Vampire Attacks: Draining Life from Wireless Ad Hoc Sensor Networks," IEEE Transactions on Mobile Computing, vol. 12, no. 2, February 2013.

A Distributed Control Law for Load Balancing in Content Delivery Networks

ABSTRACT:

In this paper, we face the challenging issue of defining and implementing an effective law for load balancing in Content Delivery Networks (CDNs). We base our proposal on a formal study of a CDN system, carried out through the exploitation of a fluid flow model characterization of the network of servers. Starting from such characterization, we derive and prove a lemma about the network queues equilibrium. This result is then leveraged in order to devise a novel distributed and time-continuous algorithm for load balancing, which is also reformulated in a time-discrete version. The discrete formulation of the proposed balancing law is eventually discussed in terms of its actual implementation in a real-world scenario. Finally, the overall approach is validated by means of simulations.

PROJECT OUTPUT VIDEO: (Click the below link to see the project output video):

EXISTING SYSTEM:

In a queue-adjustment strategy, the scheduler is located after the queue and just before the server. The scheduler might assign the request pulled out from the queue to either the local server or a remote server depending on the status of the system queues.

In a rate-adjustment model, instead the scheduler is located just before the local queue: Upon arrival of a new request, the scheduler decides whether to assign it to the local queue or send it to a remote server.

In a hybrid-adjustment strategy for load balancing, the scheduler is allowed to control both the incoming request rate at a node and the local queue length.
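
The difference between the first two scheduler placements can be sketched in Python (a minimal illustration with an assumed shortest-queue threshold rule; it is not the decision rule of any cited algorithm):

```python
# Sketch of the two scheduler placements: rate adjustment decides BEFORE
# the local queue, queue adjustment decides AFTER it.

from collections import deque

def rate_adjust(local_queue, remote_queues, request):
    """Rate adjustment: on arrival, keep the new request locally or
    ship it straight to the shortest remote queue."""
    shortest = min(remote_queues, key=len)
    if len(local_queue) <= len(shortest):
        local_queue.append(request)
    else:
        shortest.append(request)

def queue_adjust(local_queue, remote_queues):
    """Queue adjustment: pull the next queued request and decide whether
    to serve it locally or hand it to a clearly less loaded remote."""
    request = local_queue.popleft()
    shortest = min(remote_queues, key=len)
    if len(local_queue) > len(shortest) + 1:
        shortest.append(request)      # remote server is much shorter
        return None                   # nothing served locally this step
    return request                    # serve locally

local = deque(["r1", "r2", "r3"])
remotes = [deque(), deque(["x"])]
rate_adjust(local, remotes, "r4")     # new request goes to the empty remote
served = queue_adjust(local, remotes) # "r1" is served locally
```
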

Thus, in existing systems, upon arrival of a new request a CDN server can either process the request locally or redirect it to other servers according to a certain decision rule, which is based on the state information exchanged by the servers. Such an approach limits the state-exchange overhead to local servers only.

DISADVANTAGES OF EXISTING SYSTEM:

A critical component of the CDN architecture is the request-routing mechanism, which directs users' requests for content to the appropriate server based on a specified set of parameters. The proximity principle, by which a request is always served by the server closest to the client, can sometimes fail. Indeed, the routing process associated with a request might take into account several parameters (such as traffic load, bandwidth, and servers' computational capabilities) in order to provide the best performance in terms of time of service, delay, etc. Furthermore, an effective request-routing mechanism should be able to cope with temporary, potentially localized, high request rates (the so-called flash crowds) in order to avoid degrading the quality of service perceived by other users.

PROPOSED SYSTEM:

In a similar way, in this paper we first design a suitable load-balancing law that assures equilibrium of the queues in a balanced CDN by using a fluid flow model for the network of servers. Then, we discuss the most notable implementation issues associated with the proposed load-balancing strategy.

We present a new mechanism for redirecting incoming client requests to the most appropriate server, thus balancing the overall system requests load. Our mechanism leverages local balancing in order to achieve global balancing. This is carried out through a periodic interaction among the system nodes.

ADVANTAGES OF PROPOSED SYSTEM:

The quality of our solution can be further appreciated by analyzing the performance parameters.

The proposed mechanism also exhibits an excellent average Response Time, which is only comparable to the value obtained by the 2RC algorithm.

The excellent performance of our mechanism might be paid for in terms of a significant number of redirections. Since the redirection process is common to all the algorithms analyzed, we evaluate only the percentage of requests redirected more than once over the total number of requests generated.

ALGORITHM USED:

Distributed Load-Balancing Algorithm

SYSTEM ARCHITECTURE:

MODULES:

  • Client Request
  • Server
  • Creating Load
  • Fluid Queue Model
  • Load balance

MODULES DESCRIPTION:

Client Request

In this module, we design the system such that the client makes a request to the server.

Server

In this module, we design the server system, where the server processes the client's request.

Creating Load

In this module, we create load on the server.

Fluid Queue Model

In this paper, we first design a suitable load-balancing law that assures equilibrium of the queues in a balanced CDN by using a fluid flow model for the network of servers. In a queue-adjustment strategy, the scheduler is located after the queue and just before the server. The scheduler might assign the request pulled out from the queue to either the local server or a remote server, depending on the status of the system queues: if the network is unbalanced with respect to the local server, the scheduler might assign part of the queued requests to the most unloaded remote server. In this way, the algorithm tries to balance the requests equally across the system queues. It is clear that, in order to achieve effective load balancing, the scheduler needs to periodically retrieve information about remote queue lengths.
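
A discrete-time flavour of this kind of neighbour-to-neighbour balancing can be sketched as follows (an illustrative model only: the gain `k`, the chain topology, and the integer queue lengths are assumptions, whereas the paper's actual control law is derived formally from the fluid flow model):

```python
# Sketch of periodic local balancing: at each period, every server compares
# its queue length with its neighbours' and shifts a fraction of the
# difference to the shorter queue. No requests are created or lost, so the
# total queued work is conserved while the lengths converge.

def balance_step(queues, neighbours, k=0.25):
    """One synchronous balancing period over integer queue lengths."""
    moves = [0] * len(queues)
    for i, js in neighbours.items():
        for j in js:
            diff = queues[i] - queues[j]
            if diff > 0:
                moves[i] -= int(k * diff)   # i sheds work...
                moves[j] += int(k * diff)   # ...to its shorter neighbour j
    return [q + m for q, m in zip(queues, moves)]

# A 3-server chain with a heavily loaded middle node.
q = [2, 40, 6]
nbrs = {0: [1], 1: [0, 2], 2: [1]}
for _ in range(10):
    q = balance_step(q, nbrs)
# After a few periods the queue lengths converge toward each other while
# the total number of queued requests stays constant.
```
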

Load balance

We present a new mechanism for redirecting incoming client requests to the most appropriate server, thus balancing the overall system requests load. Our mechanism leverages local balancing in order to achieve global balancing. This is carried out through a periodic interaction among the system nodes.

SYSTEM CONFIGURATION:-

HARDWARE REQUIREMENTS:-

  • Processor – Pentium IV
  • Speed – 1 GHz
  • RAM – 512 MB (min)
  • Hard Disk – 40 GB
  • Keyboard – Standard Windows Keyboard
  • Mouse – Two or Three Button Mouse
  • Monitor – LCD/LED

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows XP
  • Simulator : NS2
  • Tool : Cygwin

REFERENCE:

Sabato Manfredi, Francesco Oliviero, and Simon Pietro Romano, "A Distributed Control Law for Load Balancing in Content Delivery Networks," IEEE/ACM Transactions on Networking, vol. 21, no. 1, February 2013.