Hardware implementation of Elliptic Curve Digital Signature Algorithm (ECDSA) on Koblitz Curves

ABSTRACT:

This paper presents a hardware implementation of the Elliptic Curve Digital Signature Algorithm (ECDSA) over Koblitz subfield curves with a 163-bit key length. We designed the ECDSA to improve both performance and security by using elliptic curve point multiplication on Koblitz curves to compute the public key and a key stream generator, “W7”, to generate the private key. The different blocks of the ECDSA are implemented on a reconfigurable hardware platform (Xilinx xc6vlx760-2ff1760). We used the hardware description language VHDL (VHSIC Hardware Description Language) for behavioral validation of the design. The design requires 0.2 ms, 0.8 ms and 0.4 ms, with 7 %, 13 % and 5 % of the device's slice LUT resources, for key generation, signature generation and signature verification respectively. The proposed ECDSA implementation is suited to applications that need low-bandwidth communication and low-storage, low-computation environments; in particular, it is well suited to smart cards and wireless devices.
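
The sketch below is a minimal, illustrative ECDSA flow (key generation, signing, verification) written in plain Python over a tiny prime-field toy curve. It is not the design described above: the paper's implementation works over a 163-bit Koblitz binary-field curve in VHDL and draws the private key from a W7 key-stream generator, none of which is reproduced here; the curve parameters, helper names and the SHA-256 reduction are assumptions made only for this sketch.

```python
"""Illustrative ECDSA in pure Python (3.8+) over a tiny toy curve."""
import hashlib
import random
from math import gcd

# Tiny toy curve y^2 = x^3 + a*x + b over GF(p) -- for illustration only.
p, a, b = 17, 2, 2

def inv_mod(x, m):
    return pow(x, -1, m)

def add(P, Q):
    """Affine point addition; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        lam = (3 * x1 * x1 + a) * inv_mod(2 * y1, p) % p
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):
    """Double-and-add scalar multiplication k*P."""
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

# Brute-force a base point G and its order n (acceptable on a toy curve).
G = next((x, y) for x in range(p) for y in range(1, p)
         if (y * y - x ** 3 - a * x - b) % p == 0)
n, T = 1, G
while T is not None:
    T, n = add(T, G), n + 1

def hashed(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def keygen():
    d = random.randrange(1, n)        # private key (the paper uses W7 here)
    return d, mul(d, G)               # (private, public) pair

def sign(d, msg):
    while True:
        k = random.randrange(1, n)
        if gcd(k, n) != 1:
            continue
        R = mul(k, G)
        r = R[0] % n
        s = inv_mod(k, n) * (hashed(msg) + d * r) % n
        if r and s and gcd(s, n) == 1:
            return r, s

def verify(Q, msg, sig):
    r, s = sig
    w = inv_mod(s, n)
    X = add(mul(hashed(msg) * w % n, G), mul(r * w % n, Q))
    return X is not None and X[0] % n == r

d, Q = keygen()
signature = sign(d, b"sensor reading #1")
print(verify(Q, b"sensor reading #1", signature))   # True
# On such a tiny curve a forged message can occasionally verify by chance.
print(verify(Q, b"tampered reading", signature))
```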

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows XP/UBUNTU.
  • Implementation : NS2
  • NS2 Version : 2.28
  • Front End : OTCL (Object Oriented Tool Command  Language)
  • Tool : Cygwin (To simulate in Windows OS)

REFERENCE:

Ghanmy Nabil, Khlif Naziha, Fourati Lamia, Kamoun Lotfi, “Hardware implementation of Elliptic Curve Digital Signature Algorithm (ECDSA) on Koblitz Curves”, IEEE 2013.

ProHet: A Probabilistic Routing Protocol with Assured Delivery Rate in Wireless Heterogeneous Sensor Networks

ABSTRACT:

Due to different requirements in applications, sensors with different capacities are deployed. How to design efficient, reliable and scalable routing protocols in such wireless heterogeneous sensor networks (WHSNs) with intermittent asymmetric links is a challenging task. In this paper, we propose ProHet: a distributed probabilistic routing protocol for WHSNs that utilizes asymmetric links to reach an assured delivery rate with low overhead. The ProHet protocol first produces a bidirectional routing abstraction by finding a reverse path for every asymmetric link. Then, it uses a probabilistic strategy to choose forwarding nodes based on historical statistics using local information. Analysis shows that ProHet can achieve an assured delivery rate ρ if ρ is set within its upper bound. Extensive simulations are conducted to verify its efficiency.
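
The snippet below sketches, in Python, the probabilistic forwarder-selection idea described above: neighbours are ranked by locally recorded delivery statistics and just enough of them are chosen so that the estimated chance of at least one successful delivery reaches a target rate ρ. The Neighbor record, its fields and the greedy rule are hypothetical illustrations, not ProHet's actual message formats or decision rule.

```python
"""Sketch of history-based probabilistic forwarder selection."""
from dataclasses import dataclass
from typing import List

@dataclass
class Neighbor:
    node_id: int
    sent: int = 0        # packets forwarded to this neighbour so far
    delivered: int = 0   # packets acknowledged over the reverse path

    @property
    def delivery_ratio(self) -> float:
        # Optimistic prior of 1.0 until we have history.
        return self.delivered / self.sent if self.sent else 1.0

def select_forwarders(neighbors: List[Neighbor], rho: float) -> List[Neighbor]:
    """Greedily add the best neighbours until P(at least one success) >= rho."""
    chosen, p_fail = [], 1.0
    for nb in sorted(neighbors, key=lambda x: x.delivery_ratio, reverse=True):
        chosen.append(nb)
        p_fail *= (1.0 - nb.delivery_ratio)
        if 1.0 - p_fail >= rho:
            break
    return chosen  # may fall short if rho exceeds the achievable upper bound

def record_outcome(nb: Neighbor, delivered: bool) -> None:
    """Update the local historical statistics after each forwarding attempt."""
    nb.sent += 1
    nb.delivered += int(delivered)

if __name__ == "__main__":
    table = [Neighbor(1, 20, 12), Neighbor(2, 10, 9), Neighbor(3, 15, 6)]
    picked = select_forwarders(table, rho=0.95)
    print([nb.node_id for nb in picked])   # e.g. [2, 1]
```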

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows XP/UBUNTU.
  • Implementation : NS2
  • NS2 Version : 2.28
  • Front End : OTCL (Object Oriented Tool Command  Language)
  • Tool : Cygwin (To simulate in Windows OS)

REFERENCE:

Xiao Chen, Zanxun Dai, Wenzhong Li, Yuefei Hu, Jie Wu, Hongchi Shi, and Sanglu Lu, “ProHet: A Probabilistic Routing Protocol with Assured Delivery Rate in Wireless Heterogeneous Sensor Networks”, IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 12, NO. 4, APRIL 2013.

A Highly Scalable Key Pre-Distribution Scheme for Wireless Sensor Networks

ABSTRACT:

Given the sensitivity of the potential WSN applications and because of resource limitations, key management emerges as a challenging issue for WSNs. One of the main concerns when designing a key management scheme is network scalability. Indeed, the protocol should support a large number of nodes to enable a large-scale deployment of the network. In this paper, we propose a new scalable key management scheme for WSNs which provides good secure connectivity coverage. For this purpose, we make use of the unital design theory. We show that the basic mapping from unitals to key pre-distribution allows us to achieve high network scalability. Nonetheless, this naive mapping does not guarantee a high key sharing probability. Therefore, we propose an enhanced unital-based key pre-distribution scheme providing high network scalability and a good key sharing probability, approximately lower bounded by 1 - e^-1 ≈ 0.632. We conduct approximate analysis and simulations and compare our solution to those of existing methods for different criteria such as storage overhead, network scalability, network connectivity, average secure path length and network resiliency. Our results show that the proposed approach enhances the network scalability while providing high secure connectivity coverage and overall improved performance. Moreover, for an equal network size, our solution significantly reduces the storage overhead compared to those of existing solutions.

EXISTING SYSTEM:

Wireless sensor networks (WSNs) are increasingly used in critical applications within several fields, including the military, medical and industrial sectors. Given the sensitivity of these applications, sophisticated security services are required. Key management is a cornerstone for many security services, such as confidentiality and authentication, which are required to secure communications in WSNs. The establishment of secure links between nodes is thus a challenging problem in WSNs. Because of resource limitations, symmetric key establishment is one of the most suitable paradigms for securing exchanges in WSNs. On the other hand, because of the lack of infrastructure in WSNs, there is usually no trusted third party that can attribute pairwise secret keys to neighboring nodes, which is why most existing solutions are based on key pre-distribution.

DISADVANTAGES OF EXISTING SYSTEM:

A host of research work has dealt with the symmetric key pre-distribution issue for WSNs, and many solutions have been proposed. The existing schemes have several disadvantages: the design of the key rings (blocks of keys) is strongly related to the network size, so these solutions either suffer from low scalability (number of supported nodes) or degrade other performance metrics, including secure connectivity, storage overhead and resiliency, in the case of large networks.

PROPOSED SYSTEM:

In the proposed system, our aim is to tackle the scalability issue without degrading the other network performance metrics. For this purpose, we target the design of a scheme which ensures good secure coverage of large-scale networks with a low key storage overhead and good network resiliency. To this end, we make use of unital design theory for efficient WSN key pre-distribution.

ADVANTAGES OF PROPOSED SYSTEM:

The advantages of the proposed system as follows:

  • We propose a naive mapping from unital design to key pre-distribution and show through analysis that it allows us to achieve high scalability.
  • We propose an enhanced unital-based key pre-distribution scheme that maintains a good key sharing probability while enhancing the network scalability.
  • We analyze and compare our new approach against the main existing schemes with respect to different criteria: storage overhead, energy consumption, network scalability, secure connectivity coverage, average secure path length and network resiliency.

SYSTEM ARCHITECTURE:

BLOCK DIAGRAM:


MODULES:

  1. Node Deployment
  2. Key Generation
  3. Key Pre-distribution Technique
  4. Secure Transmission with Energy

MODULES DESCRIPTION:

Node Deployment

The first module is node deployment, where the nodes are deployed by specifying the number of nodes in the network. Each node is deployed with a unique ID (identity) number so that the nodes can be differentiated, and each node is also assigned an energy level.

Key Generation

After the node deployment module, the key generation module is developed. The number of nodes and the number of blocks are specified so that the keys can be generated. The keys are symmetric, and each key is displayed in the text area of its node.

 Key Pre-distribution Technique:

In this module, we generate the blocks of an order-m unital design, where each block corresponds to a key set. We then pre-load each node with t completely disjoint blocks, where t is a protocol parameter that we discuss later in this section. In Lemma 1, we demonstrate the condition for the existence of such t completely disjoint blocks among the unital blocks. In the basic approach, each node is pre-loaded with only one unital block, and we proved that each two nodes share at most one key. In contrast, pre-loading each node with t disjoint unital blocks means that each two nodes share between zero and t^2 keys, since each two unital blocks share at most one element. After the deployment step, each two neighbors exchange the identifiers of their keys in order to determine their common keys. This approach enhances the network resiliency, since attackers have to compromise more overlapping keys to break a secure link. Otherwise, when neighbors do not share any key, they should find a secure path composed of successive secure links.
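
As a rough illustration of the pre-loading and key-discovery steps just described, the Python sketch below gives each node t pairwise-disjoint key blocks and lets two neighbours intersect their key-identifier sets. For simplicity the blocks are drawn at random from a global pool rather than generated from an actual unital design, so the at-most-one-common-key-per-block-pair property of real unitals is not enforced; all sizes and names are assumptions made for the sketch.

```python
"""Toy sketch of t-disjoint-block key pre-distribution and key discovery."""
import random

POOL_SIZE = 1000     # number of distinct keys in the global pool
BLOCK_SIZE = 5       # keys per block (m + 1 for a unital of order m)
T = 3                # disjoint blocks pre-loaded per node

def preload_node(rng: random.Random) -> set:
    """Return the key-identifier ring of one node: t disjoint blocks."""
    ring = set()
    while len(ring) < T * BLOCK_SIZE:
        block = set(rng.sample(range(POOL_SIZE), BLOCK_SIZE))
        if ring.isdisjoint(block):       # enforce the 'completely disjoint' rule
            ring |= block
    return ring

def shared_keys(node_a: set, node_b: set) -> set:
    """Keys discovered after two neighbours exchange key identifiers."""
    return node_a & node_b

if __name__ == "__main__":
    rng = random.Random(7)
    nodes = [preload_node(rng) for _ in range(200)]
    # Empirical key-sharing probability between random node pairs.
    pairs = [(rng.randrange(200), rng.randrange(200)) for _ in range(2000)]
    hits = sum(1 for i, j in pairs if i != j and shared_keys(nodes[i], nodes[j]))
    print("key-sharing probability ~", hits / len(pairs))
```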

Secure Transmission with Energy

In this module, the node distance is configured and the nodes are displayed with their neighbor information. A nearby neighbor node is selected, and its energy level is first checked to verify that secure transmission is possible. The data is then uploaded and sent to the destination node, where the key is verified before the data is received.

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

  • Processor – Pentium IV
  • Speed – 1 GHz
  • RAM – 256 MB (min)
  • Hard Disk – 20 GB
  • Key Board – Standard Windows Keyboard
  • Mouse – Two or Three Button Mouse
  • Monitor – SVGA

SOFTWARE CONFIGURATION:-

  • Operating System : Windows XP
  • Programming Language : NS2
  • Tool : CYGWIN

REFERENCE:

Walid Bechkit, Yacine Challal, Abdelmadjid Bouabdallah, and Vahid Tarokh, “A Highly Scalable Key Pre-Distribution Scheme for Wireless Sensor Networks”, IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 12, NO. 2, FEBRUARY 2013.

Resource Allocation for QoS Support in Wireless Mesh Networks

ABSTRACT:

Many next-generation applications (such as video flows) are likely to have associated minimum data rate requirements in order to ensure satisfactory quality as perceived by end-users. In this paper, we develop a framework to address the problem of maximizing the aggregate utility of traffic flows in a multi-hop wireless network, with constraints imposed both due to self-interference and minimum rate requirements. The parameters that are tuned in order to maximize the utility are (i) the transmission powers of individual nodes and (ii) the channels assigned to the different communication links. Our framework is based on a cross-decomposition technique that takes both inter-flow interference and self-interference into account. The output of our framework is a schedule that dictates which links are to be activated in each slot and the parameters associated with each of those links. If the minimum rate constraint cannot be satisfied for all of the flows, the framework intelligently rejects a subset of the flows and recomputes a schedule for the remaining flows. We also design an admission control module that determines if new flows can be admitted without violating the rate requirements of the existing flows in the network. We provide numerical results to demonstrate the efficacy of our framework.

EXISTING SYSTEM:

The problem of resource allocation and congestion control in wired networks has received a lot of attention. In their seminal work, Kelly et al. modeled the problem of flow control as an optimization problem where the objective is to maximize the aggregate utility of elastic traffic sources subject to capacity constraints on the links that compose the network. Inspired by Kelly's work, there has been follow-up work in which TCP congestion control is modeled as a convex optimization problem, the objective being the maximization of an aggregate user utility; these efforts propose distributed primal-dual solutions to the problem.

DISADVANTAGES OF EXISTING SYSTEM:

In contrast with wireline networks, the capacity of a wireless link does not depend on all other flows in the network, but on the flows that use links on the same channel (and that are close enough) and on external interference. The dependencies between flows are regulated by the protocols at both the link and transport layers. However, these prior efforts do not consider the provision of quality of service in terms of supporting minimum rates for the flows that share the network. More importantly, the QoS needs to be provided under conditions of self-interference, where the packets of a flow interfere with other packets that belong to the same flow along a multi-hop path.

PROPOSED SYSTEM:

In this paper, we propose a framework for maximizing the aggregate utility of traffic sources while adhering to the capacity constraints of each link and the minimum rate requirements imposed by each of the sources. The framework takes into account the self-interference of flows and assigns (a) channels, (b) transmission power levels and (c) time slots to each link such that the above objective is achieved. It dictates the rates at which each traffic source will send packets such that the minimum rate requirements of all coexisting flows are met. If the minimum rate requirements of all the flows cannot be met, the framework rejects a subset of flows (based on fairness considerations), recomputes the schedule and allocates resources to each of the remaining flows.

ADVANTAGES OF PROPOSED SYSTEM:

  • The framework maximizes the aggregate utility of flows taking into account constraints that arise due to self-interference (wireless channel imposed constraints) and minimum rate requirements of sources (QoS requirements).
  • If a solution is not feasible, the framework selectively drops a few of the sources and redistributes the resources among the others in a way that their QoS requirements are met.
  • The proposed framework readily leads to a simple and effective admission control mechanism.
  • We demonstrate the efficacy of our approach with numerical results. We also theoretically compute performance bounds for our framework as compared with an optimal strategy.

SYSTEM ARCHITECTURE:

MODULES:

  • Creating System Model
  • Channel Assignment
  • Resource allocation
  • Admission control module

MODULES DESCRIPTION:

Creating System Model

We consider a pre-planned WMN consisting of a set of stationary wireless nodes (routers) connected by a set L of unidirectional links. Some of the nodes are assumed to have the ability to perform functions of the gateway, and one of them is selected to act as the gateway to the Internet. Each node is equipped with a single network interface card (NIC) and is associated with one of C orthogonal (non-overlapping) channels for transmitting or receiving. A sender-receiver pair can communicate with each other only if both of them are tuned to the same channel. In this work dynamic channel switching is assumed to be possible with the NIC. Nodes operate in a half-duplex manner so that at any given time a node can either transmit or receive (but not both). In addition, it is assumed that the network operates in a time-slotted mode; time is divided into slots of equal duration.

Channel Assignment

The proposed algorithm allocates channels in a way that (a) self-interference is avoided and (b) co-channel interference levels among links that use the same channel are kept as low as possible. With our algorithm, links with higher costs are assigned higher priorities in terms of channel assignment over the links with lower cost. This is because links with higher costs suffer from higher levels of congestion and thus, scheduling these links is harder. The proposed channel assignment algorithm starts by sorting links in the descending order of their link costs. Then, channels are assigned to the links in that order. The proposed algorithm avoids self-interference by not assigning a channel to any link whose incident links have already been assigned channels. In other words, a link is eligible for activation only if it has no active neighbor links. In order to alleviate the effects of cochannel interference, the channel that is assigned to a link is selected based on the sum of link gains between all the interfering senders using the same channel and the receiver of the link. This sum is calculated for each of the channels and the channel with the least associated value is selected for the link.
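The Python sketch below mirrors the greedy procedure just described: links are visited in decreasing order of cost, a link is skipped if any incident link is already active (which would cause self-interference), and an eligible link receives the channel with the smallest accumulated gain from interfering senders already using it. The Link record, the toy gain() function and the example topology are illustrative assumptions, not the paper's implementation.

```python
"""Sketch of the greedy cost-ordered channel assignment described above."""
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class Link:
    link_id: int
    tx: int                 # sender node
    rx: int                 # receiver node
    cost: float             # congestion price of the link
    channel: Optional[int] = None

def gain(sender: int, receiver: int) -> float:
    """Path gain between two nodes; a real system would use measurements."""
    return 1.0 / (1.0 + abs(sender - receiver))   # toy placeholder

def assign_channels(links: List[Link], num_channels: int) -> None:
    active: List[Link] = []
    for link in sorted(links, key=lambda l: l.cost, reverse=True):
        # Self-interference rule: skip if an incident link is already active.
        if any({link.tx, link.rx} & {l.tx, l.rx} for l in active):
            continue
        # Pick the channel with the least interference at this link's receiver.
        interference: Dict[int, float] = {c: 0.0 for c in range(num_channels)}
        for other in active:
            interference[other.channel] += gain(other.tx, link.rx)
        link.channel = min(interference, key=interference.get)
        active.append(link)

if __name__ == "__main__":
    topo = [Link(0, 1, 2, 5.0), Link(1, 2, 3, 4.0),
            Link(2, 4, 5, 3.0), Link(3, 6, 7, 2.0)]
    assign_channels(topo, num_channels=3)
    print([(l.link_id, l.channel) for l in topo])
```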

Resource Allocation:

The main objective of this module is to allocate resources to the different connections such that the minimum rate requirements of each connection are met. The proposed approach requires the transport layer (in terms of end-to-end rate allocation) and the physical layer (in terms of the channel and power schedule) to be aligned. Coordination between the two layers can be implemented on different timescales: end-to-end rate allocation (through TCP/AQM) on the fast timescale and incremental channel and power updates on the slow timescale. Most of the common TCP/AQM variants can be interpreted as distributed methods for solving the network flow optimization problem (determining the end-to-end rates under fixed link capacities). Based on an initial schedule (a simple TDMA link schedule for the first L slots), we run the TCP/AQM scheme until convergence (this may require the schedule to be applied repeatedly). After rate convergence, each node reports the link prices associated with its incoming and outgoing links to the gateway, where the proposed resource allocation scheme is applied. On receiving the link prices from the entire set of nodes, the gateway finds the channels and transmit powers by applying the proposed resource allocation scheme; it then augments the schedule. The procedure is then repeated with this revised schedule.
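
The following minimal sketch illustrates the price-based rate-allocation loop that the module above refers to, with TCP/AQM interpreted as a distributed solution of a network utility maximization problem: each link raises its price in proportion to its excess load, and each source with a logarithmic utility sets its rate to the inverse of the total price along its path. Fixed link capacities, log utilities and the step size are assumptions for this illustration; the channel/power augmentation step is not modelled.

```python
"""Sketch of a dual (price-based) end-to-end rate allocation loop."""
from typing import Dict, List

def dual_rate_allocation(routes: Dict[str, List[str]],
                         capacity: Dict[str, float],
                         step: float = 0.01,
                         iters: int = 5000) -> Dict[str, float]:
    price = {l: 0.0 for l in capacity}          # dual variables (link prices)
    rate = {f: 1.0 for f in routes}
    for _ in range(iters):
        # Source update: x_f = argmax log(x) - x * q_f  =>  x_f = 1 / q_f.
        for f, path in routes.items():
            q = sum(price[l] for l in path)
            rate[f] = 1.0 / q if q > 0 else 10.0    # cap rate while price is zero
        # Link (AQM) update: the price grows with the excess load, stays >= 0.
        for l in capacity:
            load = sum(rate[f] for f, path in routes.items() if l in path)
            price[l] = max(0.0, price[l] + step * (load - capacity[l]))
    return rate

if __name__ == "__main__":
    routes = {"f1": ["l1", "l2"], "f2": ["l2"], "f3": ["l1"]}
    caps = {"l1": 1.0, "l2": 1.0}
    print(dual_rate_allocation(routes, caps))
```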

Admission control module

An admission control strategy is essential to provide protection to the sources that are currently being serviced. In other words, the QoS of existing flows in terms of a minimum rate (being currently provided) cannot be compromised in order to accommodate new incoming flows. Our resource allocation framework can be easily adapted to support admission control.

SYSTEM CONFIGURATION:-

HARDWARE REQUIREMENTS:-

  • Processor – Pentium IV
  • Speed – 1 GHz
  • RAM – 256 MB
  • Hard Disk – 20 GB
  • Key Board – Standard Windows Keyboard
  • Mouse – Two or Three Button Mouse
  • Monitor – SVGA

SOFTWARE REQUIREMENTS: 

  • Operating System : Windows XP
  • Programming Language : NS2
  • Tool : CYGWIN

REFERENCE:

Tae-Suk Kim, Yong Yang, Jennifer C. Hou, and Srikanth V. Krishnamurthy, “Resource Allocation for QoS Support in Wireless Mesh Networks”, IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2013.

On the Node Clone Detection in Wireless Sensor Networks in NS2

ABSTRACT:

Wireless sensor networks are vulnerable to the node clone attack, and several distributed protocols have been proposed to detect this attack. However, they require assumptions that are too strong to be practical for large-scale, randomly deployed sensor networks. In this paper, we propose two novel node clone detection protocols with different tradeoffs between network conditions and performance. The first one is based on a distributed hash table (DHT), by which a fully decentralized, key-based caching and checking system is constructed to catch cloned nodes effectively. The protocol's performance in terms of efficient storage consumption and high security level is theoretically deduced through a probability model, and the resulting equations, with necessary adjustments for real application, are supported by the simulations. Although the DHT-based protocol incurs a communication cost similar to that of previous approaches, it may be considered a little high for some scenarios. To address this concern, our second distributed detection protocol, named randomly directed exploration, presents good communication performance for dense sensor networks through a probabilistic directed forwarding technique along with random initial direction and border determination. The simulation results uphold the protocol design and show its efficiency in communication overhead and its satisfactory detection probability.

EXISTING SYSTEM:

WIRELESS sensor networks (WSNs) have gained a great deal of attention in the past decade due to their wide range of application areas and formidable design challenges. In general, wireless sensor networks consist of hundreds or thousands of low-cost, resource-constrained, distributed sensor nodes, which are usually scattered in the surveillance area randomly and work without attendance. If the operation environment is hostile, security mechanisms against adversaries should be taken into consideration. Among the many physical attacks on sensor networks, the node clone is a serious and dangerous one. Because of production expense limitations, sensor nodes generally lack tamper-resistant hardware components; thus, an adversary can capture a few nodes, extract their code and all secret credentials, and use those materials to clone many nodes out of off-the-shelf sensor hardware. Those cloned nodes, which seem legitimate, can freely join the sensor network and then significantly enlarge the adversary's capacity to manipulate the network maliciously.

DISADVANTAGES OF EXISTING SYSTEM:

  • Among the many physical attacks on sensor networks, the node clone is a serious and dangerous one.
  • The existing schemes have insufficient storage consumption performance and a low security level.

PROPOSED SYSTEM:

In this paper, we present two novel, practical node clone detection protocols with different tradeoffs on network conditions and performance.

The first proposal is based on a distributed hash table (DHT) by which a fully decentralized, key-based caching and checking system is constructed to catch cloned nodes. The protocol's performance in terms of memory consumption and a critical security metric is theoretically deduced through a probability model, and the resulting equations, with necessary adjustments for real application, are supported by the simulations. In accordance with our analysis, the comprehensive simulation results show that the DHT-based protocol can detect node clones with a high security level and holds strong resistance against an adversary's attacks.

Our second protocol, named randomly directed exploration, is intended to provide highly efficient communication performance with adequate detection probability for dense sensor networks. In the protocol, nodes initially send claiming messages containing a neighbor-list along with a maximum hop limit to randomly selected neighbors; then, the subsequent message transmission is regulated by a probabilistic directed technique to approximately maintain a line property through the network as well as to incur sufficient randomness for better performance in communication and resilience against adversaries. In addition, a border determination mechanism is employed to further reduce the communication payload. During forwarding, intermediate nodes explore claiming messages for node clone detection. By design, this protocol consumes almost minimal memory, and the simulations show that it outperforms all other detection protocols in terms of communication cost, while the detection probability is satisfactory.

ADVANTAGES OF PROPOSED SYSTEM:

  • The DHT-based protocol can detect node clone with high security level and holds strong resistance against adversary’s attacks.
  • Randomly directed exploration, is intended to provide highly efficient communication performance with adequate detection probability for dense sensor networks.

SYSTEM ARCHITECTURE:

BLOCK DIAGRAM:

Techniques and protocol Used:

  1. Distributed hash table(DHT)
  2. Randomly directed exploration

Distributed hash table (DHT):

A distributed hash table (DHT) provides a fully decentralized, key-based caching and checking system that is constructed to catch cloned nodes. The protocol's performance in terms of memory consumption and a critical security metric is theoretically deduced through a probability model, and the resulting equations, with necessary adjustments for real application, are supported by the simulations. In accordance with our analysis, the comprehensive simulation results show that the DHT-based protocol can detect node clones with a high security level and holds strong resistance against an adversary's attacks.

Randomly directed exploration:

This protocol is intended to provide highly efficient communication performance with adequate detection probability for dense sensor networks. In the protocol, nodes initially send claiming messages containing a neighbor-list along with a maximum hop limit to randomly selected neighbors; then, the subsequent message transmission is regulated by a probabilistic directed technique to approximately maintain a line property through the network as well as to incur sufficient randomness for better performance in communication and resilience against adversaries. In addition, a border determination mechanism is employed to further reduce the communication payload. During forwarding, intermediate nodes explore claiming messages for node clone detection. By design, this protocol consumes almost minimal memory, and the simulations show that it outperforms all other detection protocols in terms of communication cost, while the detection probability is satisfactory.
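
The sketch below illustrates the probabilistic directed forwarding idea in Python: a claiming message starts in a random direction and, at each hop, is handed with bearing-weighted randomness to a neighbour that roughly keeps it on a line, until the hop limit or the network border is reached. The geometry helpers, weighting rule and grid demo are assumptions made for the illustration; the clone check performed by intermediate nodes against the carried neighbor-list is omitted here.

```python
"""Sketch of probabilistic directed forwarding (approximate line property)."""
import math
import random
from typing import Dict, List, Optional, Tuple

Coord = Tuple[float, float]

def bearing(src: Coord, dst: Coord) -> float:
    return math.atan2(dst[1] - src[1], dst[0] - src[0])

def angle_diff(a: float, b: float) -> float:
    return abs((a - b + math.pi) % (2 * math.pi) - math.pi)

def next_hop(me: Coord, heading: float, neighbors: Dict[int, Coord],
             rng: random.Random) -> Optional[int]:
    """Randomly favour neighbours that keep the message close to a line."""
    if not neighbors:
        return None                                  # border reached
    weights = {nid: 1.0 / (0.1 + angle_diff(heading, bearing(me, pos)))
               for nid, pos in neighbors.items()}
    pick = rng.uniform(0, sum(weights.values()))
    for nid, w in weights.items():
        pick -= w
        if pick <= 0:
            return nid
    return nid                                       # floating-point fallback

def forward(positions: Dict[int, Coord], adjacency: Dict[int, List[int]],
            start: int, max_hops: int, rng: random.Random) -> List[int]:
    """Return the IDs visited by one claiming message (clone checks omitted)."""
    heading = rng.uniform(-math.pi, math.pi)         # random initial direction
    route, current = [start], start
    for _ in range(max_hops):
        candidates = {n: positions[n] for n in adjacency[current]
                      if n not in route}             # do not loop back
        nxt = next_hop(positions[current], heading, candidates, rng)
        if nxt is None:
            break
        heading = bearing(positions[current], positions[nxt])
        route.append(nxt)
        current = nxt
    return route

if __name__ == "__main__":
    pos = {i: (float(i % 5), float(i // 5)) for i in range(25)}   # 5x5 grid
    adj = {i: [j for j in pos if j != i and math.dist(pos[i], pos[j]) <= 1.0]
           for i in pos}
    print(forward(pos, adj, start=12, max_hops=6, rng=random.Random(1)))
```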

MODULES:

  • Setting up Network Model
  • Initialization Process
  • Claiming Neighbor’s information
  • Processing Claiming Message
  • Sink Module
  • Performance Analysis

MODULES DESCRIPTION:

Setting up Network Model

Our first module is setting up the network model. We consider a large-scale, homogeneous sensor network consisting of resource-constrained sensor nodes. Analogous to previous distributed detection approaches, we assume that an identity-based public-key cryptography facility is available in the sensor network. Prior to deployment, each legitimate node is allocated a unique ID and a corresponding private key by a trusted third party. The public key of a node is its ID, which is the essence of an identity-based cryptosystem. Consequently, no node can lie to others about its identity. Moreover, anyone is able to verify messages signed by a node using the identity-based key. The source nodes in our problem formulation serve as storage points which cache the data gathered by other nodes and periodically transmit it to the sink in response to user queries. Such a network architecture is consistent with the design of storage-centric sensor networks.

Initialization Process:

To activate all nodes to start a new round of node clone detection, the initiator uses a broadcast authentication scheme to release an action message including a monotonically increasing nonce, a random round seed, and an action time. The nonce is intended to prevent adversaries from launching a DoS attack by repeatedly broadcasting action messages.

Claiming neighbor’s information:

Upon receiving an action message, a node verifies that the message nonce is greater than the last nonce and that the message signature is valid. If both checks pass, the node updates the nonce and stores the seed. At the designated action time, the node operates as an observer that generates a claiming message for each neighbor (examinee) and transmits the message through the overlay network with respect to the claiming probability. Nodes can start transmitting claiming messages at the same time, but the resulting heavy traffic may cause serious interference and degrade the network capacity. To relieve this problem, we may specify a sending period, during which nodes randomly pick a transmission time for every claiming message.

 Processing claiming messages:

A claiming message will be forwarded to its destination node via several Chord intermediate nodes. Only those nodes in the overlay network layer (i.e., the source node, Chord intermediate nodes, and the destination node) need to process a message, whereas other nodes along the path simply route the message to temporary targets. Algorithm 1 for handling a message is the kernel of our DHT-based detection protocol. If the algorithm returns NIL, then the message has arrived at its destination. Otherwise, the message will be subsequently forwarded to the next node with the ID that is returned.
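
The Python sketch below shows the witness-side check that the message-handling kernel ultimately performs once a claiming message reaches its destination in the DHT: the witness caches the first claim seen for a node ID and flags a clone when a later claim for the same ID reports a different location. The Chord routing itself is abstracted away, and the Claim/WitnessCache layout is a hypothetical stand-in for the paper's cache table, not its Algorithm 1.

```python
"""Sketch of the destination-node (witness) clone check in the DHT protocol."""
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass(frozen=True)
class Claim:
    examinee_id: int            # the node being vouched for
    location: Tuple[int, int]   # claimed deployment position
    observer_id: int            # neighbour that produced the claim

class WitnessCache:
    def __init__(self) -> None:
        self._table: Dict[int, Claim] = {}

    def inspect(self, claim: Claim) -> Optional[Tuple[Claim, Claim]]:
        """Store the claim; return the conflicting pair if a clone is caught."""
        earlier = self._table.get(claim.examinee_id)
        if earlier is None:
            self._table[claim.examinee_id] = claim
            return None
        if earlier.location != claim.location:
            return earlier, claim        # same ID claimed at two locations
        return None                      # consistent duplicate claim

if __name__ == "__main__":
    witness = WitnessCache()
    witness.inspect(Claim(42, (10, 20), observer_id=7))
    evidence = witness.inspect(Claim(42, (90, 15), observer_id=3))
    if evidence:
        print("clone detected for node", evidence[0].examinee_id)
```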

Sink Module:

The sink is the point of contact for users of the sensor network. Each time the sink receives a question from a user, it first translates the question into multiple queries and then disseminates the queries to the corresponding mobile relays, which process the queries based on their data and return the query results to the sink. The sink unifies the query results from the multiple storage nodes into the final answer and sends it back to the user.

Performance Analysis

For the DHT-based detection protocol, we use the following specific measurements to evaluate its performance:

  • Average number of transmitted messages, representing the protocol’s communication cost;
  • Average size of node cache tables, standing for the protocol’s storage consumption;
  • Average number of witnesses, serving as the protocol’s security level because the detection protocol is deterministic and symmetric.

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

  • Processor – Pentium IV
  • Speed – 1 GHz
  • RAM – 256 MB (min)
  • Hard Disk – 20 GB
  • Key Board – Standard Windows Keyboard
  • Mouse – Two or Three Button Mouse
  • Monitor – SVGA

 

SOFTWARE CONFIGURATION:-

  • Operating System : Windows XP
  • Programming Language : NS2
  • Tool : CYGWIN

REFERENCE:

Zhijun Li and Guang Gong, “On the Node Clone Detection in Wireless Sensor Networks”, IEEE/ACM TRANSACTIONS ON NETWORKING, 2013.

Detection and Localization of Multiple Spoofing Attackers in Wireless Networks

ABSTRACT:

Wireless spoofing attacks are easy to launch and can significantly impact the performance of networks. Although the identity of a node can be verified through cryptographic authentication, conventional security approaches are not always desirable because of their overhead requirements. In this paper, we propose to use spatial information, a physical property associated with each node that is hard to falsify and not reliant on cryptography, as the basis for 1) detecting spoofing attacks; 2) determining the number of attackers when multiple adversaries masquerade as the same node identity; and 3) localizing multiple adversaries. We propose to use the spatial correlation of received signal strength (RSS) inherited from wireless nodes to detect the spoofing attacks. We then formulate the problem of determining the number of attackers as a multiclass detection problem. Cluster-based mechanisms are developed to determine the number of attackers. When training data are available, we explore using the Support Vector Machines (SVM) method to further improve the accuracy of determining the number of attackers. In addition, we developed an integrated detection and localization system that can localize the positions of multiple attackers. We evaluated our techniques through two testbeds using both an 802.11 (WiFi) network and an 802.15.4 (ZigBee) network in two real office buildings. Our experimental results show that our proposed methods can achieve over 90 percent Hit Rate and Precision when determining the number of attackers. Our localization results using a representative set of algorithms provide strong evidence of high accuracy in localizing multiple adversaries.

EXISTING SYSTEM:

In spite of existing 802.11 security techniques, including Wired Equivalent Privacy (WEP), WiFi Protected Access (WPA), and 802.11i (WPA2), such methods can only protect data frames; an attacker can still spoof management or control frames to cause a significant impact on networks. Spoofing attacks can further facilitate a variety of traffic injection attacks, such as attacks on access control lists, rogue access point (AP) attacks, and eventually Denial-of-Service (DoS) attacks. A broad survey of possible spoofing attacks can be found in the literature. Moreover, in a large-scale network, multiple adversaries may masquerade as the same identity and collaborate to launch malicious attacks such as network resource utilization attacks and denial-of-service attacks quickly. Therefore, it is important to 1) detect the presence of spoofing attacks, 2) determine the number of attackers, and 3) localize multiple adversaries and eliminate them. Most existing approaches to address potential spoofing attacks employ cryptographic schemes. However, the application of cryptographic schemes requires reliable key distribution, management, and maintenance mechanisms. It is not always desirable to apply these cryptographic methods because of their infrastructural, computational, and management overhead. Further, cryptographic methods are susceptible to node compromise, which is a serious concern as most wireless nodes are easily accessible, allowing their memory to be easily scanned.

DISADVANTAGES OF EXISTING SYSTEM:

  • Among various types of attacks, identity-based spoofing attacks are especially easy to launch and can cause significant damage to network performance.
  • For instance, in an 802.11 network, it is easy for an attacker to gather useful MAC address information during passive monitoring and then modify its MAC address by simply issuing an ifconfig command to masquerade as another device.
  • Not self-defensive
  • Effective only when implemented by a large number of networks
  • Deployment is costly
  • The incentive for an ISP is very low

PROPOSED SYSTEM:

In this work, we propose to use received signal strength (RSS)-based spatial correlation, a physical property associated with each wireless node that is hard to falsify and not reliant on cryptography, as the basis for detecting spoofing attacks. Since we are concerned with attackers who have different locations than legitimate wireless nodes, utilizing spatial information to address spoofing attacks has the unique power not only to identify the presence of these attacks but also to localize adversaries. An added advantage of employing spatial correlation to detect spoofing attacks is that it does not require any additional cost or modification to the wireless devices themselves. We focus on static nodes in this work, which are common in spoofing scenarios; we addressed spoofing detection in mobile environments in our other work. Faria and Cheriton proposed the use of matching rules of signal prints for spoofing detection; Sheng et al. modeled the RSS readings using a Gaussian mixture model; and Chen et al. used RSS and K-means cluster analysis to detect spoofing attacks. However, none of these approaches have the ability to determine the number of attackers when multiple adversaries use the same identity to launch attacks, which is the basis for further localizing multiple adversaries after attack detection. Although Chen et al. studied how to localize adversaries, their approach can only handle the case of a single spoofing attacker and cannot localize the attacker if the adversary uses different transmission power levels.

  • The proposed system uses the Inter-Domain Packet Filter (IDPF) architecture, a system that can be constructed solely based on the locally exchanged BGP updates.
  • Each node selects and propagates routes to its neighbors based on two sets of routing policies: import and export routing policies.
  • The IDPFs use a feasible path from the source node to the destination node, and a packet can reach the destination through one of its upstream neighbors.
  • When training data are available, we explore using the Support Vector Machines (SVM) method to further improve the accuracy of determining the number of attackers.
  • Our localization results, using a representative set of algorithms, provide strong evidence of high accuracy in localizing multiple adversaries.
  • The scheme uses received signal strength (RSS)-based spatial correlation in a cluster-based wireless sensor network.
  • RSS is a physical property associated with each wireless device that is hard to falsify and does not rely on cryptography, which makes it a suitable basis for detecting spoofing attacks in wireless networks.

ADVANTAGES OF PROPOSED SYSTEM:

  • GADE: a generalized attack detection model that can both detect spoofing attacks and determine the number of adversaries, using cluster analysis methods grounded on RSS-based spatial correlations among normal devices and adversaries.
  • IDOL: an integrated detection and localization system that can both detect attacks and find the positions of multiple adversaries, even when the adversaries vary their transmission power levels.
  • Damage reduction under SPM defense is high.
  • Client traffic.
  • Compared to other methods, the benefits of SPM are greater.
  • SPM is generic because its only goal is to filter spoofed packets.

 MODULES:

  • Network configuration
  • Generalized attack detection model
  • Integrated detection and localization framework
  • Performance evaluation

MODULES DESCRIPTION:

Network Configuration

The nodes are created and located in the simulation environment and can be moved from one location to another. The setdest command is used to give movement to a node. The random waypoint mobility model is used in our simulation. The nodes use omni-directional antennas to send and receive data. The signals are propagated from one location to another using the Two Ray Ground propagation model. A priority queue is maintained between any two nodes as the interface queue.

 Generalized Attack Detection Model

          The Generalized Attack Detection Model consists of two phases: attack detection, which detects the presence of an attack, and number determination, which determines the number of adversaries.

          The challenge in spoofing detection is to devise strategies that use the uniqueness of spatial information without using location directly, since the attackers' positions are unknown. The RSS property is closely correlated with location in physical space and is readily available in existing wireless networks. Although affected by random noise, environmental bias, and multipath effects, the RSS measured at a set of landmarks is closely related to the transmitter's physical location and is governed by the distance to the landmarks. The RSS readings at the same physical location are similar, whereas the RSS readings at different locations in physical space are distinctive. Thus, the RSS readings present strong spatial correlation characteristics.
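
To make the RSS-clustering idea concrete, the Python sketch below partitions the RSS readings attributed to one node identity into two clusters and declares a spoofing attack when the two cluster centres lie far apart, since readings from a single physical location should form one tight cluster. The plain 2-means routine, the 8 dB threshold and the synthetic readings are illustrative choices, not the paper's exact test statistic.

```python
"""Sketch of RSS-cluster-distance spoofing detection."""
import math
import random
from typing import List, Sequence, Tuple

Vector = Sequence[float]   # one RSS reading: dBm values at each landmark

def dist(a: Vector, b: Vector) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mean(points: List[Vector]) -> Tuple[float, ...]:
    return tuple(sum(p[i] for p in points) / len(points)
                 for i in range(len(points[0])))

def two_means(readings: List[Vector], iters: int = 20, seed: int = 0):
    rng = random.Random(seed)
    c1, c2 = rng.sample(list(readings), 2)
    for _ in range(iters):
        g1 = [r for r in readings if dist(r, c1) <= dist(r, c2)]
        g2 = [r for r in readings if dist(r, c1) > dist(r, c2)]
        if g1:
            c1 = mean(g1)
        if g2:
            c2 = mean(g2)
    return c1, c2

def spoofing_detected(readings: List[Vector], threshold_db: float = 8.0) -> bool:
    """Attack declared when the inter-centre distance exceeds the threshold."""
    c1, c2 = two_means(readings)
    return dist(c1, c2) > threshold_db

if __name__ == "__main__":
    legit = [(-50 + random.gauss(0, 2), -60 + random.gauss(0, 2))
             for _ in range(40)]
    mixed = legit + [(-70 + random.gauss(0, 2), -45 + random.gauss(0, 2))
                     for _ in range(40)]
    print(spoofing_detected(legit), spoofing_detected(mixed))
```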

 Integrated Detection and Localization Framework

          In this module, we develop an integrated system that can detect spoofing attacks, determine the number of attackers, and localize multiple adversaries.

          The traditional localization approaches use the averaged RSS from each node identity as input to estimate the position of a node. However, in wireless spoofing attacks, the RSS stream of a node identity may be mixed with RSS readings of both the original node and the spoofing nodes from different physical locations. The traditional method of averaging RSS readings cannot differentiate RSS readings from different locations and is thus not feasible for localizing adversaries.

          Different from traditional localization approaches, our integrated detection and localization system utilizes the RSS medoids returned from SILENCE as inputs to the localization algorithms to estimate the positions of adversaries. The returned positions include the location estimates of the original node and of the attackers in physical space. The system also handles adversaries using different transmission power levels: an adversary may vary its transmission power level when performing spoofing attacks so that the localization system cannot estimate its location accurately.
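
As a rough illustration of feeding cluster medoids into the localization stage, the sketch below turns each medoid (one per suspected transmitter) into a position estimate with a simple RSS-weighted centroid over landmark positions. The weighted centroid is only a stand-in for the representative localization algorithms the paper evaluates; the landmark coordinates and the dBm-to-weight mapping are assumptions.

```python
"""Sketch: per-medoid position estimates via an RSS-weighted centroid."""
from typing import List, Sequence, Tuple

Point = Tuple[float, float]

LANDMARKS: List[Point] = [(0.0, 0.0), (50.0, 0.0), (0.0, 50.0), (50.0, 50.0)]

def localize_from_medoid(medoid_rss_dbm: Sequence[float]) -> Point:
    """Estimate one transmitter position from one RSS medoid (one value per landmark)."""
    weights = [10 ** (rss / 10.0) for rss in medoid_rss_dbm]   # dBm -> mW
    total = sum(weights)
    x = sum(w * lx for w, (lx, _) in zip(weights, LANDMARKS)) / total
    y = sum(w * ly for w, (_, ly) in zip(weights, LANDMARKS)) / total
    return x, y

def localize_adversaries(medoids: List[Sequence[float]]) -> List[Point]:
    """One position estimate per medoid returned by the clustering stage."""
    return [localize_from_medoid(m) for m in medoids]

if __name__ == "__main__":
    # Two medoids for one spoofed identity: original node and one attacker.
    medoids = [(-45.0, -60.0, -62.0, -75.0), (-72.0, -58.0, -70.0, -48.0)]
    print(localize_adversaries(medoids))
```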

Performance Evaluation

The performance of the proposed scheme is evaluated by plotting graphs. The parameters used to evaluate the performance are as follows:

  • False positive Rate
  • Spoofing Detection rate
  • Throughput

 These parameter values are recorded in the trace file during the simulation using a record procedure. The recorded details are stored in the trace file, which is processed with Xgraph to produce the output graphs.

SYSTEM CONFIGURATION:-

HARDWARE REQUIREMENTS:-

  • Processor – Pentium III
  • Speed – 1 GHz
  • RAM – 256 MB (min)
  • Hard Disk – 20 GB
  • Floppy Drive – 1.44 MB
  • Key Board – Standard Windows Keyboard
  • Mouse – Two or Three Button Mouse
  • Monitor – SVGA

 

SOFTWARE REQUIREMENTS:-

  • Operating System : WINDOWS XP
  • Front End : NS2
  • TOOL : CYGWIN

REFERENCE:

Jie Yang, Yingying (Jennifer) Chen, Wade Trappe, and Jerry Cheng, “Detection and Localization of Multiple Spoofing Attackers in Wireless Networks”, IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 24, NO. 1, JANUARY 2013.

Vampire Attacks: Draining Life from Wireless Ad Hoc Sensor Networks

ABSTRACT:

Ad hoc low-power wireless networks are an exciting research direction in sensing and pervasive computing. Prior security work in this area has focused primarily on denial of communication at the routing or medium access control levels. This paper explores resource depletion attacks at the routing protocol layer, which permanently disable networks by quickly draining nodes' battery power. These “Vampire” attacks are not specific to any particular protocol, but rather rely on the properties of many popular classes of routing protocols. We find that all examined protocols are susceptible to Vampire attacks, which are devastating, difficult to detect, and easy to carry out using as few as one malicious insider sending only protocol-compliant messages. In the worst case, a single Vampire can increase network-wide energy usage by a factor of O(N), where N is the number of network nodes. We discuss methods to mitigate these types of attacks, including a new proof-of-concept protocol that provably bounds the damage caused by Vampires during the packet forwarding phase.

EXISTING SYSTEM:

Existing work on secure routing attempts to ensure that adversaries cannot cause path discovery to return an invalid network path, but Vampires do not disrupt or alter discovered paths, instead using existing valid network paths and protocol compliant messages. Protocols that maximize power efficiency are also inappropriate, since they rely on cooperative node behavior and cannot optimize out malicious action.

DISADVANTAGES OF EXISTING SYSTEM:

  • Power outages
  • Loss of information due to environmental disasters
  • Lost productivity
  • Various DoS attacks
  • Security level is low
  • They do not address attacks that affect long-term availability.

PROPOSED SYSTEM:

This paper makes three primary contributions. First, we thoroughly evaluate the vulnerabilities of existing protocols to routing layer battery depletion attacks. We observe that security measures to prevent Vampire attacks are orthogonal to those used to protect routing infrastructure, and so existing secure routing protocols such as Ariadne, SAODV and SEAD do not protect against Vampire attacks. Existing work on secure routing attempts to ensure that adversaries cannot cause path discovery to return an invalid network path, but Vampires do not disrupt or alter discovered paths, instead using existing valid network paths and protocol-compliant messages. Protocols that maximize power efficiency are also inappropriate, since they rely on cooperative node behavior and cannot optimize out malicious action. Second, we show simulation results quantifying the performance of several representative protocols in the presence of a single Vampire (insider adversary). Third, we modify an existing sensor network routing protocol to provably bound the damage from Vampire attacks during packet forwarding.

 In the proposed system, we show simulation results quantifying the performance of several representative protocols in the presence of a single Vampire. Then, we modify an existing sensor network routing protocol to provably bound the damage from Vampire attacks during packet forwarding.

ADVANTAGES OF PROPOSED SYSTEM:

  • Protects against Vampire attacks
  • Security level is high
  • Preserves battery power

SYSTEM ARCHITECTURE:

PROBLEMS IDENTIFIED AND CONFIRMED

                  If a Vampire attack exists in the network, it affects one node and drains its full energy, so that node goes to a dead state; the attack then concentrates on the next node, and so on, until it affects all nodes in the network and all of them go to a dead state.

                  The vampire attack permanently disables or destroys the network.

OBJECTIVE AND SCOPE OF THE PROJECT

                  Our proposed project concentrates on securing the network from this malicious attack. Our implementation results in the efficient detection and elimination of the Vampire attack from the network. In order to detect and eliminate the Vampire attack, we implement an intrusion detection system based on energy level constraints.

                    Our simulation results show an improved network authentication rate and efficient detection of malicious nodes, so that the proposed system forms a secure network with a high throughput rate.

ASSUMPTIONS, CONSTRAINTS AND LIMITATIONS

           In order to show the performance metrics, we place 30 to 50 sensor nodes in the network; let the number of sensor nodes be N.

           The routing is then performed between the sensor nodes; the data packets are 512 bytes and the initial energy level of each node is 10 joules.

           A wireless channel type is used for data routing among the N nodes.

          The routing is done through the link layer using a routing protocol such as AODV, DSR or DSDV.

Graphical metrics such as throughput, packet delivery ratio and delay are used to evaluate the performance of the network.

MAC 802.11 and an omni-directional antenna are used for data communication and to cover the transmission range.

 PROPOSED METHOD

                                      The proposed system concentrates on secure data transmission in the presence of adversary nodes in the sensor network. In order to build a secure network, the network must be rid of adversary nodes. So we propose node position verification (NPA) and energy-based intrusion detection (IDS) techniques. A node whose energy consumption exceeds the threshold of normal nodes is assumed to be a malicious node carrying out a Vampire attack. With the proposed IDS we can calculate the threshold value and the energy level of malicious nodes, and with the NPA technique the malicious nodes can be detected efficiently; the detected nodes are eliminated from the network, which increases the network performance and throughput rate.

MODULES:

  • Network Configuration Setting
  • Data Routing
  • Vampire Attack
  • Backtracking Technique
  • Intrusion Detection System
  • Malicious Node Elimination
  • Graph Evaluation

MODULES DESCRIPTION:

NODE CONFIGURATION SETTING

           The mobile nodes are designed and configured dynamically and deployed across the network; the nodes are placed according to the X, Y, Z dimensions so that each node has a direct transmission range to all other nodes.

DATA ROUTING

              The source and destination are set at a larger distance; the source transmits the data packets to the destination through the intermediate hop nodes using UDP (User Datagram Protocol), and PLGP acts as the ad hoc routing protocol.

VAMPIRE ATTACK

             The malicious node enters the network and affects one of the intermediate nodes by sending false packets. The malicious node drains the energy of the intermediate node, whose energy level drops to 0 joules. The data transmission is therefore affected and the path between source and destination fails, so the source retransmits the data to the destination over another path. If the Vampire attack continues, it will disable the whole network.

BACKTRACKING TECHNIQUE

               The backtracking technique is used to identify legitimate nodes in a particular path; the nodes accept the data only after the backtracking technique has been executed. If the source transmits the data to the next neighbor node, that node verifies the source's identity using the backtracking process. Through this technique the data is transmitted securely in the presence of Vampire nodes.

INTRUSION DETECTION SYSTEM

                  The energy-constraint IDS is used to detect the malicious nodes in the network. For that purpose, the energy levels of all nodes are calculated after every data iteration. Most nodes have an average energy level within a certain range; due to the nature of Vampire nodes, a malicious node shows an abnormal energy level, about three times the average, so the malicious nodes can be identified easily.
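
A minimal sketch of that rule in Python, assuming a common initial energy level and the three-times-the-average criterion described above: nodes whose consumed energy exceeds three times the network average are flagged. The bookkeeping shown is an illustrative model, not the NS2 trace processing used in the actual simulation.

```python
"""Sketch of the energy-threshold intrusion detection rule."""
from typing import Dict, List

def consumed(initial_j: float, current_j: Dict[int, float]) -> Dict[int, float]:
    """Energy each node has spent so far, assuming a common initial level."""
    return {nid: initial_j - e for nid, e in current_j.items()}

def detect_malicious(initial_j: float, current_j: Dict[int, float],
                     factor: float = 3.0) -> List[int]:
    spent = consumed(initial_j, current_j)
    average = sum(spent.values()) / len(spent)
    return [nid for nid, e in spent.items() if e > factor * average]

if __name__ == "__main__":
    # 10 J initial energy; node 4 shows an abnormal drain pattern.
    levels = {1: 9.2, 2: 9.1, 3: 9.3, 4: 4.0, 5: 9.0}
    print(detect_malicious(10.0, levels))   # -> [4]
```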

MALICIOUS NODE ELIMINATION 

                 After the IDS process, the malicious nodes are detected. The trusted authority (TA) then informs all nodes in the network and eliminates the malicious nodes from the network. By eliminating the malicious nodes, we can form a secure network.

GRAPH EVALUATION

               The performance of the existing and proposed work is examined through graphical analysis.

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

  • Processor – Pentium IV
  • Speed – 1 GHz
  • RAM – 256 MB (min)
  • Hard Disk – 20 GB
  • Key Board – Standard Windows Keyboard
  • Mouse – Two or Three Button Mouse
  • Monitor – SVGA

SOFTWARE CONFIGURATION:-

  • Operating System : Windows XP/LINUX
  • Simulator : NS2
  • Tool : Cygwin

REFERENCE:

Eugene Y. Vasserman and Nicholas Hopper, “Vampire Attacks: Draining Life from Wireless Ad Hoc Sensor Networks”, IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 12, NO. 2, FEBRUARY 2013.

A Distributed Control Law for Load Balancing in Content Delivery Networks

ABSTRACT:

In this paper, we face the challenging issue of defining and implementing an effective law for load balancing in Content Delivery Networks (CDNs). We base our proposal on a formal study of a CDN system, carried out through the exploitation of a fluid flow model characterization of the network of servers. Starting from such characterization, we derive and prove a lemma about the network queues equilibrium. This result is then leveraged in order to devise a novel distributed and time-continuous algorithm for load balancing, which is also reformulated in a time-discrete version. The discrete formulation of the proposed balancing law is eventually discussed in terms of its actual implementation in a real-world scenario. Finally, the overall approach is validated by means of simulations.

EXISTING SYSTEM:

In a queue-adjustment strategy, the scheduler is located after the queue and just before the server. The scheduler might assign the request pulled out from the queue to either the local server or a remote server depending on the status of the system queues.

In a rate-adjustment model, instead the scheduler is located just before the local queue: Upon arrival of a new request, the scheduler decides whether to assign it to the local queue or send it to a remote server.

In a hybrid-adjustment strategy for load balancing, the scheduler is allowed to control both the incoming request rate at a node and the local queue length.

Thus, in existing systems, upon arrival of a new request a CDN server can either process the request locally or redirect it to other servers according to a certain decision rule, which is based on the state information exchanged by the servers. Such an approach limits the state-exchange overhead to just the local servers.

DISADVANTAGES OF EXISTING SYSTEM:

A critical component of the CDN architecture is the request routing mechanism. It directs users' requests for content to the appropriate server based on a specified set of parameters. The proximity principle, by means of which a request is always served by the server that is closest to the client, can sometimes fail. Indeed, the routing process associated with a request might take into account several parameters (such as traffic load, bandwidth, and servers' computational capabilities) in order to provide the best performance in terms of time of service, delay, etc. Furthermore, an effective request routing mechanism should be able to face temporary, and potentially localized, high request rates (the so-called flash crowds) in order to avoid affecting the quality of service perceived by other users.

PROPOSED SYSTEM:

In a similar way, in this paper we first design a suitable load-balancing law that assures equilibrium of the queues in a balanced CDN by using a fluid flow model for the network of servers. Then, we discuss the most notable implementation issues associated with the proposed load-balancing strategy.

We present a new mechanism for redirecting incoming client requests to the most appropriate server, thus balancing the overall system requests load. Our mechanism leverages local balancing in order to achieve global balancing. This is carried out through a periodic interaction among the system nodes.

ADVANTAGES OF PROPOSED SYSTEM:

The quality of our solution can be further appreciated by analyzing the performance parameters.

The proposed mechanism also exhibits an excellent average Response Time, which is only comparable to the value obtained by the 2RC algorithm.

The excellent performance of our mechanism might be paid in terms of a significant number of redirections. Since the redirection process is common to all the algorithms analyzed, we exclusively evaluate the percentage of requests redirected more than once over the total number of requests generated.

ALGORITHM USED:

Distributed Load-Balancing Algorithm

SYSTEM ARCHITECTURE:

MODULES:

  • Client Request
  • Server
  • Creating Load
  • Fluid Queue Model
  • Load balance

MODULES DESCRIPTION:

Client Request

In this module, we design the system such that the client makes requests to the server.

 Server

In this module we design the Server System, where the server processes the client request.

Creating Load

In this module, we create the load on the server.

Fluid Queue Model

In this paper, we first design a suitable load-balancing law that assures equilibrium of the queues in a balanced CDN by using a fluid flow model for the network of servers. In a queue-adjustment strategy, the scheduler is located after the queue and just before the server. The scheduler might assign the request pulled out from the queue to either the local server or a remote server, depending on the status of the system queues: if an imbalance exists in the network with respect to the local server, it might assign part of the queued requests to the most unloaded remote server. In this way, the algorithm tries to balance the requests equally across the system queues. It is clear that, in order to achieve effective load balancing, the scheduler needs to periodically retrieve information about remote queue lengths.
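As a minimal Python sketch of this queue-adjustment decision (the Server class, the fixed imbalance threshold, and the direct reads of remote queue lengths are illustrative assumptions; the paper drives the decision with its fluid-flow control law rather than a fixed threshold):

```python
from collections import deque

class Server:
    """Toy CDN server with a FIFO request queue (illustration only)."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()

    def queue_length(self):
        return len(self.queue)

def schedule_next(local, remote_servers, imbalance_threshold=5):
    """Pull the next request from the local queue and decide where to serve it.

    If the local queue exceeds the least-loaded remote queue by more than
    `imbalance_threshold`, the request is redirected to that remote server;
    otherwise it is served locally."""
    if not local.queue:
        return None, None
    request = local.queue.popleft()
    target = min(remote_servers, key=lambda s: s.queue_length(), default=None)
    if target and local.queue_length() - target.queue_length() > imbalance_threshold:
        target.queue.append(request)   # redirect to the most unloaded server
        return request, target
    return request, local              # serve locally
```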

Load balance

We present a new mechanism for redirecting incoming client requests to the most appropriate server, thus balancing the overall system requests load. Our mechanism leverages local balancing in order to achieve global balancing. This is carried out through a periodic interaction among the system nodes.
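The periodic interaction itself could look like the sketch below, which simply refreshes a local cache of remote queue lengths at a fixed period (the period, the `probe_fns` callables, and the cache structure are assumptions; in the scheduler sketch above the remote lengths were read directly instead):

```python
import threading

def start_state_exchange(remote_view, probe_fns, period=0.5):
    """Periodic node interaction: every `period` seconds, refresh the local
    view of remote queue lengths.  `probe_fns` maps a server name to a
    callable returning that server's current queue length (a stand-in for a
    real message exchange)."""
    def poll():
        for name, probe in probe_fns.items():
            remote_view[name] = probe()
        timer = threading.Timer(period, poll)
        timer.daemon = True   # background refresh; does not block shutdown
        timer.start()
    poll()

# Example with dummy probes; a scheduler would consult `view` when redirecting.
view = {}
start_state_exchange(view, {"s2": lambda: 3, "s3": lambda: 7}, period=1.0)
print(view)   # {'s2': 3, 's3': 7} after the first poll
```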

SYSTEM CONFIGURATION:-

HARDWARE REQUIREMENTS:-

  • Processor – Pentium IV
  • Speed – 1 GHz
  • RAM – 512 MB (min)
  • Hard Disk – 40 GB
  • Key Board – Standard Windows Keyboard
  • Mouse – Two or Three Button Mouse
  • Monitor – LCD/LED

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows XP.
  • Coding Language : NS2
  • Tool : CYGWIN

REFERENCE:

Sabato Manfredi, Francesco Oliviero, and Simon Pietro Romano, “A Distributed Control Law for Load Balancing in Content Delivery Networks”, IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 21, NO. 1, FEBRUARY 2013.

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks With Order-Optimal Per-Flow Delay

A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks With Order-Optimal Per-Flow Delay

ABSTRACT:

Quantifying the end-to-end delay performance in multihop wireless networks is a well-known challenging problem. In this paper, we propose a new joint congestion control and scheduling algorithm for multihop wireless networks with fixed-route flows operated under a general interference model with a given interference degree. Our proposed algorithm not only achieves a provable throughput guarantee (at least a constant fraction of the system capacity region, determined by the interference degree), but also leads to explicit upper bounds on the end-to-end delay of every flow. Our end-to-end delay and throughput bounds are in simple and closed forms, and they explicitly quantify the tradeoff between throughput and delay of every flow. Furthermore, the per-flow end-to-end delay bound increases linearly with the number of hops that the flow passes through, which is order-optimal with respect to the number of hops. Unlike traditional solutions based on the back-pressure algorithm, our proposed algorithm combines window-based flow control with a new rate-based distributed scheduling algorithm. A key contribution of our work is to use a novel stochastic dominance approach to bound the corresponding per-flow throughput and delay, which otherwise are often intractable in these types of systems. Our proposed algorithm is fully distributed and requires a low per-node complexity that does not increase with the network size. Hence, it can be easily implemented in practice.
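As a rough illustration of only the window-based flow-control ingredient mentioned in the abstract (this is not the paper’s algorithm; the class name, the window size W, and the callbacks are assumptions), a per-flow window simply limits how many packets of a flow may be in flight end to end:

```python
class WindowedFlow:
    """Per-flow window-based flow control sketch: at most `window` packets
    of this flow may be in flight in the network at any time."""
    def __init__(self, flow_id, window):
        self.flow_id = flow_id
        self.window = window      # W: end-to-end window size
        self.in_flight = 0

    def can_inject(self):
        # The source injects a new packet only if the window is not full.
        return self.in_flight < self.window

    def on_inject(self):
        self.in_flight += 1

    def on_delivered(self):
        # Called when end-to-end delivery of a packet is confirmed.
        self.in_flight -= 1

# Example: a flow with window 4 can have at most 4 unacknowledged packets.
flow = WindowedFlow(flow_id=1, window=4)
while flow.can_inject():
    flow.on_inject()
print(flow.in_flight)   # 4
```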

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • Ram : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows XP/UBUNTU.
  • Implementation : NS2
  • NS2 Version : 2.28
  • Front End : OTCL (Object Oriented Tool Command  Language)
  • Tool : Cygwin (To simulate in Windows OS)

REFERENCE:

Po-Kai Huang, Xiaojun Lin, and Chih-Chun Wang, “A Low-Complexity Congestion Control and Scheduling Algorithm for Multihop Wireless Networks With Order-Optimal Per-Flow Delay”, IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 21, NO. 2, APRIL 2013.

Mobile Relay Configuration in Data-Intensive Wireless Sensor Networks

Mobile Relay Configuration in Data-Intensive Wireless Sensor Networks

ABSTRACT:

Wireless Sensor Networks (WSNs) are increasingly used in data-intensive applications such as microclimate monitoring, precision agriculture, and audio/video surveillance. A key challenge faced by data-intensive WSNs is to transmit all the data generated within an application’s lifetime to the base station despite the fact that sensor nodes have limited power supplies. We propose using low-cost disposable mobile relays to reduce the energy consumption of data-intensive WSNs. Our approach differs from previous work in two main aspects. First, it does not require complex motion planning of mobile nodes, so it can be implemented on a number of low-cost mobile sensor platforms. Second, we integrate the energy consumption due to both mobility and wireless transmissions into a holistic optimization framework. Our framework consists of three main algorithms. The first algorithm computes an optimal routing tree assuming no nodes can move. The second algorithm improves the topology of the routing tree by greedily adding new nodes exploiting mobility of the newly added nodes. The third algorithm improves the routing tree by relocating its nodes without changing its topology. This iterative algorithm converges on the optimal position for each node given the constraint that the routing tree topology does not change. We present efficient distributed implementations for each algorithm that require only limited, localized synchronization. Because we do not necessarily compute an optimal topology, our final routing tree is not necessarily optimal. However, our simulation results show that our algorithms significantly outperform the best existing solutions.

PROJECT OUTPUT VIDEO: (Click the below link to see the project output video):

EXISTING SYSTEM:

A key challenge faced by data-intensive WSNs is to minimize the energy consumption of sensor nodes so that all the data generated within the lifetime of the application can be transmitted to the base station. Several different approaches have been proposed to significantly reduce the energy cost of WSNs by using the mobility of nodes. A robotic unit may move around the network and collect data from static nodes through one-hop or multihop transmissions. The mobile node may serve as the base station or a “data mule” that transports data between static nodes and the base station. Mobile nodes may also be used as relays that forward data from source nodes to the base station. Several movement strategies for mobile relays have been studied.

DISADVANTAGES OF EXISTING SYSTEM:

  • First, the movement cost of mobile nodes is not accounted for in the total network energy consumption. Instead, mobile nodes are often assumed to have replenishable energy supplies which are not always feasible due to the constraints of the physical environment.
  • Second, complex motion planning of mobile nodes is often assumed in existing solutions which introduces significant design complexity and manufacturing costs.
  • In existing solutions, mobile nodes need to repeatedly compute optimal motion paths and change their location, orientation, and/or speed of movement. Such capabilities are usually not supported by existing low-cost mobile sensor platforms.

PROPOSED SYSTEM:

In this paper, we use low-cost disposable mobile relays to reduce the total energy consumption of data-intensive WSNs. Different from mobile base stations or data mules, mobile relays do not transport data; instead, they move to different locations and then remain stationary to forward data along the paths from the sources to the base station. Thus, the communication delays can be significantly reduced compared with using mobile sinks or data mules. Moreover, each mobile node performs a single relocation, unlike other approaches, which require repeated relocations.

ADVANTAGES OF PROPOSED SYSTEM:

  • Our approach takes advantage of this capability by assuming that we have a large number of mobile relay nodes.
  • On the other hand, due to low manufacturing cost, existing mobile sensor platforms are typically powered by batteries and only capable of limited mobility.
  • Consistent with this constraint, our approach only requires one-shot relocation to designated positions after deployment. Compared with our approach, existing mobility approaches typically assume a small number of powerful mobile nodes, which does not exploit the availability of many low-cost mobile nodes.

SYSTEM ARCHITECTURE:

MODULES:

  • Network creation Module (wireless sensor networks)
  • Optimal mobile relay configuration
  • Mobile sink & source nodes
  • Routing tree optimization
  • Energy optimization and secret sharing random propagation
  • Performance comparison

MODULES DESCRIPTION:

Network Creation Module (Wireless sensor Networks)

  • In this module, we first deploy the network, which is composed of many small nodes deployed in an ad hoc fashion. Most communication will be between nodes as peers, rather than to a single base station.
  • Nodes must self-configure. The network is dedicated to a single application or a few collaborative applications and involves in-network processing to reduce traffic and thereby increase the lifetime.
  • This implies that data will be processed as whole messages at a time in store-and-forward fashion. Hence, packet- or fragment-level interleaving from multiple sources only increases overall latency.
  • Applications will have long idle periods and can tolerate some latency.

Optimal mobile relay configuration:

The network consists of mobile relay nodes along with a static base station and data sources. Relay nodes do not transport data; instead, they move to different locations to decrease the transmission costs. We use the mobile relay approach in this work. Prior work showed that an iterative mobility algorithm in which each relay node moves to the midpoint of its neighbors converges on the optimal solution for a single routing path; however, it does not account for the cost of moving the relay nodes. In other approaches, mobile nodes decide to move only when moving is beneficial, but the only position considered is the midpoint of the neighbors.
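A minimal sketch of that midpoint iteration for a single source-to-sink path (2-D positions; the iteration count and tolerance are assumptions, and movement cost is deliberately ignored, as in the prior work described above):

```python
def relax_path(positions, iterations=100, tol=1e-6):
    """Iteratively move each interior relay to the midpoint of its two
    neighbours on the path; the endpoints (source and sink) stay fixed.
    For a single path this converges to evenly spaced relays (sketch of
    the idea only, not the paper's full algorithm)."""
    pts = [list(p) for p in positions]
    for _ in range(iterations):
        moved = 0.0
        for i in range(1, len(pts) - 1):
            new = [(pts[i - 1][d] + pts[i + 1][d]) / 2.0 for d in (0, 1)]
            moved = max(moved, abs(new[0] - pts[i][0]) + abs(new[1] - pts[i][1]))
            pts[i] = new
        if moved < tol:
            break
    return pts

# Example: two relays drift toward even spacing between (0, 0) and (10, 0).
print(relax_path([(0, 0), (1, 5), (7, 3), (10, 0)]))
```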

Mobile sink & Source nodes:

The sink is the point of contact for users of the sensor network. Each time the sink receives a question from a user, it first translates the question into multiple queries and then disseminates the queries to the corresponding mobile relays, which process the queries based on their data and return the query results to the sink. The sink unifies the query results from multiple storage nodes into the final answer and sends it back to the user.

The source nodes in our problem formulation serve as storage points that cache the data gathered by other nodes and periodically transmit it to the sink in response to user queries. Such a network architecture is consistent with the design of storage-centric sensor networks. Our problem formulation also considers the initial positions of nodes and the amount of data that needs to be transmitted from each storage node to the sink.
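A scatter-gather sketch of this sink behaviour follows; the query-splitting and result-merging functions are placeholders, since real query processing depends on the application:

```python
def answer_user_question(question, relays):
    """Sink-side scatter-gather: translate a user question into per-relay
    queries, send each query to its relay, and unify the partial results.
    `relays` maps a relay id to a callable that evaluates a query locally
    (a stand-in for sending the query over the network)."""
    partial = [relays[rid](q) for rid, q in translate(question, relays)]
    return unify(partial)

def translate(question, relays):
    # Placeholder query splitting: ask every relay the same question.
    return [(rid, question) for rid in relays]

def unify(partial_results):
    # Placeholder result merging: concatenate the partial answers.
    return " | ".join(str(r) for r in partial_results)

# Example with dummy relays that simply tag the query they received.
relays = {rid: (lambda q, rid=rid: f"relay{rid}:{q}") for rid in (0, 1)}
print(answer_user_question("avg temperature", relays))
```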

Routing tree optimization:

We consider the subproblem of finding the optimal positions of relay nodes for a routing tree given that the topology is fixed. We assume the topology is a directed tree in which the leaves are sources and the root is the sink. We also assume that separate messages cannot be compressed or merged; that is, if two distinct messages of lengths m1 and m2 use the same link (si, sj) on the path from a source to a sink, the total number of bits that must traverse link (si, sj) is m1 + m2.
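The additive-load assumption can be made concrete with a short sketch that, for every tree link, sums the bits of all source messages routed through it (the dictionary representation of the tree is an assumption):

```python
def link_loads(parent, message_bits):
    """Compute the number of bits crossing each tree link (child, parent).

    `parent` maps each node to its parent (the sink's parent is None);
    `message_bits` maps each source node to the bits it originates.
    Messages are neither compressed nor merged, so link loads simply add."""
    loads = {}
    for src, bits in message_bits.items():
        node = src
        while parent.get(node) is not None:          # walk up to the sink
            link = (node, parent[node])
            loads[link] = loads.get(link, 0) + bits
            node = parent[node]
    return loads

# Two sources sharing link ('b', 'sink') contribute m1 + m2 bits to it.
parent = {'a': 'b', 'c': 'b', 'b': 'sink', 'sink': None}
print(link_loads(parent, {'a': 100, 'c': 250}))
# {('a', 'b'): 100, ('c', 'b'): 250, ('b', 'sink'): 350}
```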

Energy Optimization and Secret Sharing Random Propagation

In this module, we focus on reducing the total energy consumption due to transmissions and mobility. Such a holistic objective of energy conservation is motivated by the fact that mobile relays act the same as static forwarding nodes after movement.

We propose using low-cost disposable mobile relays to reduce the energy consumption of data-intensive WSNs.

Our approach differs from previous work in two main aspects. First, it does not require complex motion planning of mobile nodes, so it can be implemented on a number of low-cost mobile sensor platforms. Second, we integrate the energy consumption due to both mobility and wireless transmissions into a holistic optimization framework.

In this module, we consider the problem of deciding the parameters for secret sharing (M) and random propagation (N) to achieve a desired security performance. To obtain the maximum protection of the information, the threshold parameter should be set as T = M. Then, increasing the number of propagation steps (N) and increasing the number of shares a packet is broken into (M) have a similar effect on reducing the message interception probability.

Specifically, to achieve a given maximum interception probability P_S^max for a packet, we could either break the packet into more shares but restrict the random propagation of these shares within a smaller range, or break the packet into fewer shares but randomly propagate these shares over a larger range. Therefore, as far as security performance is concerned, a tradeoff relationship exists between the parameters M and N.
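The M-share / N-hop idea can be sketched as follows, using simple (M, M) XOR sharing and a plain random walk; this is only an assumed illustration of the mechanism described above, not the referenced scheme’s exact propagation or interception model:

```python
import os
import random
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_into_shares(packet: bytes, m: int):
    """(M, M) XOR secret sharing: all M shares are needed to rebuild the
    packet, matching the T = M threshold assumed in the text."""
    shares = [os.urandom(len(packet)) for _ in range(m - 1)]
    shares.append(reduce(xor_bytes, shares, packet))
    return shares

def random_propagate(start, neighbors, n_hops, rng=random):
    """Push one share through `n_hops` random hops before normal routing.
    `neighbors` maps a node id to the list of its neighbour ids."""
    node = start
    for _ in range(n_hops):
        node = rng.choice(neighbors[node])
    return node   # the node from which the share is then routed to the sink

# Reconstruction at the sink: XOR of all M shares recovers the packet.
pkt = b"sensor reading"
shares = split_into_shares(pkt, m=3)
assert reduce(xor_bytes, shares) == pkt

# Each share is first pushed N random hops from the source.
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(random_propagate(start=0, neighbors=nbrs, n_hops=4))
```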

Performance Comparison

In this module, the performance is analyzed and compared through graphs. The parameters used for the analysis are energy consumption, power, delay, and packet delivery ratio (PDR), evaluated over time for greedy, hop-based, centralized, and distributed routing. The optimal mobile relay configuration (OMRC) network uses a power-law-based routing topology for searching. The performance comparison evaluates the energy-consumption efficiency of OMRC with respect to the above parameters.

ALGORITHM DESCRIPTION:

The first algorithm computes an optimal routing tree assuming no nodes can move. The second algorithm improves the topology of the routing tree by greedily adding new nodes exploiting mobility of the newly added nodes. The third algorithm improves the routing tree by relocating its nodes without changing its topology. This iterative algorithm converges on the optimal position for each node given the constraint that the routing tree topology does not change.

Iterative Algorithm for Energy Distribution

Considering the optimization algorithm discussed in this section, now we summarize the overall energy distribution process as follows.

1) Through the source-initiated routing process, the source node gets to know all needed information, such as the number of available paths, the number of hops for each path, and the per-hop distance.

2) At the source node, the iterative algorithm is performed for energy distribution. Then, for every selected path, the source node sends out messages with two additional fields in the packet header: one field specifies the optimal overall energy Ei along this path, and the other specifies Gi.

3) Each intermediate node along a selected path has recorded its distance to its next hop during the routing process. From this, it calculates its transmitting energy as follows (a small sketch of this step follows the summary list below):

Eij = (Gij / Gi) * Ei

It then transmits packets with energy Eij per bit.

4) At the destination, after a copy of a packet is received, the node first checks whether this copy is correct. If it is correct, the packet is passed to the application layer without any delay; otherwise, the node waits for other copies and performs packet combining to recover the original packet.

  • Source node selects some paths
  • Calculates the optimal transmitting power for each node along the selected paths
  • Destination receives all copies of the packet
  • It performs packet combining to recover the original packet.
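As promised in step 3 above, here is a small sketch of how an intermediate node’s per-hop energy could be derived from the header fields Ei and Gi; taking Gi as the sum of the per-hop weights Gij is an assumption made only so that the per-hop energies add up to Ei:

```python
def per_hop_energy(E_i, G_ij_list):
    """Distribute the path's total energy budget E_i over its hops according
    to the per-hop weights G_ij, i.e. E_ij = (G_ij / G_i) * E_i, where G_i is
    taken here as the sum of the per-hop weights (an assumption)."""
    G_i = sum(G_ij_list)
    return [(G_ij / G_i) * E_i for G_ij in G_ij_list]

# Example: a 3-hop path with weights proportional to hop distance.
print(per_hop_energy(E_i=12.0, G_ij_list=[1.0, 2.0, 3.0]))   # [2.0, 4.0, 6.0]
```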

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

  • Processor – Pentium IV
  • Speed – 1 GHz
  • RAM – 256 MB (min)
  • Hard Disk – 20 GB
  • Key Board – Standard Windows Keyboard
  • Mouse – Two or Three Button Mouse
  • Monitor – SVGA

 

SOFTWARE CONFIGURATION:-

  • Operating System : LINUX / WINDOWS XP
  • SIMULATOR : Network Simulator-2
  • Front End : OTCL (Object Oriented Tool Command  Language)
  • Tool : Cygwin (For Windows)

REFERENCE:

Fatme El-Moukaddem, Eric Torng, and Guoliang Xing, “Mobile Relay Configuration in Data-Intensive Wireless Sensor Networks”, IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 12, NO. 2, FEBRUARY 2013.