Participatory Privacy: Enabling Privacy in Participatory Sensing


 

ABSTRACT:

Participatory sensing is an emerging computing paradigm that enables the distributed collection of data by self-selected participants. It allows the increasing number of mobile phone users to share local knowledge acquired by their sensor-equipped devices (e.g., to monitor temperature, pollution level, or consumer pricing information). While research initiatives and prototypes proliferate, their real-world impact often hinges on comprehensive user participation. If users have no incentive, or feel that their privacy might be endangered, it is likely that they will not participate. In this article, we focus on privacy protection in participatory sensing and introduce a suitable privacy-enhanced infrastructure. First, we provide a set of definitions of privacy requirements for both data producers (i.e., users providing sensed information) and consumers (i.e., applications accessing the data). Then we propose an efficient solution designed for mobile phone users, which incurs very low overhead. Finally, we discuss a number of open problems and possible research directions.


EXISTING SYSTEM:

In the last few years, PS initiatives have multiplied, ranging from research prototypes to deployed systems. Due to space limitations, we briefly review some PS applications that may expose participant privacy (location, habits, etc.). Each of them can easily be enhanced with our privacy-protecting layer.

 

DISADVANTAGES OF EXISTING SYSTEM:

Prior work on privacy in participatory sensing relies on weak assumptions: it attempts to protect the anonymity of mobile nodes through the use of Mix Networks. (A Mix Network is a statistics-based anonymizing infrastructure that provides k-anonymity; i.e., an adversary cannot tell a user apart from a set of k users.) However, Mix Networks are unsuitable for many PS settings. They do not attain provable privacy guarantees, and they assume the presence of a ubiquitous WiFi infrastructure used by mobile nodes, whereas PS applications increasingly leverage broadband 3G/4G connectivity. In fact, a ubiquitous presence of open WiFi networks is neither realistic today nor anticipated in the near future.

PROPOSED SYSTEM:

We now present our solution: a Privacy-Enhanced Participatory Sensing Infrastructure (PEPSI). PEPSI protects privacy using efficient cryptographic tools. Similar to other cryptographic solutions, it introduces an additional (offline) entity, the registration authority, which sets up system parameters and manages the registration of mobile nodes and queriers. However, the registration authority is not involved in real-time operations (e.g., query/report matching), nor is it trusted to intervene to protect participants’ privacy.

 

PEPSI allows the service provider to perform report/query matching while guaranteeing the privacy of both mobile nodes and queriers. It aims at providing (provable) privacy by design, and starts off with defining a clear set of privacy properties.

 

ADVANTAGES OF PROPOSED SYSTEM:

• Secure encryption of reports and queries

• Efficient and oblivious matching by the service provider
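The oblivious matching above can be illustrated with a simplified sketch: a keyed tag computed over the report type lets the provider match encrypted reports to queries without learning the type itself. Note that PEPSI actually derives its tags from identity-based encryption; the symmetric hash used here, and the names `ObliviousMatchSketch` and `typeKey`, are illustrative assumptions only, not the paper's construction.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Simplified sketch of oblivious report/query matching. A shared
// secret stands in for the key material handed out by the (offline)
// registration authority.
public class ObliviousMatchSketch {

    // Tag = H(typeKey || reportType); the provider sees only the tag.
    static byte[] tag(byte[] typeKey, String reportType) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            md.update(typeKey);
            md.update(reportType.getBytes(StandardCharsets.UTF_8));
            return md.digest();
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] typeKey = "key-from-registration-authority".getBytes(StandardCharsets.UTF_8);

        // Mobile node uploads (tag, ciphertext); the provider indexes by tag.
        Map<String, String> providerIndex = new HashMap<>();
        providerIndex.put(Arrays.toString(tag(typeKey, "temperature")), "ENC(21C)");

        // A querier subscribed to "temperature" recomputes the same tag.
        String hit = providerIndex.get(Arrays.toString(tag(typeKey, "temperature")));
        System.out.println(hit); // the provider matched without learning the type
    }
}
```

A node and a querier provisioned by the same registration authority compute identical tags, so the provider can forward the ciphertext without ever seeing "temperature" in the clear.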

 

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

 

• Processor : Pentium IV

• Speed : 1.1 GHz

• RAM : 256 MB (min)

• Hard Disk : 20 GB

• Keyboard : Standard Windows Keyboard

• Mouse : Two or Three Button Mouse

• Monitor : SVGA

 

SOFTWARE CONFIGURATION:-

 

• Operating System : Windows XP

• Programming Language : JAVA

• Java Version : JDK 1.6 & above

 

REFERENCE:

Emiliano De Cristofaro (Palo Alto Research Center) and Claudio Soriente (ETH Zurich, Switzerland), “Participatory Privacy: Enabling Privacy in Participatory Sensing”, IEEE Network, January/February 2013.

Optimizing Cloud Resources for Delivering IPTV Services Through Virtualization


 

ABSTRACT:

Virtualized cloud-based services can take advantage of statistical multiplexing across applications to yield significant cost savings. However, achieving similar savings with real-time services can be a challenge. In this paper, we seek to lower a provider’s costs for real-time IPTV services through a virtualized IPTV architecture and through intelligent time-shifting of selected services. Using Live TV and Video-on-Demand (VoD) as examples, we show that we can take advantage of the different deadlines associated with each service to effectively multiplex these services. We provide a generalized framework for computing the amount of resources needed to support multiple services, without missing the deadline for any service. We construct the problem as an optimization formulation that uses a generic cost function. We consider multiple forms for the cost function (e.g., maximum, convex, and concave functions) reflecting the cost of providing the service. The solution to this formulation gives the number of servers needed at different time instants to support these services. We implement a simple mechanism for time-shifting scheduled jobs in a simulator and study the reduction in server load using real traces from an operational IPTV network. Our results show that we are able to reduce the server load substantially, approaching the savings predicted by the optimization framework.

 

 


 

EXISTING SYSTEM:

Servers in the VHO serve VoD using unicast, while Live TV is typically multicast from servers using IP Multicast. When users change channels while watching live TV, additional functionality is needed so that the channel change takes effect quickly. For each channel change, the user has to join the multicast group associated with the channel and wait for enough data to be buffered before the video is displayed; this can take some time. As a result, there have been many attempts to support instant channel change by mitigating the user-perceived channel-switching latency.

DISADVANTAGES OF EXISTING SYSTEM:

• More waiting time

• More switching latency

• Not cost-effective

PROPOSED SYSTEM:

We propose a) To use a cloud computing infrastructure with virtualization to handle the combined workload of multiple services flexibly and dynamically, b) To either advance or delay one service when we anticipate a change in the workload of another service, and c) To provide a general optimization framework for computing the amount of resources to support multiple services without missing the deadline for any service.

 

ADVANTAGES OF PROPOSED SYSTEM:

In this paper, we consider two potential strategies for serving VoD requests. The first is a postponement-based strategy: we assume that each chunk for VoD has a deadline a fixed number of seconds after the request for that chunk, which lets the user begin playback that many seconds after the request. The second is an advancement-based strategy: we assume that requests for all chunks in the VoD content are made when the user requests the content. Since all chunks are requested at the start, the deadline for each chunk is different, with the first chunk having a deadline of zero, the second chunk a deadline of one, and so on. With this request pattern, the server could potentially deliver a huge amount of content to the user in the same time instant, violating the downlink bandwidth constraint.
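The effect of deadline slack on provisioning can be made concrete with a toy computation. Assuming unit-length chunk requests and servers that each deliver one chunk per time slot (a simplification; the paper's formulation is a general optimization over cost functions, and `minServers` and the slot model below are our own illustrative assumptions), the minimum number of servers is driven by the densest release/deadline interval:

```java
// Minimal-server computation for unit-length chunk requests with
// per-slot release times and deadlines. A set of unit jobs is
// schedulable on m servers iff, for every interval [a, b], the number
// of jobs released at or after a and due by b fits in m*(b-a+1) slots.
public class ServerSizing {
    // releases[i] and deadlines[i] are inclusive slot indices for chunk i.
    static int minServers(int[] releases, int[] deadlines) {
        int m = 1;
        for (int a : releases) {
            for (int b : deadlines) {
                if (b < a) continue;
                int jobs = 0;
                for (int i = 0; i < releases.length; i++) {
                    if (releases[i] >= a && deadlines[i] <= b) jobs++;
                }
                int slots = b - a + 1;
                m = Math.max(m, (jobs + slots - 1) / slots); // ceil(jobs/slots)
            }
        }
        return m;
    }

    public static void main(String[] args) {
        // Two Live TV chunks due immediately; two VoD chunks postponed by 2 slots.
        int[] rel = {0, 0, 0, 0};
        int[] dl  = {0, 0, 2, 2};
        System.out.println(minServers(rel, dl)); // prints 2, not 4
    }
}
```

With all four chunks due in slot 0 the same workload would need four servers; postponing the VoD deadlines lets two servers suffice, which is exactly the multiplexing opportunity the paper exploits.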

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

 

• Processor : Pentium IV

• Speed : 1.1 GHz

• RAM : 256 MB (min)

• Hard Disk : 20 GB

• Keyboard : Standard Windows Keyboard

• Mouse : Two or Three Button Mouse

• Monitor : SVGA

 

SOFTWARE CONFIGURATION:-

 

• Operating System : Windows XP

• Programming Language : JAVA/J2EE

• Java Version : JDK 1.6 & above

• Database : MYSQL

 

REFERENCE:

Vaneet Aggarwal, Member, IEEE, Vijay Gopalakrishnan, Member, IEEE, Rittwik Jana, Member, IEEE, K. K. Ramakrishnan, Fellow, IEEE, and Vinay A. Vaishampayan, Fellow, IEEE, “Optimizing Cloud Resources for Delivering IPTV Services Through Virtualization”, IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 15, NO. 4, JUNE 2013.

Crowdsourcing Predictors of Behavioral Outcomes


ABSTRACT:

Generating models from large data sets—and determining which subsets of data to mine—is becoming increasingly automated. However, choosing what data to collect in the first place requires human intuition or experience, usually supplied by a domain expert. This paper describes a new approach to machine science which demonstrates for the first time that nondomain experts can collectively formulate features and provide values for those features such that they are predictive of some behavioral outcome of interest. This was accomplished by building a Web platform in which human groups interact to both respond to questions likely to help predict a behavioral outcome and pose new questions to their peers. This results in a dynamically growing online survey, and this cooperative behavior also leads to models that can predict users’ outcomes based on their responses to the user-generated survey questions. Here, we describe two Web-based experiments that instantiate this approach: the first site led to models that can predict users’ monthly electric energy consumption, and the other led to models that can predict users’ body mass index. As exponential increases in content are often observed in successful online collaborative communities, the proposed methodology may, in the future, lead to similar exponential rises in discovery and insight into the causal factors of behavioral outcomes.

 


EXISTING SYSTEM:

Statistical tools such as multiple regression or neural networks provide mature methods for computing model parameters when the set of predictive covariates and the model structure are prespecified. Furthermore, recent research is providing new tools for inferring the structural form of nonlinear predictive models, given good input and output data.

 

DISADVANTAGES OF EXISTING SYSTEM:

There are many problems in which one seeks to develop predictive models to map between a set of predictor variables and an outcome.

One aspect of the scientific method that has not yet yielded to automation is the selection of variables for which data should be collected to evaluate hypotheses. In the case of a prediction problem, machine science is not yet able to select the independent variables that might predict an outcome of interest, and for which data collection is required.

 

PROPOSED SYSTEM:

The goal of this research was to test an alternative approach to modeling in which the wisdom of crowds is harnessed to both propose which potentially predictive variables to study, by asking questions, and to provide the data, by responding to those questions. The result is a crowdsourced predictive model.

This paper introduces, for the first time, a method by which non-domain experts can be motivated to formulate independent variables as well as populate enough of these variables for successful modeling. In short, this is accomplished as follows. Users arrive at a Web site in which a behavioral outcome [such as household electricity usage or body mass index (BMI)] is to be modeled. Users provide their own outcome (such as their own BMI) and then answer questions that may be predictive of that outcome (such as “how often per week do you exercise”). Periodically, models are constructed against the growing data set that predict each user’s behavioral outcome. Users may also pose their own questions that, when answered by other users, become new independent variables in the modeling process. In essence, the task of discovering and populating predictive independent variables is outsourced to the user community.
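The periodic modeling step described above can be sketched minimally: given the answer matrix accumulated so far, rank each user-posed question by the strength of its correlation with the reported outcome. The paper uses richer model induction; the Pearson-correlation ranking and the names below (`CrowdModel`, `bestPredictor`) are illustrative assumptions.

```java
// Sketch of the periodic modeling step: given a growing user-generated
// question bank, rank each question by how strongly its answers
// correlate with the reported outcome (e.g., BMI).
public class CrowdModel {
    // Pearson correlation coefficient between two equal-length samples.
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; syy += y[i] * y[i]; sxy += x[i] * y[i];
        }
        double cov = sxy - sx * sy / n;
        double vx = sxx - sx * sx / n, vy = syy - sy * sy / n;
        return cov / Math.sqrt(vx * vy);
    }

    // Index of the question whose answers best predict the outcome.
    static int bestPredictor(double[][] answers, double[] outcome) {
        int best = 0;
        for (int q = 1; q < answers.length; q++) {
            if (Math.abs(pearson(answers[q], outcome))
                    > Math.abs(pearson(answers[best], outcome))) best = q;
        }
        return best;
    }

    public static void main(String[] args) {
        double[] bmi = {22, 27, 31, 24, 29};
        double[][] answers = {
            {3, 1, 0, 2, 1},   // "how often per week do you exercise?"
            {1, 0, 1, 0, 1},   // "do you own a bicycle?"
        };
        System.out.println(bestPredictor(answers, bmi)); // prints 0 (exercise)
    }
}
```

Each time new answers or questions arrive, rerunning this ranking is how the survey "discovers" which user-posed variables matter.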

ADVANTAGES OF PROPOSED SYSTEM:

Participants successfully uncovered at least one statistically significant predictor of the outcome variable. For the BMI outcome, the participants successfully formulated many of the correlates known to predict BMI and provided sufficiently honest values for those correlates to become predictive during the experiment. While our instantiations focus on energy and BMI, the proposed method is general and might, as the method improves, be useful for answering many difficult questions regarding why some outcomes differ from others.

 

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

 

• Processor : Pentium IV

• Speed : 1.1 GHz

• RAM : 256 MB (min)

• Hard Disk : 20 GB

• Keyboard : Standard Windows Keyboard

• Mouse : Two or Three Button Mouse

• Monitor : SVGA

 

SOFTWARE CONFIGURATION:-

 

• Operating System : Windows XP

• Programming Language : JAVA/J2EE

• Java Version : JDK 1.6 & above

• Database : MYSQL

 

REFERENCE:

Josh C. Bongard, Member, IEEE, Paul D. H. Hines, Member, IEEE, Dylan Conger, Peter Hurd, and Zhenyu Lu, “Crowdsourcing Predictors of Behavioral Outcomes”, IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS, VOL. 43, NO. 1, JANUARY 2013.

Mobile Relay Configuration in Data-Intensive Wireless Sensor Networks


ABSTRACT:

 

Wireless Sensor Networks (WSNs) are increasingly used in data-intensive applications such as microclimate monitoring, precision agriculture, and audio/video surveillance. A key challenge faced by data-intensive WSNs is to transmit all the data generated within an application’s lifetime to the base station despite the fact that sensor nodes have limited power supplies. We propose using low-cost disposable mobile relays to reduce the energy consumption of data-intensive WSNs. Our approach differs from previous work in two main aspects. First, it does not require complex motion planning of mobile nodes, so it can be implemented on a number of low-cost mobile sensor platforms. Second, we integrate the energy consumption due to both mobility and wireless transmissions into a holistic optimization framework. Our framework consists of three main algorithms. The first algorithm computes an optimal routing tree assuming no nodes can move. The second algorithm improves the topology of the routing tree by greedily adding new nodes, exploiting the mobility of the newly added nodes. The third algorithm improves the routing tree by relocating its nodes without changing its topology. This iterative algorithm converges on the optimal position for each node given the constraint that the routing tree topology does not change. We present efficient distributed implementations for each algorithm that require only limited, localized synchronization. Because we do not necessarily compute an optimal topology, our final routing tree is not necessarily optimal. However, our simulation results show that our algorithms significantly outperform the best existing solutions.


EXISTING SYSTEM:

A key challenge faced by data-intensive WSNs is to minimize the energy consumption of sensor nodes so that all the data generated within the lifetime of the application can be transmitted to the base station. Several different approaches have been proposed to significantly reduce the energy cost of WSNs by using the mobility of nodes. A robotic unit may move around the network and collect data from static nodes through one-hop or multihop transmissions. The mobile node may serve as the base station or a “data mule” that transports data between static nodes and the base station. Mobile nodes may also be used as relays that forward data from source nodes to the base station. Several movement strategies for mobile relays have been studied.

DISADVANTAGES OF EXISTING SYSTEM:

• First, the movement cost of mobile nodes is not accounted for in the total network energy consumption. Instead, mobile nodes are often assumed to have replenishable energy supplies, which is not always feasible due to the constraints of the physical environment.

• Second, complex motion planning of mobile nodes is often assumed in existing solutions, which introduces significant design complexity and manufacturing costs.

• Third, mobile nodes need to repeatedly compute optimal motion paths and change their location, orientation, and/or speed of movement. Such capabilities are usually not supported by existing low-cost mobile sensor platforms.

 

PROPOSED SYSTEM:

In this paper, we use low-cost disposable mobile relays to reduce the total energy consumption of data-intensive WSNs. Different from mobile base station or data mules, mobile relays do not transport data; instead, they move to different locations and then remain stationary to forward data along the paths from the sources to the base station. Thus, the communication delays can be significantly reduced compared with using mobile sinks or data mules. Moreover, each mobile node performs a single relocation unlike other approaches which require repeated relocations.
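The one-shot relocation decision can be sketched as follows: moving a relay to the centroid of its tree neighbors minimizes the sum of squared transmission distances, and the move is worthwhile only if the transmission-energy savings over the data still to be forwarded exceed the one-time movement cost. The quadratic energy model and the constants below are illustrative assumptions, not the paper's exact cost model.

```java
// Sketch of a single relay-relocation step: move a relay toward the
// centroid of its tree neighbors only when the transmission-energy
// savings over the remaining workload outweigh the one-shot movement
// cost. Energy constants are illustrative.
public class RelayRelocation {
    static final double TX_COST_PER_DIST2 = 1.0;   // J per distance^2 per data unit
    static final double MOVE_COST_PER_DIST = 10.0; // J per unit distance moved

    // Energy to forward one data unit from position p to all tree neighbors.
    static double txEnergy(double[] p, double[][] neighbors) {
        double e = 0;
        for (double[] q : neighbors) {
            double dx = p[0] - q[0], dy = p[1] - q[1];
            e += TX_COST_PER_DIST2 * (dx * dx + dy * dy);
        }
        return e;
    }

    // Returns the relay's new position (possibly unchanged).
    static double[] relocate(double[] p, double[][] neighbors, double dataUnits) {
        double cx = 0, cy = 0;
        for (double[] q : neighbors) { cx += q[0]; cy += q[1]; }
        double[] c = {cx / neighbors.length, cy / neighbors.length};
        double saving = dataUnits * (txEnergy(p, neighbors) - txEnergy(c, neighbors));
        double moveCost = MOVE_COST_PER_DIST * Math.hypot(p[0] - c[0], p[1] - c[1]);
        return saving > moveCost ? c : p;
    }

    public static void main(String[] args) {
        double[] relay = {0, 0};
        double[][] neighbors = {{10, 0}, {10, 10}}; // e.g., parent and child
        double[] moved = relocate(relay, neighbors, 100); // many data units to forward
        System.out.println(moved[0] + "," + moved[1]); // prints 10.0,5.0
    }
}
```

With only a trickle of data to forward, the same relay stays put, which mirrors why mobility pays off specifically in data-intensive workloads.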

ADVANTAGES OF PROPOSED SYSTEM:

• Our approach takes advantage of this capability by assuming that we have a large number of mobile relay nodes.

• On the other hand, due to low manufacturing cost, existing mobile sensor platforms are typically powered by batteries and capable of only limited mobility.

• Consistent with this constraint, our approach requires only one-shot relocation to designated positions after deployment. In contrast, existing mobility approaches typically assume a small number of powerful mobile nodes, which does not exploit the availability of many low-cost mobile nodes.

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

 

• Processor : Pentium IV

• Speed : 1.1 GHz

• RAM : 256 MB (min)

• Hard Disk : 20 GB

• Keyboard : Standard Windows Keyboard

• Mouse : Two or Three Button Mouse

• Monitor : SVGA

 

SOFTWARE CONFIGURATION:-

                          

• Operating System : Windows XP

• Programming Language : JAVA

• Java Version : JDK 1.6 & above

• Database : MYSQL

 

REFERENCE:

Fatme El-Moukaddem, Eric Torng, and Guoliang Xing, Member, IEEE, “Mobile Relay Configuration in Data-Intensive Wireless Sensor Networks”, IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 12, NO. 2, FEBRUARY 2013.

Toward a Statistical Framework for Source Anonymity in Sensor Networks


ABSTRACT:

In certain applications, the locations of events reported by a sensor network need to remain anonymous. That is, unauthorized observers must be unable to detect the origin of such events by analyzing the network traffic. Known as the source anonymity problem, this problem has emerged as an important topic in the security of wireless sensor networks, with a variety of techniques based on different adversarial assumptions being proposed. In this work, we present a new framework for modeling, analyzing, and evaluating anonymity in sensor networks. The novelty of the proposed framework is twofold: first, it introduces the notion of “interval indistinguishability” and provides a quantitative measure to model anonymity in wireless sensor networks; second, it maps source anonymity to the statistical problem of binary hypothesis testing with nuisance parameters. We then analyze existing solutions for designing anonymous sensor networks using the proposed model. We show how mapping source anonymity to binary hypothesis testing with nuisance parameters leads to converting the problem of exposing private source information into searching for an appropriate data transformation that removes or minimizes the effect of the nuisance information. By doing so, we transform the problem from analyzing real-valued sample points to binary codes, which opens the door for coding theory to be incorporated into the study of anonymous sensor networks. Finally, we discuss how existing solutions can be modified to improve their anonymity.


 

EXISTING SYSTEM:

While transmitting the “description” of a sensed event in a private manner can be achieved via encryption primitives, hiding the timing and spatial information of reported events cannot be achieved via cryptographic means.

Encrypting a message before transmission, for instance, can hide the context of the message from unauthorized observers, but the mere existence of the ciphertext is indicative of information transmission.

In the existing literature, the source anonymity problem has been addressed under two different types of adversaries, namely, local and global adversaries. A local adversary is defined to be an adversary having limited mobility and partial view of the network traffic. Routing based techniques have been shown to be effective in hiding the locations of reported events against local adversaries.

A global adversary is defined to be an adversary with the ability to monitor the traffic of the entire network (e.g., coordinating adversaries spatially distributed over the network). Against global adversaries, routing-based techniques are known to be ineffective in concealing location information in event-triggered transmission. This is due to the fact that, since a global adversary has a full spatial view of the network, it can immediately detect the origin and time of the event-triggered transmission.

DISADVANTAGES OF EXISTING SYSTEM:

The source anonymity problem in wireless sensor networks is the problem of studying techniques that provide time and location privacy for events reported by sensor nodes. (Time and location privacy will be used interchangeably with source anonymity throughout the paper.)

The source anonymity problem has been drawing increasing research attention recently.

PROPOSED SYSTEM:

In this paper, we investigate the problem of statistical source anonymity in wireless sensor networks. The main contributions of this paper can be summarized by the following points.

We introduce the notion of “interval indistinguishability” and illustrate how the problem of statistical source anonymity can be mapped to the problem of interval indistinguishability.

We propose a quantitative measure to evaluate statistical source anonymity in sensor networks.

We map the problem of breaching source anonymity to the statistical problem of binary hypothesis testing with nuisance parameters.

We demonstrate the significance of mapping the problem at hand to a well-studied problem in uncovering hidden vulnerabilities. In particular, realizing that the SSA problem can be mapped to hypothesis testing with nuisance parameters implies that breaching source anonymity can be converted to finding an appropriate data transformation that removes the nuisance information.

We analyze existing solutions under the proposed model. By finding a transformation of the observed data, we convert the problem from analyzing real-valued samples to binary codes and identify a possible anonymity breach in the current solutions for the SSA problem.

We pose and answer the important research question of why previous studies were unable to detect the possible anonymity breach identified in this paper.

We discuss, by looking at the problem as a coding problem, a new direction to enhance the anonymity of existing SSA solutions.
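The kind of data transformation discussed above can be sketched by quantizing inter-transmission times into a binary code and testing whether the fraction of short intervals deviates from what fake (dummy) traffic alone would produce. The threshold rule and the names below (`IntervalCode`, `looksLikeRealEvent`) are illustrative assumptions, not the paper's exact construction.

```java
// Sketch of the real-valued-to-binary transformation: quantize
// inter-transmission times into a binary code (1 = interval shorter
// than the nominal fake-traffic mean) and flag an observation window
// whose fraction of 1s deviates too far from 1/2.
public class IntervalCode {
    static int[] toBinaryCode(double[] intervals, double nominalMean) {
        int[] bits = new int[intervals.length];
        for (int i = 0; i < intervals.length; i++) {
            bits[i] = intervals[i] < nominalMean ? 1 : 0;
        }
        return bits;
    }

    // A real event embedded in fake traffic skews the code toward 1s.
    static boolean looksLikeRealEvent(int[] bits, double tolerance) {
        double ones = 0;
        for (int b : bits) ones += b;
        return Math.abs(ones / bits.length - 0.5) > tolerance;
    }

    public static void main(String[] args) {
        double nominalMean = 1.0;
        double[] fakeOnly  = {0.9, 1.2, 0.8, 1.1, 1.05, 0.95}; // dummy traffic
        double[] withEvent = {0.2, 0.3, 0.25, 1.1, 0.3, 0.2};  // burst of real reports
        System.out.println(looksLikeRealEvent(toBinaryCode(fakeOnly, nominalMean), 0.2));  // prints false
        System.out.println(looksLikeRealEvent(toBinaryCode(withEvent, nominalMean), 0.2)); // prints true
    }
}
```

The point of the binary view is that the nuisance parameters (the exact interval values) drop out, leaving a code whose statistics an adversary, or a defender, can test directly.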

ADVANTAGES OF PROPOSED SYSTEM:

• Removes or minimizes the effect of the nuisance information.

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

• Processor : Pentium IV

• Speed : 1.1 GHz

• RAM : 256 MB (min)

• Hard Disk : 20 GB

• Keyboard : Standard Windows Keyboard

• Mouse : Two or Three Button Mouse

• Monitor : SVGA

 

SOFTWARE CONFIGURATION:-

• Operating System : Windows XP

• Programming Language : JAVA

• Java Version : JDK 1.6 & above

 

REFERENCE:

Basel Alomair, Member, IEEE, Andrew Clark, Student Member, IEEE, Jorge Cuellar, and Radha Poovendran, Senior Member, IEEE, “Toward a Statistical Framework for Source Anonymity in Sensor Networks”, IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 12, NO. 2, FEBRUARY 2013.

Load Rebalancing for Distributed File Systems in Clouds


ABSTRACT:

Distributed file systems are key building blocks for cloud computing applications based on the MapReduce programming paradigm. In such file systems, nodes simultaneously serve computing and storage functions; a file is partitioned into a number of chunks allocated in distinct nodes so that MapReduce tasks can be performed in parallel over the nodes. However, in a cloud computing environment, failure is the norm, and nodes may be upgraded, replaced, and added in the system. Files can also be dynamically created, deleted, and appended. This results in load imbalance in a distributed file system; that is, the file chunks are not distributed as uniformly as possible among the nodes. Emerging distributed file systems in production systems strongly depend on a central node for chunk reallocation. This dependence is clearly inadequate in a large-scale, failure-prone environment because the central load balancer is put under considerable workload that is linearly scaled with the system size, and may thus become the performance bottleneck and the single point of failure. In this paper, a fully distributed load rebalancing algorithm is presented to cope with the load imbalance problem. Our algorithm is compared against a centralized approach in a production system and a competing distributed solution presented in the literature. The simulation results indicate that our proposal is comparable with the existing centralized approach and considerably outperforms the prior distributed algorithm in terms of load imbalance factor, movement cost, and algorithmic overhead. The performance of our proposal implemented in the Hadoop distributed file system is further investigated in a cluster environment.


EXISTING SYSTEM:

State-of-the-art distributed file systems (e.g., Google GFS and Hadoop HDFS) in clouds rely on central nodes to manage the metadata information of the file systems and to balance the loads of storage nodes based on that metadata. The centralized approach simplifies the design and implementation of a distributed file system. However, recent experience shows that as the number of storage nodes, the number of files, and the number of accesses to files grow linearly, the central nodes (e.g., the master in Google GFS) become a performance bottleneck, as they are unable to accommodate the large number of file accesses from clients and MapReduce applications.

DISADVANTAGES OF EXISTING SYSTEM:

Most existing solutions are designed without considering both movement cost and node heterogeneity, and they may introduce significant maintenance network traffic to the DHTs.

PROPOSED SYSTEM:

• In this paper, we are interested in studying the load rebalancing problem in distributed file systems specialized for large-scale, dynamic, and data-intensive clouds. (The terms “rebalance” and “balance” are interchangeable in this paper.) Such a large-scale cloud has hundreds or thousands of nodes (and may reach tens of thousands in the future).

• Our objective is to allocate the chunks of files as uniformly as possible among the nodes so that no node manages an excessive number of chunks. Additionally, we aim to reduce the network traffic (or movement cost) caused by rebalancing the loads of nodes as much as possible, to maximize the network bandwidth available to normal applications. Moreover, as failure is the norm, nodes are newly added to sustain the overall system performance, resulting in the heterogeneity of nodes. Exploiting capable nodes to improve the system performance is thus demanded.

• Our proposal not only takes advantage of physical network locality in the reallocation of file chunks to reduce the movement cost, but also exploits capable nodes to improve the overall system performance.
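A single round of the distributed rebalancing can be sketched as follows: each node estimates the average load from its partial view of the system and, if lightly loaded, pulls chunks from the heaviest node it knows. Real systems migrate chunk replicas over the network; representing a node as a bare chunk count (the names `LoadRebalance` and `rebalanceStep` are ours) is an illustrative simplification of the paper's algorithm.

```java
import java.util.Arrays;

// Sketch of one distributed rebalancing round: a lightly loaded node
// compares its chunk count to the average over its (partial) view and
// migrates chunks from the heaviest node it knows about. No central
// coordinator or global knowledge is involved.
public class LoadRebalance {
    // loads[i] = number of chunks on node i; view = node indices `me` knows.
    static void rebalanceStep(int[] loads, int me, int[] view) {
        double avg = 0;
        for (int i : view) avg += loads[i];
        avg /= view.length; // local estimate of the global average
        if (loads[me] >= avg) return; // only light nodes pull chunks

        int heaviest = view[0];
        for (int i : view) if (loads[i] > loads[heaviest]) heaviest = i;

        // Move just enough chunks to bring both nodes toward the average.
        int transfer = Math.min((int) avg - loads[me], loads[heaviest] - (int) avg);
        if (transfer > 0) {
            loads[heaviest] -= transfer;
            loads[me] += transfer;
        }
    }

    public static void main(String[] args) {
        int[] loads = {1, 9, 5, 5};
        rebalanceStep(loads, 0, new int[]{0, 1, 2, 3});
        System.out.println(Arrays.toString(loads)); // prints [5, 5, 5, 5]
    }
}
```

Because each node acts only on its own view, many such steps can run concurrently across the system; a fuller sketch would also weight the transfer by node capacity and network locality, as the proposal does.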

ADVANTAGES OF PROPOSED SYSTEM:

• This eliminates the dependence on central nodes.

• Our proposed algorithm operates in a distributed manner, in which nodes perform their load-balancing tasks independently without synchronization or global knowledge of the system.

• The algorithm reduces the algorithmic overhead introduced to the DHTs as much as possible.

ALGORITHM USED:

• Load Rebalancing Algorithm

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

 

• Processor : Pentium IV

• Speed : 1.1 GHz

• RAM : 256 MB (min)

• Hard Disk : 20 GB

• Keyboard : Standard Windows Keyboard

• Mouse : Two or Three Button Mouse

• Monitor : SVGA

 

SOFTWARE CONFIGURATION:-

 

• Operating System : Windows XP

• Programming Language : JAVA

• Java Version : JDK 1.6 & above

 

REFERENCE:

Hung-Chang Hsiao, Member, IEEE Computer Society, Hsueh-Yi Chung, Haiying Shen, Member, IEEE, and Yu-Chang Chao, “Load Rebalancing for Distributed File Systems in Clouds”, IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, VOL. 24, NO. 5, MAY 2013.

Fast Transmission to Remote Cooperative Groups: A New Key Management Paradigm


ABSTRACT:

The problem of efficiently and securely broadcasting to a remote cooperative group occurs in many newly emerging networks. A major challenge in devising such systems is to overcome the obstacles of the potentially limited communication from the group to the sender, the unavailability of a fully trusted key generation center, and the dynamics of the sender. The existing key management paradigms cannot deal with these challenges effectively. In this paper, we circumvent these obstacles and close this gap by proposing a novel key management paradigm. The new paradigm is a hybrid of traditional broadcast encryption and group key agreement. In such a system, each member maintains a single public/secret key pair. Upon seeing the public keys of the members, a remote sender can securely broadcast to any intended subgroup chosen in an ad hoc way. Following this model, we instantiate a scheme that is proven secure in the standard model. Even if all the non-intended members collude, they cannot extract any useful information from the transmitted messages. After the public group encryption key is extracted, both the computation overhead and the communication cost are independent of the group size. Furthermore, our scheme facilitates simple yet efficient member deletion/addition and flexible rekeying strategies. Its strong security against collusion, its constant overhead, and its implementation friendliness without relying on a fully trusted authority render our protocol a very promising solution to many applications.

 


EXISTING SYSTEM:

WMNs have recently been suggested as a promising low-cost approach to provide last-mile high-speed Internet access. A typical WMN is a multihop hierarchical wireless network. The top layer consists of high-speed wired Internet entry points. The second layer is made up of stationary mesh routers serving as a multihop backbone, connecting to each other and to the Internet via long-range high-speed wireless techniques. The bottom layer includes a large number of mobile network users. The end-users access the network either by a direct wireless link or through a chain of other peer users leading to a nearby mesh router; the router further connects to remote users through the wireless backbone and the Internet. Security and privacy issues are of utmost concern in pushing the success of WMNs for their wide deployment and for supporting service-oriented applications. For instance, a manager on her way to a holiday may want to send a confidential e-mail to some staff of her company via WMNs, so that the intended staff members can read the e-mail with their mobile devices (laptops, PDAs, smart phones, etc.). Due to the intrinsically open and distributed nature of WMNs, it is essential to enforce access control of sensitive information to cope with both eavesdroppers and malicious attackers.

DISADVANTAGES OF EXISTING SYSTEM:

A major challenge in devising such systems is to overcome the obstacles of the potentially limited communication from the group to the sender, the unavailability of a fully trusted key generation center, and the dynamics of the sender. The existing key management paradigms cannot deal with these challenges effectively.

 

PROPOSED SYSTEM:

Our contribution includes three aspects. First, we formalize the problem of secure transmission to remote cooperative groups, in which the core is to establish a one-to-many channel securely and efficiently under certain constraints.

Second, we propose a new key management paradigm allowing secure and efficient transmissions to remote cooperative groups by effectively exploiting the mitigating features and circumventing the constraints discussed above. The new approach is a hybrid of group key agreement and public-key broadcast encryption.

Third, we present a provably secure protocol in the new key management paradigm and perform extensive experiments in the context of mobile ad hoc networks. In the proposed protocol, after extraction of the public group encryption key in the first run, both the subsequent encryption by the sender and the decryption by each receiver are of constant complexity, even in the case of member changes or system updates for rekeying.
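The member model above can be illustrated with a toy Java sketch. This is NOT the paper's scheme (which achieves constant-size ciphertexts); it is a naive hybrid, with class and method names invented for illustration, that only shows the one-key-pair-per-member setup and ad hoc subgroup selection: the sender, seeing only public keys, wraps a fresh AES session key for each intended member, and non-intended members receive nothing they can decrypt.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.security.*;
import java.util.*;

// Toy stand-in for the paradigm: each member keeps a single
// public/secret key pair; a remote sender broadcasts to an ad hoc
// subset by wrapping one session key per chosen member. Unlike the
// paper's scheme, the header here grows linearly with the subset size.
public class SubgroupBroadcast {
    public static KeyPair memberKeyPair() throws Exception {
        KeyPairGenerator g = KeyPairGenerator.getInstance("RSA");
        g.initialize(2048);
        return g.generateKeyPair();
    }

    // Sender side: wrap the AES session key for each intended recipient.
    public static Map<Integer, byte[]> wrapForSubset(
            Map<Integer, PublicKey> subset, SecretKey session) throws Exception {
        Map<Integer, byte[]> wrapped = new HashMap<>();
        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        for (Map.Entry<Integer, PublicKey> e : subset.entrySet()) {
            rsa.init(Cipher.WRAP_MODE, e.getValue());
            wrapped.put(e.getKey(), rsa.wrap(session));
        }
        return wrapped;
    }

    // Receiver side: an intended member unwraps with its secret key;
    // members outside the subset have no header entry at all.
    public static SecretKey unwrap(byte[] blob, PrivateKey sk) throws Exception {
        Cipher rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        rsa.init(Cipher.UNWRAP_MODE, sk);
        return (SecretKey) rsa.unwrap(blob, "AES", Cipher.SECRET_KEY);
    }

    public static void main(String[] args) throws Exception {
        KeyPair alice = memberKeyPair(), carol = memberKeyPair();
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey session = kg.generateKey();
        // Ad hoc subgroup chosen by the sender: members 1 and 3 only.
        Map<Integer, PublicKey> subset = new HashMap<>();
        subset.put(1, alice.getPublic());
        subset.put(3, carol.getPublic());
        Map<Integer, byte[]> header = wrapForSubset(subset, session);
        SecretKey recovered = unwrap(header.get(1), alice.getPrivate());
        System.out.println(Arrays.equals(
                recovered.getEncoded(), session.getEncoded())); // true
    }
}
```

The point of the sketch is only the trust model: no key generation center is involved, and the subgroup is chosen per message.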

 

ADVANTAGES OF PROPOSED SYSTEM:

The common problem is to enable a sender to securely transmit messages to a remote cooperative group. A solution to this problem must meet several constraints, all of which the proposed paradigm satisfies.

·        First, the sender is remote and can be dynamic.

·        Second, the transmission may cross various networks including open insecure networks before reaching the intended recipients.

·        Third, the communication from the group members to the sender may be limited. Also, the sender may wish to choose only a subset of the group as the intended recipients.

·        Furthermore, it is hard to resort to a fully trusted third party to secure the communication. In contrast to the above constraints, mitigating features are that the group members are cooperative and the communication among them is local and efficient.

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

 

Processor                  : Pentium IV

Speed                      : 1.1 GHz

RAM                        : 256 MB (min)

Hard Disk                  : 20 GB

Key Board                  : Standard Windows Keyboard

Mouse                      : Two or Three Button Mouse

Monitor                    : SVGA

 

SOFTWARE CONFIGURATION:-

 

Operating System           : Windows XP

Programming Language       : JAVA

Java Version               : JDK 1.6 and above

 

REFERENCE:

Qianhong Wu, Member, IEEE, Bo Qin, Lei Zhang, Josep Domingo-Ferrer, Fellow, IEEE, and Jesús A. Manjón “Fast Transmission to Remote Cooperative Groups: A New Key Management Paradigm”- IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 21, NO. 2, APRIL 2013.

Back-Pressure-Based Packet-by-Packet Adaptive Routing in Communication Networks

 

ABSTRACT:

Back-pressure-based adaptive routing algorithms where each packet is routed along a possibly different path have been extensively studied in the literature. However, such algorithms typically result in poor delay performance and involve high implementation complexity. In this paper, we develop a new adaptive routing algorithm built upon the widely studied back-pressure algorithm. We decouple the routing and scheduling components of the algorithm by designing a probabilistic routing table that is used to route packets to per-destination queues. The scheduling decisions in the case of wireless networks are made using counters called shadow queues. The results are also extended to the case of networks that employ simple forms of network coding. In that case, our algorithm provides a low-complexity solution to optimally exploit the routing–coding tradeoff.

 


EXISTING SYSTEM:

The back-pressure algorithm has been widely studied in the literature. While the ideas behind scheduling using the weights suggested in the original work have been successful in practice in base stations and routers, the adaptive routing algorithm is rarely used. The main reason for this is that the routing algorithm can lead to poor delay performance due to routing loops. Additionally, the implementation of the back-pressure algorithm requires each node to maintain per-destination queues, which can be burdensome for a wireline or wireless router.

DISADVANTAGES OF EXISTING SYSTEM:

Existing algorithms typically result in poor delay performance and involve high implementation complexity.

PROPOSED SYSTEM:

The main purpose of this paper is to study whether the shadow-queue approach extends to the case of joint scheduling and routing. The first contribution is to come up with a formulation in which the number of hops is minimized. It is interesting to contrast this contribution with prior work: a related formulation has the same objective as ours, but its solution involves per-hop queues, which dramatically increases the number of queues, even compared to the back-pressure algorithm. Our solution is significantly different: we use the same number of shadow queues as the back-pressure algorithm, but the number of real queues is very small (one per neighbor). The new idea here is to perform routing via probabilistic splitting, which allows the dramatic reduction in the number of real queues. Finally, an important observation in this paper, not found in prior work, is that the partial "decoupling" of shadow back-pressure and real packet transmission allows us to activate more links than a regular back-pressure algorithm would. This idea appears to be essential for reducing delays in the routing case, as shown in the simulations.
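The routing-via-probabilistic-splitting idea can be sketched as follows. This is a hypothetical illustration with invented class names, not the paper's implementation: in the paper the probabilities are updated from shadow-queue differentials, whereas here a fixed routing table is simply given, and each packet is forwarded to the per-neighbor queue drawn from it.

```java
import java.util.Random;

// Sketch of routing via probabilistic splitting: a node keeps a
// probability distribution over next hops and one real queue per
// neighbor, independent of how many destinations exist.
public class ProbabilisticRouter {
    private final double[] probs; // probs[i]: chance of picking neighbor i
    private final Random rng;

    public ProbabilisticRouter(double[] probs, long seed) {
        this.probs = probs;
        this.rng = new Random(seed); // seeded for reproducibility
    }

    // Draw a neighbor index according to the routing table.
    public int route() {
        double u = rng.nextDouble(), acc = 0;
        for (int i = 0; i < probs.length; i++) {
            acc += probs[i];
            if (u < acc) return i;
        }
        return probs.length - 1; // guard against floating-point rounding
    }

    public static void main(String[] args) {
        // Split traffic 70/30 between two neighbors; only two real
        // per-neighbor queues are needed regardless of destination count.
        ProbabilisticRouter r = new ProbabilisticRouter(new double[]{0.7, 0.3}, 42);
        int[] counts = new int[2];
        for (int i = 0; i < 10000; i++) counts[r.route()]++;
        System.out.println(counts[0] > counts[1]); // true: roughly 70% to neighbor 0
    }
}
```

The contrast with the classic scheme is in the state kept per node: one distribution and one queue per neighbor here, versus one queue per destination under plain back-pressure.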

ADVANTAGES OF PROPOSED SYSTEM:

Our adaptive routing algorithm can be modified to automatically realize the routing–coding tradeoff with good delay performance.

The routing algorithm is designed to minimize the average number of hops used by packets in the network. This idea, along with the scheduling/routing decoupling, leads to delay reduction compared with the traditional back-pressure algorithm.

REFERENCE:

Eleftheria Athanasopoulou, Member, IEEE, Loc X. Bui, Associate Member, IEEE, Tianxiong Ji, Member, IEEE, R. Srikant, Fellow, IEEE, and Alexander Stolyar, “Back-Pressure-Based Packet-by-Packet Adaptive Routing in Communication Networks”, IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 21, NO. 1, FEBRUARY 2013.