Security Analysis of a Single Sign-On Mechanism For Distributed Computer Networks


  ABSTRACT:

Single sign-on (SSO) is an authentication mechanism that enables a legal user with a single credential to be authenticated by multiple service providers in a distributed computer network. Recently, Chang and Lee proposed a new SSO scheme and claimed its security by providing well-organized security arguments. In this paper, however, we demonstrate that their scheme is actually insecure as it fails to meet credential privacy and soundness of authentication. Specifically, we present two impersonation attacks. The first attack allows a malicious service provider, who has successfully communicated with a legal user twice, to recover the user’s credential and then to impersonate the user to access resources and services offered by other service providers. In another attack, an outsider without any credential may be able to enjoy network services freely by impersonating any legal user or a nonexistent user. We identify the flaws in their security arguments to explain why such attacks are possible against their SSO scheme. Our attacks also apply to another SSO scheme proposed by Hsu and Chuang, which inspired the design of the Chang–Lee scheme. Moreover, by employing an efficient verifiable encryption of RSA signatures proposed by Ateniese, we propose an improvement for repairing the Chang–Lee scheme. We promote the formal study of the soundness of authentication as one open problem.

PROJECT OUTPUT VIDEO: (Click the below link to see the project output video):

 EXISTING SYSTEM:

On the other hand, it is usually not practical to ask a user to maintain distinct pairs of identity and password for different service providers, since this could increase the workload of both users and service providers as well as the communication overhead of networks. In an SSO scheme, after obtaining a credential from a trusted authority for a short period, each legal user’s authentication agent can use this single credential to complete authentication on behalf of the user and then access multiple service providers. Intuitively, an SSO scheme should meet at least three basic security requirements: unforgeability, credential privacy, and soundness. Unforgeability demands that, except for the trusted authority, even a collusion of users and service providers is not able to forge a valid credential for a new user. Credential privacy guarantees that colluding dishonest service providers should not be able to fully recover a user’s credential and then impersonate the user to log in to other service providers. Soundness means that an unregistered user without a credential should not be able to access the services offered by service providers.

DISADVANTAGES OF EXISTING SYSTEM:

  • An earlier SSO scheme has two weaknesses; in particular, an outsider can forge a valid credential for any random identity by mounting a credential forging attack, since the scheme employed a naive RSA signature without any hash function to issue credentials.
  • Although the scheme was claimed to be suitable for mobile devices due to its high efficiency in computation and communication, its security does not hold as claimed.

PROPOSED SYSTEM

 

The first attack, the “credential recovering attack,” compromises the credential privacy in the scheme, as a malicious service provider is able to recover the credential of a legal user. The other attack, an “impersonation attack without credentials,” demonstrates how an outside attacker may be able to freely make use of resources and services offered by service providers, since the attacker can successfully impersonate a legal user without holding a valid credential, thus violating the requirement of soundness for an SSO scheme. In real life, these attacks may put both users and service providers at high risk. In fact, this is a traditional as well as prudential way to deal with trustworthiness, since we cannot simply assume that, besides the trusted authority, all service providers are also trusted. The basic reason is that assuming the existence of a trusted party is the strongest supposition in cryptography, but such a party is usually very costly to develop and maintain. In particular, collusion impersonation attacks have been defined as a way to capture the scenarios in which malicious service providers may recover a user’s credential and then impersonate the user to log in to other service providers. It is easy to see that the above credential recovering attack is simply a special case of a collusion impersonation attack in which a single malicious service provider can recover a user’s credential. It must be emphasized that impersonation attacks without valid credentials seriously violate the security of SSO schemes, as they allow attackers to be successfully authenticated without first obtaining a valid credential from the trusted authority after registration.

ADVANTAGES OF PROPOSED SYSTEM:

 

  • The authors claimed to be able to “prove that [the user and the service provider] are able to authenticate each other using our protocol,” but they provided no argument to show why each party could not be impersonated by an attacker. Second, the authors discussed only informally why their scheme could withstand impersonation attacks.
  • The authors did not give details to show how the BAN logic can be used to prove that their scheme guarantees mutual authentication.
  • In other words, in an SSO scheme suffering from these attacks there are alternative ways that enable passing authentication without holding a valid credential.

MODULES:

• User Identification Phase
• Attacks against the Chang–Lee Scheme
• Recovering Attack
• Non-interactive zero-knowledge (NZK)
• Security Analysis

MODULE DESCRIPTION:

 

User Identification Phase

To access the resources of a service provider, the user needs to go through the specified authentication protocol. In this protocol, the user and the service provider each choose a random integer, three random nonces are exchanged, and a symmetric-key encryption scheme is used to protect the confidentiality of the user’s identity.
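As a rough illustration of the symmetric-encryption step, the sketch below encrypts a user identity under a session key with a fresh random nonce using AES-GCM. The identity string, key size, and class name are placeholders; the actual Chang–Lee protocol messages also carry the other nonces and the RSA-based values, which are not shown here.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.security.SecureRandom;
import java.util.Base64;

public class IdentityEncryptionSketch {
    public static void main(String[] args) throws Exception {
        SecureRandom rng = new SecureRandom();

        // Fresh session key shared between user and service provider
        // (in the real protocol it is derived from the exchanged random values).
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128, rng);
        SecretKey sessionKey = kg.generateKey();

        // Random nonce for this run of the protocol.
        byte[] nonce = new byte[12];
        rng.nextBytes(nonce);

        // Encrypt the user's identity so that eavesdroppers cannot learn it.
        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, sessionKey, new GCMParameterSpec(128, nonce));
        byte[] hiddenIdentity = cipher.doFinal("user-id-0042".getBytes("UTF-8"));

        System.out.println("protected identity: " + Base64.getEncoder().encodeToString(hiddenIdentity));
    }
}
```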

Attacks against the Chang–Lee Scheme

 

The Chang–Lee scheme is actually not a secure SSO scheme because there are two effective and concrete impersonation attacks. The first attack, the “credential recovering attack,” compromises the credential privacy in the Chang–Lee scheme, as a malicious service provider is able to recover the credential of a legal user. The other attack, an “impersonation attack without credentials,” demonstrates how an outside attacker may be able to freely make use of resources and services offered by service providers, since the attacker can successfully impersonate a legal user without holding a valid credential, thus violating the requirement of soundness for an SSO scheme. In real life, these attacks may put both users and service providers at high risk.

Recovering Attack

A malicious service provider can mount the above attack. On the one hand, the Chang–Lee SSO scheme specifies that only the credential issuer is the trusted party, which implies that service providers are not trusted parties and could be malicious. On the other hand, by agreeing that “the Wu–Hsu modified version could not protect the user’s token against a malicious service provider,” the work also implicitly acknowledges the potential for attacks from malicious service providers against SSO schemes. Moreover, if all service providers were assumed to be trusted, then to identify himself/herself a user could simply encrypt his/her credential under the RSA public key of the service provider. The service provider could then decrypt this ciphertext to get the user’s credential and verify its validity by checking that it is a correct signature issued by the trusted authority. In fact, such a straightforward scheme under this strong assumption is much simpler, more efficient, and has better security, at least against this type of attack.

Non-interactive zero-knowledge (NZK)

The basic idea of verifiable encryption of signatures (VES) is that Alice, who has a key pair of a signature scheme, signs a given message, encrypts the resulting signature under the trusted party’s public key, and uses a non-interactive zero-knowledge (NZK) proof to convince Bob that she has signed the message and that the trusted party can recover the signature from the ciphertext. After validating the proof, Bob can send his signature for the same message to Alice. For the purpose of fair exchange, Alice should send her signature in plaintext back to Bob after accepting Bob’s signature.
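To make the VES idea above concrete, here is a minimal Java sketch of the sign-then-escrow step: Alice signs a message and encrypts the raw signature under a trusted party's public RSA key so the trusted party could later recover it. The key sizes, message, and class name are illustrative only, and the NZK proof that convinces Bob the ciphertext really contains a valid signature (the core of Ateniese's construction) is omitted.

```java
import java.security.*;
import javax.crypto.Cipher;

public class VesSketch {
    public static void main(String[] args) throws Exception {
        // Alice's RSA signing key pair (toy 1024-bit size so the raw
        // signature fits inside one RSA-OAEP block below).
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(1024);
        KeyPair alice = gen.generateKeyPair();

        // Trusted party's RSA encryption key pair.
        gen.initialize(2048);
        KeyPair trustedParty = gen.generateKeyPair();

        byte[] message = "message to be exchanged".getBytes("UTF-8");

        // 1. Alice signs the message.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(alice.getPrivate());
        signer.update(message);
        byte[] sig = signer.sign();

        // 2. Alice encrypts the signature under the trusted party's public key.
        //    (The NZK proof that the ciphertext hides a valid signature is omitted.)
        Cipher enc = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        enc.init(Cipher.ENCRYPT_MODE, trustedParty.getPublic());
        byte[] escrowed = enc.doFinal(sig);

        // 3. If Alice later refuses to reveal sig, the trusted party can recover it.
        Cipher dec = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding");
        dec.init(Cipher.DECRYPT_MODE, trustedParty.getPrivate());
        byte[] recovered = dec.doFinal(escrowed);

        // 4. Bob (or anyone) verifies the recovered signature.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(alice.getPublic());
        verifier.update(message);
        System.out.println("signature valid: " + verifier.verify(recovered));
    }
}
```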

Security Analysis

 

We analyze the security of the improved SSO scheme by focusing on the user authentication part, especially soundness and credential privacy, for two reasons. On the one hand, the unforgeability of the credential is guaranteed by the unforgeability of RSA signatures; on the other hand, the security of service provider authentication is ensured by the unforgeability of the secure signature scheme chosen by each service provider.

 SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

• Processor: Pentium III
• Speed: 1.1 GHz
• RAM: 256 MB (min)
• Hard Disk: 20 GB
• Floppy Drive: 1.44 MB
• Keyboard: Standard Windows Keyboard
• Mouse: Two or Three Button Mouse
• Monitor: SVGA

 

SOFTWARE CONFIGURATION:-

• Operating System: Windows XP / 7
• Programming Language: JAVA
• Java Version: JDK 1.6 & above

REFERENCE:

Guilin Wang, Jiangshan Yu, and Qi Xie, “Security Analysis of a Single Sign-On Mechanism for Distributed Computer Networks,” IEEE Transactions on Industrial Informatics, vol. 9, no. 1, February 2013.

https://www.youtube.com/watch?v=N7ARcMaK91o

Privacy Preserving Data Sharing With Anonymous ID Assignment


ABSTRACT:

An algorithm for anonymous sharing of private data among N parties is developed. This technique is used iteratively to assign these nodes ID numbers ranging from 1 to N. This assignment is anonymous in that the identities received are unknown to the other members of the group. Resistance to collusion among other members is verified in an information theoretic sense when private communication channels are used. This assignment of serial numbers allows more complex data to be shared and has applications to other problems in privacy preserving data mining, collision avoidance in communications, and distributed database access. The required computations are distributed without using a trusted central authority. Existing and new algorithms for assigning anonymous IDs are examined with respect to trade-offs between communication and computational requirements. The new algorithms are built on top of a secure sum data mining operation using Newton’s identities and Sturm’s theorem. An algorithm for distributed solution of certain polynomials over finite fields enhances the scalability of the algorithms. Markov chain representations are used to find statistics on the number of iterations required, and computer algebra gives closed form results for the completion rates.


EXISTING SYSTEM:

A secure computation function widely used in the literature is secure sum that allows parties to compute the sum of their individual inputs without disclosing the inputs to one another. This function is popular in data mining applications and also helps characterize the complexities of the secure multiparty computation.
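A minimal sketch of the secure-sum idea, assuming a ring of parties and a public modulus: the initiating party masks its input with a random value, every party adds its own input to the running total, and the initiator removes the mask at the end so only the sum is revealed. The inputs and modulus below are made up for illustration.

```java
import java.security.SecureRandom;

public class SecureSumSketch {
    public static void main(String[] args) {
        long[] privateInputs = {12, 7, 30, 5};   // one value per party (hypothetical data)
        long modulus = 1_000_003L;               // public modulus, larger than any possible sum
        SecureRandom rng = new SecureRandom();

        // Party 1 masks its input with a random value and passes the total on.
        long mask = Math.floorMod(rng.nextLong(), modulus);
        long running = Math.floorMod(privateInputs[0] + mask, modulus);

        // Each remaining party adds its own input; it sees only a masked total.
        for (int i = 1; i < privateInputs.length; i++) {
            running = Math.floorMod(running + privateInputs[i], modulus);
        }

        // Party 1 removes its mask and announces the sum.
        long sum = Math.floorMod(running - mask, modulus);
        System.out.println("secure sum = " + sum);   // 54
    }
}
```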

DISADVANTAGES OF EXISTING SYSTEM:

The algorithms for mental poker are more complex and utilize cryptographic methods as players must, in general, be able to prove that they held the winning hand. Throughout this paper, we assume that the participants are semi-honest, also known as passive or honest-but-curious, and execute their required protocols faithfully. Given a semi-honest, reliable, and trusted third party, a permutation can also be created using an anonymous routing protocol.

PROPOSED SYSTEM:

This work deals with efficient algorithms for assigning identifiers (IDs) to the nodes of a network in such a way that the IDs are anonymous, using a distributed computation with no central authority. Given N nodes, this assignment is essentially a permutation of the integers {1, …, N} with each ID being known only to the node to which it is assigned. Our main algorithm is based on a method for anonymously sharing simple data and results in methods for efficient sharing of complex data.

Despite the differences cited, the reader should consult and consider the alternative algorithms mentioned above before implementing the algorithms in this paper. This paper builds an algorithm for sharing simple integer data on top of secure sum. The sharing algorithm will be used at each iteration of the algorithm for anonymous ID assignment (AIDA). This AIDA algorithm, and the variants that we discuss, can require a variable and unbounded number of iterations.
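The sketch below illustrates the iterative slot-drawing idea behind AIDA in plain (non-private) form: unassigned nodes repeatedly draw random slots, and slots chosen by exactly one node become permanent IDs. In the actual algorithm the draws are combined through secure sum, with Newton's identities and Sturm's theorem used to detect collisions, which this toy version omits; the node and slot counts are arbitrary.

```java
import java.util.*;

public class AidaSketch {
    public static void main(String[] args) {
        int n = 6;          // number of nodes
        int slots = 12;     // number of candidate slots per round (the tunable parameter)
        Random rng = new Random();

        int[] id = new int[n];                 // 0 means "not yet assigned"
        int nextId = 1;

        while (nextId <= n) {
            // Every unassigned node draws a random slot; in the real protocol the
            // draws are combined with secure sum so nobody sees who chose what.
            int[] choice = new int[n];
            int[] count = new int[slots + 1];
            for (int i = 0; i < n; i++) {
                if (id[i] == 0) {
                    choice[i] = 1 + rng.nextInt(slots);
                    count[choice[i]]++;
                }
            }
            // Slots picked by exactly one node become permanent IDs,
            // assigned in slot order so the result is a permutation of 1..n.
            for (int s = 1; s <= slots; s++) {
                if (count[s] == 1) {
                    for (int i = 0; i < n; i++) {
                        if (id[i] == 0 && choice[i] == s) {
                            id[i] = nextId++;
                        }
                    }
                }
            }
        }
        System.out.println("anonymous IDs: " + Arrays.toString(id));
    }
}
```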

The work reported in this paper further explores the connection between sharing secrets in an anonymous manner, distributed secure multiparty computation and anonymous ID assignment. The use of the term “anonymous” here differs from its meaning in research dealing with symmetry breaking and leader election in anonymous networks. Our network is not anonymous and the participants are identifiable in that they are known to and can be addressed by the others. Methods for assigning and using sets of pseudonyms have been developed for anonymous communication in mobile networks. The methods developed in these works generally require a trusted administrator, as written, and their end products generally differ from ours in form and/or in statistical properties.

ADVANTAGES OF PROPOSED SYSTEM:

Increasing a parameter in the algorithm will reduce the expected number of rounds. However, our central algorithm requires solving a polynomial with coefficients taken from a finite field of integers modulo a prime. That task restricts the level to which this parameter can practically be raised. We show in detail how to obtain the average number of required rounds, and in the Appendix we detail a method for solving the polynomial which can be distributed among the participants.

ALGORITHMS USED:

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

• Processor: Pentium IV
• Speed: 1.1 GHz
• RAM: 256 MB (min)
• Hard Disk: 20 GB
• Keyboard: Standard Windows Keyboard
• Mouse: Two or Three Button Mouse
• Monitor: SVGA

 

SOFTWARE CONFIGURATION:-

• Operating System: Windows XP
• Programming Language: JAVA
• Java Version: JDK 1.6 & above

REFERENCE:

Larry A. Dunning and Ray Kresman, “Privacy Preserving Data Sharing With Anonymous ID Assignment,” IEEE Transactions on Information Forensics and Security, vol. 8, no. 2, February 2013.

NICE: Network Intrusion Detection and Countermeasure Selection in Virtual Network Systems


ABSTRACT:

Cloud security is one of the most important issues that have attracted a lot of research and development effort in the past few years. Particularly, attackers can explore vulnerabilities of a cloud system and compromise virtual machines to deploy further large-scale Distributed Denial-of-Service (DDoS) attacks. DDoS attacks usually involve early-stage actions such as multi-step exploitation, low-frequency vulnerability scanning, and compromising identified vulnerable virtual machines as zombies, before finally launching DDoS attacks through the compromised zombies. Within the cloud system, especially Infrastructure-as-a-Service (IaaS) clouds, the detection of zombie exploration attacks is extremely difficult. This is because cloud users may install vulnerable applications on their virtual machines. To prevent vulnerable virtual machines from being compromised in the cloud, we propose a multi-phase distributed vulnerability detection, measurement, and countermeasure selection mechanism called NICE, which is built on attack graph based analytical models and reconfigurable virtual network-based countermeasures. The proposed framework leverages OpenFlow network programming APIs to build a monitor and control plane over distributed programmable virtual switches in order to significantly improve attack detection and mitigate attack consequences. The system and security evaluations demonstrate the efficiency and effectiveness of the proposed solution.



AIM

The main aim of this project is to prevent vulnerable virtual machines from being compromised in the cloud server using a multi-phase distributed vulnerability detection, measurement, and countermeasure selection mechanism called NICE.

 

SYNOPSIS

Recent studies have shown that users migrating to the cloud consider security as the most important factor. A recent Cloud Security Alliance (CSA) survey shows that among all security issues, abuse and nefarious use of cloud computing is considered the top security threat, in which attackers can exploit vulnerabilities in clouds and utilize cloud system resources to deploy attacks. In traditional data centers, where system administrators have full control over the host machines, vulnerabilities can be detected and patched by the system administrator in a centralized manner. However, patching known security holes in cloud data centers, where cloud users usually have the privilege to control software installed on their managed VMs, may not work effectively and can violate the Service Level Agreement (SLA). Furthermore, cloud users can install vulnerable software on their VMs, which essentially contributes to loopholes in cloud security. The challenge is to establish an effective vulnerability/attack detection and response system for accurately identifying attacks and minimizing the impact of security breaches on cloud users.

In a cloud system where the infrastructure is shared by potentially millions of users, abuse and nefarious use of the shared infrastructure makes it easier for attackers to exploit vulnerabilities of the cloud and use its resources to deploy attacks in more efficient ways. Such attacks are more effective in the cloud environment since cloud users usually share computing resources, e.g., being connected through the same switch and sharing the same data storage and file systems, even with potential attackers.

 

EXISTING SYSTEM:

 

Cloud users can install vulnerable software on their VMs, which essentially contributes to loopholes in cloud security. The challenge is to establish an effective vulnerability/attack detection and response system for accurately identifying attacks and minimizing the impact of security breaches on cloud users. In a cloud system where the infrastructure is shared by potentially millions of users, abuse and nefarious use of the shared infrastructure makes it easier for attackers to exploit vulnerabilities of the cloud and use its resources to deploy attacks in more efficient ways. Such attacks are more effective in the cloud environment since cloud users usually share computing resources, e.g., being connected through the same switch and sharing the same data storage and file systems, even with potential attackers. The similar setup of VMs in the cloud, e.g., virtualization techniques, VM OS, installed vulnerable software, networking, etc., attracts attackers to compromise multiple VMs.

 

 

DISADVANTAGES OF EXISTING SYSTEM:

 

1. No detection and prevention framework in a virtual networking environment.

2. Inaccurate attack detection.

 

 

PROPOSED SYSTEM:

 

In this article, we propose NICE (Network Intrusion detection and Countermeasure selection in virtual network systems) to establish a defense-in-depth intrusion detection framework. For better attack detection, NICE incorporates attack graph analytical procedures into the intrusion detection processes. We must note that the design of NICE does not intend to improve any of the existing intrusion detection algorithms; indeed, NICE employs a reconfigurable virtual networking approach to detect and counter the attempts to compromise VMs, thus preventing zombie VMs.

 

ADVANTAGES OF PROPOSED SYSTEM:

The contributions of NICE are presented as follows:

 

• We devise NICE, a new multi-phase distributed network intrusion detection and prevention framework in a virtual networking environment that captures and inspects suspicious cloud traffic without interrupting users’ applications and cloud services.

• NICE incorporates a software switching solution to quarantine and inspect suspicious VMs for further investigation and protection. Through programmable network approaches, NICE can improve the attack detection probability and improve the resiliency to VM exploitation attacks without interrupting existing normal cloud services.

• NICE employs a novel attack graph approach for attack detection and prevention by correlating attack behavior and also suggests effective countermeasures.

• NICE optimizes the implementation on cloud servers to minimize resource consumption. Our study shows that NICE consumes less computational overhead compared to proxy-based network intrusion detection solutions.

 

SYSTEM ARCHITECTURE:


 ALGORITHM USED:

 

• Alert Correlation Algorithm
• Countermeasure Selection Algorithm

 

MODULES:

  • NICE-A
  • VM Profiling
  • Attack Analyzer
  • Network Controller


MODULES DESCRIPTION:

 

NICE-A

The NICE-A is a Network-based Intrusion Detection System (NIDS) agent installed in each cloud server. It scans the traffic going through the bridges that control all the traffic among VMs and in/out of the physical cloud servers. It sniffs a mirroring port on each virtual bridge in the Open vSwitch. Each bridge forms an isolated subnet in the virtual network and connects to all related VMs. The traffic generated from the VMs on the mirrored software bridge will be mirrored to a specific port on a specific bridge using SPAN, RSPAN, or ERSPAN methods. It is more efficient to scan the traffic in the cloud server since all traffic in the cloud server needs to go through it; however, our design is independent of the installed VMs. The false alarm rate can be reduced through our architecture design.

 

VM Profiling

 

Virtual machines in the cloud can be profiled to get precise information about their state, running services, open ports, etc. One major factor that counts towards a VM profile is its connectivity with other VMs. Also required is knowledge of the services running on a VM so as to verify the authenticity of alerts pertaining to that VM. An attacker can use a port scanning program to perform an intense examination of the network to look for open ports on any VM. So information about any open ports on a VM and the history of opened ports plays a significant role in determining how vulnerable the VM is. All these factors combined form the VM profile. VM profiles are maintained in a database and contain comprehensive information about vulnerabilities, alerts, and traffic.

Attack Analyzer

 

The major functions of the NICE system are performed by the attack analyzer, which includes procedures such as attack graph construction and update, alert correlation, and countermeasure selection. The process of constructing and utilizing the Scenario Attack Graph (SAG) consists of three phases: information gathering, attack graph construction, and potential exploit path analysis. With this information, attack paths can be modeled using the SAG. The attack analyzer also handles alert correlation and analysis operations. This component has two major functions: (1) it constructs the Alert Correlation Graph (ACG), and (2) it provides threat information and appropriate countermeasures to the network controller for virtual network reconfiguration. The NICE attack graph is constructed based on the following information: cloud system information, virtual network topology and configuration information, and vulnerability information.
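As a simplified illustration of alert correlation, the sketch below links alerts that fall within a common time window into a small correlation graph. It is not the paper's ACG construction, which matches alerts against attack-graph steps; the alert data and window length are invented for the example.

```java
import java.util.*;

public class AlertCorrelationSketch {
    static class Alert {
        final String vm;       // VM the alert refers to
        final long time;       // detection time in seconds
        Alert(String vm, long time) { this.vm = vm; this.time = time; }
    }

    public static void main(String[] args) {
        List<Alert> alerts = Arrays.asList(
                new Alert("vm-a", 100), new Alert("vm-a", 130),
                new Alert("vm-b", 135), new Alert("vm-c", 900));
        long window = 60;   // correlate alerts observed within 60 seconds

        // Adjacency list of the correlation graph: an edge links two alerts
        // that are close in time (a stand-in for matching attack-graph steps).
        Map<Integer, List<Integer>> graph = new HashMap<>();
        for (int i = 0; i < alerts.size(); i++) {
            graph.put(i, new ArrayList<>());
        }
        for (int i = 0; i < alerts.size(); i++) {
            for (int j = i + 1; j < alerts.size(); j++) {
                if (Math.abs(alerts.get(i).time - alerts.get(j).time) <= window) {
                    graph.get(i).add(j);
                    graph.get(j).add(i);
                }
            }
        }
        System.out.println("correlation edges: " + graph);
    }
}
```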

 

Network Controller

 

The network controller is a key component that supports the programmable networking capability to realize virtual network reconfiguration. In NICE, we integrate the control functions for both OVS and OFS into the network controller, which allows the cloud system to set security/filtering rules in an integrated and comprehensive manner. The network controller is responsible for collecting network information of the current OpenFlow network and provides input to the attack analyzer to construct attack graphs. In NICE, the network controller also consults with the attack analyzer for flow access control by setting up the filtering rules on the corresponding OVS and OFS. The network controller is also responsible for applying the countermeasures from the attack analyzer. Based on the VM Security Index and the severity of an alert, countermeasures are selected by NICE and executed by the network controller.

 

 

 

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

 

• Processor: Pentium IV
• Speed: 1.1 GHz
• RAM: 256 MB (min)
• Hard Disk: 20 GB
• Keyboard: Standard Windows Keyboard
• Mouse: Two or Three Button Mouse
• Monitor: SVGA

 

SOFTWARE CONFIGURATION:-

 

• Operating System: Windows XP
• Programming Language: JAVA/J2EE
• Java Version: JDK 1.6 & above

 

REFERENCE:

Chun-Jen Chung, Pankaj Khatkar, Tianyi Xing, Jeongkeun Lee, and Dijiang Huang, “NICE: Network Intrusion Detection and Countermeasure Selection in Virtual Network Systems,” IEEE Transactions on Dependable and Secure Computing, 2013.

Modeling the Pairwise Key Predistribution Scheme in the Presence of Unreliable Links


ABSTRACT:

We investigate the secure connectivity of wireless sensor networks under the random pairwise key predistribution scheme of Chan, Perrig, and Song. Unlike recent work carried out under the assumption of full visibility, here we assume a (simplified) communication model where unreliable wireless links are represented as independent on/off channels. We present conditions on how to scale the model parameters so that the network 1) has no secure node that is isolated and 2) is securely connected, both with high probability, when the number of sensor nodes becomes large. The results are given in the form of zero-one laws, and exhibit significant differences with corresponding results in the full-visibility case. Through simulations, these zero-one laws are shown to also hold under a more realistic communication model, namely the disk model.


EXISTING SYSTEM:

Many security schemes developed for general network environments do not take into account the unique features of WSNs: public key cryptography is not computationally feasible because of the severe limitations imposed on the physical memory and power consumption of the individual sensors, and traditional key exchange and distribution protocols are based on trusting third parties, which makes them inadequate for large-scale WSNs whose topologies are unknown prior to deployment. Random key predistribution schemes were introduced to address some of these difficulties. The idea of randomly assigning secure keys to sensor nodes prior to network deployment was first introduced by Eschenauer and Gligor. The approach we use here considers random graph models naturally induced by a given scheme, and then develops the scaling laws corresponding to desirable network properties, e.g., absence of secure nodes that are isolated, secure connectivity, etc. This is done with the aim of deriving guidelines to dimension the scheme, namely to adjust its parameters so that these properties occur.

DISADVANTAGES OF EXISTING SYSTEM:

To be sure, the full-visibility assumption does away with the wireless nature of the communication medium supporting WSNs. In return, this simplification makes it possible to focus on how randomization in the key assignments alone affects the establishment of a secure network in the best of circumstances, i.e., when there are no link failures. A common criticism of this line of work is that by disregarding the unreliability of the wireless links, the resulting dimensioning guidelines are likely to be too optimistic: In practice, nodes will have fewer neighbors since some of the communication links may be impaired. As a result, the desired connectivity properties may not be achieved if dimensioning is done according to results derived under full visibility.

PROPOSED SYSTEM:

In this paper, in an attempt to go beyond full visibility, we revisit the pairwise key predistribution scheme of Chan et al. under more realistic assumptions that account for the possibility that communication links between nodes may not be available. This could occur due to the presence of physical barriers between nodes or because of harsh environmental conditions severely impairing transmission. To study such situations, we introduce a simple communication model where channels are mutually independent and are either on or off. An overall system model is then constructed by intersecting the random graph model of the pairwise key predistribution scheme (under full visibility) with this on/off communication model.
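A small simulation sketch of the system model described above, under simplifying assumptions: each node is paired with k random partners (the pairwise key graph), each wireless link is independently on with probability p, and a secure edge exists only when both conditions hold. The parameter values are arbitrary; the code merely counts isolated secure nodes, one of the properties the zero-one laws address.

```java
import java.util.*;

public class KeyGraphSketch {
    public static void main(String[] args) {
        int n = 500;        // number of sensor nodes
        int k = 4;          // pairwise partners per node in the key scheme
        double p = 0.6;     // probability that a wireless link is on
        Random rng = new Random();

        // Independent on/off channels between every pair of nodes.
        boolean[][] channelOn = new boolean[n][n];
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                channelOn[i][j] = channelOn[j][i] = rng.nextDouble() < p;

        // Pairwise key predistribution: each node shares a unique key
        // with k randomly chosen partners.
        boolean[][] secureEdge = new boolean[n][n];
        for (int i = 0; i < n; i++) {
            Set<Integer> partners = new HashSet<>();
            while (partners.size() < k) {
                int j = rng.nextInt(n);
                if (j != i) partners.add(j);
            }
            // A secure edge needs both a shared key and an "on" channel.
            for (int j : partners)
                if (channelOn[i][j]) secureEdge[i][j] = secureEdge[j][i] = true;
        }

        // Count secure nodes that end up isolated.
        int isolated = 0;
        for (int i = 0; i < n; i++) {
            boolean hasNeighbor = false;
            for (int j = 0; j < n; j++)
                if (secureEdge[i][j]) { hasNeighbor = true; break; }
            if (!hasNeighbor) isolated++;
        }
        System.out.println("isolated secure nodes: " + isolated);
    }
}
```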

ADVANTAGES OF PROPOSED SYSTEM:

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

• Processor: Pentium IV
• Speed: 1.1 GHz
• RAM: 256 MB (min)
• Hard Disk: 20 GB
• Keyboard: Standard Windows Keyboard
• Mouse: Two or Three Button Mouse
• Monitor: SVGA

 

SOFTWARE CONFIGURATION:-

• Operating System: Windows XP
• Programming Language: JAVA
• Java Version: JDK 1.6 & above

 

REFERENCE:

Osman Yağan and Armand M. Makowski, “Modeling the Pairwise Key Predistribution Scheme in the Presence of Unreliable Links,” IEEE Transactions on Information Theory, vol. 59, no. 3, March 2013.

Identity-Based Secure Distributed Data Storage Schemes


ABSTRACT:

Secure distributed data storage can shift the burden of maintaining a large number of files from the owner to proxy servers. Proxy servers can convert encrypted files for the owner to encrypted files for the receiver without the necessity of knowing the content of the original files. In practice, the original files will be removed by the owner for the sake of space efficiency. Hence, the issues of confidentiality and integrity of the outsourced data must be addressed carefully. In this paper, we propose two identity-based secure distributed data storage (IBSDDS) schemes. Our schemes capture the following properties: (1) the file owner can decide the access permission independently, without the help of the private key generator (PKG); (2) for one query, a receiver can only access one file, instead of all files of the owner; (3) our schemes are secure against collusion attacks, namely even if the receiver can compromise the proxy servers, he cannot obtain the owner’s secret key. Although the first scheme is only secure against chosen plaintext attacks (CPA), the second scheme is secure against chosen ciphertext attacks (CCA). To the best of our knowledge, these are the first IBSDDS schemes where an access permission is made by the owner for an exact file and collusion attacks can be resisted in the standard model.


EXISTING SYSTEM:

Cloud computing provides users with a convenient mechanism to manage their personal files with the notion called database-as-a-service (DAS). In DAS schemes, a user can outsource his encrypted files to untrusted proxy servers. Proxy servers can perform some functions on the outsourced ciphertexts without knowing anything about the original files. Unfortunately, this technique has not been employed extensively. The main reason lies in that users are especially concerned about the confidentiality, integrity, and query of the outsourced files, as cloud computing is a lot more complicated than local data storage systems and the cloud is managed by an untrusted third party. After outsourcing the files to proxy servers, the user will remove them from his local machine. Therefore, how to guarantee that the outsourced files are not accessed by unauthorized users and not modified by proxy servers is an important problem that has been considered in the data storage research community. Furthermore, how to guarantee that an authorized user can query the outsourced files from proxy servers is another concern, as the proxy server only maintains the outsourced ciphertexts. Consequently, research around these topics has grown significantly.

DISADVANTAGES OF EXISTING SYSTEM:

  • Users are especially concerned about the confidentiality, integrity, and query of the outsourced files, as cloud computing is a lot more complicated than local data storage systems and the cloud is managed by an untrusted third party.
  • Guaranteeing that the outsourced files are not accessed by unauthorized users and not modified by proxy servers is an important problem that has been considered in the data storage research community.

PROPOSED SYSTEM:

In this paper, we propose two identity-based secure distributed data storage (IBSDDS) schemes in standard model where, for one query, the receiver can only access one of the owner’s files, instead of all files. In other words, an access permission (re-encryption key) is bound not only to the identity of the receiver but also the file. The access permission can be decided by the owner, instead of the trusted party (PKG). Furthermore, our schemes are secure against the collusion attacks.
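As a policy-level illustration of the access-control property described above, the sketch below models an owner-issued permission that is bound to both a receiver identity and a single file, so one grant opens exactly one file. This deliberately ignores the cryptography (the identity-based encryption and re-encryption keys of the actual schemes); all identifiers are hypothetical.

```java
import java.util.*;

public class AccessPermissionSketch {
    // Permission key: which receiver may access which single file.
    // In the actual IBSDDS schemes this corresponds to a re-encryption key
    // generated by the owner; here it is modeled as a plain lookup table.
    private final Set<String> permissions = new HashSet<>();

    public void grant(String receiverId, String fileId) {
        permissions.add(receiverId + "->" + fileId);
    }

    public boolean mayAccess(String receiverId, String fileId) {
        return permissions.contains(receiverId + "->" + fileId);
    }

    public static void main(String[] args) {
        AccessPermissionSketch acl = new AccessPermissionSketch();
        acl.grant("alice@example.org", "file-007");

        // One permission opens exactly one file, not the owner's whole store.
        System.out.println(acl.mayAccess("alice@example.org", "file-007")); // true
        System.out.println(acl.mayAccess("alice@example.org", "file-008")); // false
    }
}
```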

ADVANTAGES OF PROPOSED SYSTEM:

  • The schemes offer two levels of security: the first scheme is CPA secure, while the second scheme achieves CCA security.
  • To the best of our knowledge, these are the first IBSDDS schemes where an access permission is made by the owner for an exact file and collusion attacks can be resisted in the standard model.
  • To achieve stronger security and implement file-based access control, the owner must be online to authenticate requesters and also to generate access permissions for them. Therefore, the owner in our schemes needs to do more computation than in PRE schemes. Although PRE schemes can provide similar functionality to our schemes when the owner has only one file, they are not flexible and practical.

SYSTEM ARCHITECTURE:

ALGORITHMS USED:

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

• Processor: Pentium IV
• Speed: 1.1 GHz
• RAM: 256 MB (min)
• Hard Disk: 20 GB
• Keyboard: Standard Windows Keyboard
• Mouse: Two or Three Button Mouse
• Monitor: SVGA

 

SOFTWARE CONFIGURATION:-

• Operating System: Windows XP
• Programming Language: JAVA
• Java Version: JDK 1.6 & above

 

REFERENCE:

Jinguang Han, Willy Susilo, and Yi Mu, “Identity-Based Secure Distributed Data Storage Schemes,” IEEE Transactions on Computers, 2013.

EAACK—A Secure Intrusion-Detection System for MANETs


ABSTRACT:

The migration from wired networks to wireless networks has been a global trend in the past few decades. The mobility and scalability brought by wireless networks have made them applicable in many scenarios. Among all the contemporary wireless networks, the Mobile Ad hoc NETwork (MANET) is one of the most important and unique applications. In contrast to traditional network architecture, MANET does not require a fixed network infrastructure; every single node works as both a transmitter and a receiver. Nodes communicate directly with each other when they are both within the same communication range. Otherwise, they rely on their neighbors to relay messages. The self-configuring ability of nodes in MANET made it popular among critical mission applications like military use or emergency recovery. However, the open medium and wide distribution of nodes make MANET vulnerable to malicious attackers. In this case, it is crucial to develop efficient intrusion-detection mechanisms to protect MANET from attacks. With the improvements of the technology and cuts in hardware costs, we are witnessing a current trend of expanding MANETs into industrial applications. To adjust to such a trend, we strongly believe that it is vital to address the potential security issues. In this paper, we propose and implement a new intrusion-detection system named Enhanced Adaptive ACKnowledgment (EAACK) specially designed for MANETs. Compared to contemporary approaches, EAACK demonstrates higher malicious-behavior-detection rates in certain circumstances while not greatly affecting the network performance.


EXISTING SYSTEM:

By definition, a Mobile Ad hoc NETwork (MANET) is a collection of mobile nodes equipped with both a wireless transmitter and a receiver that communicate with each other via bidirectional wireless links, either directly or indirectly. Unfortunately, the open medium and remote distribution of MANET make it vulnerable to various types of attacks. For example, due to the nodes’ lack of physical protection, malicious attackers can easily capture and compromise nodes to achieve attacks. In particular, considering the fact that most routing protocols in MANETs assume that every node in the network behaves cooperatively with other nodes and is presumably not malicious, attackers can easily compromise MANETs by inserting malicious or noncooperative nodes into the network. Furthermore, because of MANET’s distributed architecture and changing topology, a traditional centralized monitoring technique is no longer feasible in MANETs. In such cases, it is crucial to develop an intrusion-detection system (IDS) specially designed for MANETs.

DISADVANTAGES OF EXISTING SYSTEM:

The Watchdog scheme fails to detect malicious misbehaviors in the presence of the following: 1) ambiguous collisions; 2) receiver collisions; 3) limited transmission power; 4) false misbehavior reports; 5) collusion; and 6) partial dropping.

The TWOACK scheme successfully solves the receiver collision and limited transmission power problems posed by Watchdog. However, the acknowledgment process required in every packet transmission adds a significant amount of unwanted network overhead. Due to the limited battery power of MANET nodes, such a redundant transmission process can easily degrade the life span of the entire network.

The concept of adopting a hybrid scheme in AACK greatly reduces the network overhead, but both TWOACK and AACK still suffer from the problem that they fail to detect malicious nodes in the presence of false misbehavior reports and forged acknowledgment packets.

PROPOSED SYSTEM:

In fact, many of the existing IDSs in MANETs adopt an acknowledgment-based scheme, including TWOACK and AACK. The functions of such detection schemes all largely depend on the acknowledgment packets. Hence, it is crucial to guarantee that the acknowledgment packets are valid and authentic. To address this concern, we adopt a digital signature in our proposed scheme named Enhanced AACK (EAACK).
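A minimal sketch of the digitally signed acknowledgment idea, assuming RSA signatures (the EAACK paper considers digital signatures such as DSA and RSA): the acknowledging node signs the ACK payload and the source verifies it before accepting the acknowledgment. The packet format and key size here are illustrative only.

```java
import java.security.*;

public class SignedAckSketch {
    public static void main(String[] args) throws Exception {
        // Key pair of the node that generates the acknowledgment packet.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair ackNode = gen.generateKeyPair();

        // Acknowledgment payload: which packet is acknowledged, and by whom.
        byte[] ackPacket = "ACK|packet-17|node-C|node-A".getBytes("UTF-8");

        // The acknowledging node signs the ACK before sending it back.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(ackNode.getPrivate());
        signer.update(ackPacket);
        byte[] signature = signer.sign();

        // The source verifies the signature before trusting the acknowledgment,
        // which blocks forged ACK packets and false misbehavior reports.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(ackNode.getPublic());
        verifier.update(ackPacket);
        System.out.println("acknowledgment authentic: " + verifier.verify(signature));
    }
}
```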

ADVANTAGES OF PROPOSED SYSTEM:

Our proposed approach EAACK is designed to tackle three of the six weaknesses of Watchdog scheme, namely, false misbehavior, limited transmission power, and receiver collision.

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

• Processor: Pentium IV
• Speed: 1.1 GHz
• RAM: 256 MB (min)
• Hard Disk: 20 GB
• Keyboard: Standard Windows Keyboard
• Mouse: Two or Three Button Mouse
• Monitor: SVGA

SOFTWARE CONFIGURATION:-

• Operating System: Windows XP
• Programming Language: JAVA
• Java Version: JDK 1.6 & above

REFERENCE:

Elhadi M. Shakshuki, Nan Kang, and Tarek R. Sheltami, “EAACK—A Secure Intrusion-Detection System for MANETs,” IEEE Transactions on Industrial Electronics, vol. 60, no. 3, March 2013.

Tweet Analysis for Real-Time Event Detection and Earthquake Reporting System Development


ABSTRACT:

Twitter has received much attention recently. An important characteristic of Twitter is its real-time nature. We investigate the real-time interaction of events such as earthquakes in Twitter and propose an algorithm to monitor tweets and to detect a target event. To detect a target event, we devise a classifier of tweets based on features such as the keywords in a tweet, the number of words, and their context. Subsequently, we produce a probabilistic spatiotemporal model for the target event that can find the center of the event location. We regard each Twitter user as a sensor and apply particle filtering, which is widely used for location estimation. The particle filter works better than other comparable methods for estimating the locations of target events. As an application, we develop an earthquake reporting system for use in Japan. Because of the numerous earthquakes and the large number of Twitter users throughout the country, we can detect an earthquake with high probability (93 percent of earthquakes of Japan Meteorological Agency (JMA) seismic intensity scale 3 or more are detected) merely by monitoring tweets. Our system detects earthquakes promptly, and notification is delivered much faster than JMA broadcast announcements.


EXISTING SYSTEM:

Twitter is categorized as a microblogging service. Microblogging is a form of blogging that enables users to send brief text updates or micro media such as photographs or audio clips. Microblogging services other than Twitter include Tumblr, Plurk, Jaiku, identi.ca, and others. Users can know how other users are doing and often what they are thinking about now; users repeatedly return to the site to check what other people are doing.

DISADVANTAGES OF EXISTING SYSTEM:

  1. Each Twitter user is regarded as a sensor and each tweet as sensory information. These virtual sensors, which we designate as social sensors, are of a huge variety and have various characteristics: some sensors are very active and others are not.
  2. A sensor might be inoperable or malfunctioning sometimes, as when a user is sleeping, or busy doing something else.
  3. Social sensors are very noisy compared to ordinary physical sensors. Regarding each Twitter user as a sensor, the event-detection problem can be reduced to one of object detection and location estimation in a ubiquitous/pervasive computing environment in which we have numerous location sensors: a user has a mobile device or an active badge in an environment where sensors are placed.

PROPOSED SYSTEM:

 

This paper presents an investigation of the real-time nature of Twitter that is designed to ascertain whether we can extract valid information from it. We propose an event notification system that monitors tweets and delivers notification promptly, using knowledge from the investigation. In this research, we take three steps: first, we crawl numerous tweets related to target events; second, we propose probabilistic models to extract events from those tweets and estimate the locations of events; finally, we develop an earthquake reporting system that extracts earthquakes from Twitter and sends a message to registered users.
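To illustrate the kind of tweet features mentioned above (query-word presence, word count, and context words), here is a small Java sketch; the real system feeds such features into a trained classifier, which is not shown, and the example tweet and query word are invented.

```java
import java.util.*;

public class TweetFeatureSketch {
    // Features of the kind described above: query-word presence,
    // tweet length in words, and the words surrounding the query word.
    static Map<String, Object> extractFeatures(String tweet, String queryWord) {
        List<String> words = Arrays.asList(tweet.toLowerCase().split("\\s+"));
        Map<String, Object> features = new LinkedHashMap<>();
        features.put("containsQueryWord", words.contains(queryWord));
        features.put("wordCount", words.size());

        List<String> context = new ArrayList<>();
        int pos = words.indexOf(queryWord);
        if (pos >= 0) {
            if (pos > 0) context.add(words.get(pos - 1));
            if (pos < words.size() - 1) context.add(words.get(pos + 1));
        }
        features.put("contextWords", context);
        return features;
    }

    public static void main(String[] args) {
        System.out.println(extractFeatures("wow a strong earthquake right now in tokyo", "earthquake"));
    }
}
```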

ADVANTAGES OF PROPOSED SYSTEM:

The advantages of this paper are summarized as follows:

• The paper provides an example of the integration of semantic analysis and the real-time nature of Twitter, and presents potential uses for Twitter data.
• For earthquake prediction and early warning, many studies have been made in the seismology field. This paper presents an innovative social approach that has not been reported before in the literature.

ARCHITECTURE:

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

• Processor: Pentium IV
• Speed: 1.1 GHz
• RAM: 256 MB (min)
• Hard Disk: 20 GB
• Keyboard: Standard Windows Keyboard
• Mouse: Two or Three Button Mouse
• Monitor: SVGA

 

SOFTWARE CONFIGURATION:-

• Operating System: Windows XP
• Programming Language: JAVA
• Java Version: JDK 1.6 & above

 

REFERENCE:

Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo, “Tweet Analysis for Real-Time Event Detection and Earthquake Reporting System Development,” IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 4, April 2013.

Protecting Sensitive Labels in Social Network Data Anonymization


ABSTRACT:

Privacy is one of the major concerns when publishing or sharing social network data for social science research and business analysis. Recently, researchers have developed privacy models similar to k-anonymity to prevent node reidentification through structure information. However, even when these privacy models are enforced, an attacker may still be able to infer one’s private information if a group of nodes largely share the same sensitive labels (i.e., attributes). In other words, the label-node relationship is not well protected by pure structure anonymization methods. Furthermore, existing approaches, which rely on edge editing or node clustering, may significantly alter key graph properties. In this paper, we define a k-degree-l-diversity anonymity model that considers the protection of structural information as well as sensitive labels of individuals. We further propose a novel anonymization methodology based on adding noise nodes. We develop a new algorithm by adding noise nodes into the original graph with the consideration of introducing the least distortion to graph properties. Most importantly, we provide a rigorous analysis of the theoretical bounds on the number of noise nodes added and their impacts on an important graph property. We conduct extensive experiments to evaluate the effectiveness of the proposed technique.


EXISTING SYSTEM:

Recently, much work has been done on anonymizing tabular microdata. A variety of privacy models as well as anonymization algorithms have been developed (e.g., k-anonymity, l-diversity, t-closeness). In tabular microdata, some of the nonsensitive attributes, called quasi identifiers, can be used to reidentify individuals and their sensitive attributes. When publishing social network data, graph structures are also published with corresponding social relationships. As a result, they may be exploited as a new means to compromise privacy.

DISADVANTAGES OF EXISTING SYSTEM:

  • The edge-editing method sometimes may change the distance properties substantially by connecting two faraway nodes together or deleting the bridge link between two communities.
  • Mining over these data might lead to wrong conclusions about how salaries are distributed in society. Therefore, solely relying on edge editing may not be a good solution to preserve data utility.

PROPOSED SYSTEM:

We propose a novel idea to preserve important graph properties, such as distances between nodes, by adding certain “noise” nodes into a graph. This idea is based on the following key observation.

In our proposed system, the privacy preserving goal is to prevent an attacker from reidentifying a user and finding the fact that a certain user has a specific sensitive value. To achieve this goal, we define a k-degree-l-diversity (KDLD) model for safely publishing a labeled graph, and then develop corresponding graph anonymization algorithms with the least distortion to the properties of the original graph, such as degrees and distances between nodes.
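A minimal sketch of checking the k-degree-l-diversity condition on a published degree sequence with sensitive labels: every group of nodes sharing the same degree must contain at least k nodes and at least l distinct labels. The degrees and labels below are invented, and the paper's actual contribution, the noise-node anonymization algorithm, is not shown.

```java
import java.util.*;

public class KdldCheckSketch {
    // Returns true when every group of nodes sharing the same degree has at
    // least k members and at least l distinct sensitive labels.
    static boolean satisfiesKdld(int[] degree, String[] label, int k, int l) {
        Map<Integer, List<String>> byDegree = new HashMap<>();
        for (int i = 0; i < degree.length; i++) {
            byDegree.computeIfAbsent(degree[i], d -> new ArrayList<>()).add(label[i]);
        }
        for (List<String> group : byDegree.values()) {
            if (group.size() < k) return false;
            if (new HashSet<>(group).size() < l) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        int[] degree   = {2, 2, 2, 3, 3, 3};
        String[] label = {"20k", "30k", "40k", "20k", "30k", "20k"};
        System.out.println(satisfiesKdld(degree, label, 3, 2));   // true
    }
}
```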

ADVANTAGES OF PROPOSED SYSTEM:

• We combine k-degree anonymity with l-diversity to prevent not only the reidentification of individual nodes but also the revelation of a sensitive attribute associated with each node.

• We propose a novel graph construction technique which makes use of noise nodes to preserve utilities of the original graph. Two key properties are considered: 1) add as few noise edges as possible; 2) change the distances between nodes as little as possible.

• We present analytical results to show the relationship between the number of noise nodes added and their impacts on an important graph property.

SYSTEM ARCHITECTURE:

ALGORITHMS USED:

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

• Processor: Pentium IV
• Speed: 1.1 GHz
• RAM: 256 MB (min)
• Hard Disk: 20 GB
• Keyboard: Standard Windows Keyboard
• Mouse: Two or Three Button Mouse
• Monitor: SVGA

 

SOFTWARE CONFIGURATION:-

• Operating System: Windows XP
• Programming Language: JAVA
• Java Version: JDK 1.6 & above

REFERENCE:

Mingxuan Yuan, Lei Chen, Philip S. Yu, and Ting Yu, “Protecting Sensitive Labels in Social Network Data Anonymization,” IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 3, March 2013.

m-Privacy for Collaborative Data Publishing


ABSTRACT:

In this paper, we consider the collaborative data publishing problem for anonymizing horizontally partitioned data at multiple data providers. We consider a new type of “insider attack” by colluding data providers who may use their own data records (a subset of the overall data) to infer the data records contributed by other data providers. The paper addresses this new threat, and makes several contributions. First, we introduce the notion of m-privacy, which guarantees that the anonymized data satisfies a given privacy constraint against any group of up to m colluding data providers. Second, we present heuristic algorithms exploiting the monotonicity of privacy constraints for efficiently checking m-privacy given a group of records. Third, we present a data provider-aware anonymization algorithm with adaptive m-privacy checking strategies to ensure high utility and m-privacy of anonymized data with efficiency. Finally, we propose secure multi-party computation protocols for collaborative data publishing with m-privacy. All protocols are extensively analyzed and their security and efficiency are formally proved. Experiments on real-life datasets suggest that our approach achieves better or comparable utility and efficiency than existing and baseline algorithms while satisfying m-privacy.


EXISTING SYSTEM:

Most work has focused on a single data provider setting and considered the data recipient as an attacker. A large body of literature assumes limited background knowledge of the attacker, and defines privacy using a relaxed adversarial notion by considering specific types of attacks. Representative principles include k-anonymity, l-diversity, and t-closeness. A few recent works have modeled instance-level background knowledge as corruption, and studied perturbation techniques under these syntactic privacy notions.

DISADVANTAGES OF EXISTING SYSTEM:

1. Collaborative data publishing can be considered as a multi-party computation problem, in which multiple providers wish to compute an anonymized view of their data without disclosing any private and sensitive information.

2. The problem of inferring information from anonymized data has been widely studied in a single data provider setting. A data recipient that is an attacker, e.g., P0, attempts to infer additional information about data records using the published data T∗ and background knowledge BK.

PROPOSED SYSTEM:

We consider the collaborative data publishing setting with horizontally partitioned data across multiple data providers, each contributing a subset of records Ti. As a special case, a data provider could be the data owner itself who is contributing its own records. This is a very common scenario in social networking and recommendation systems. Our goal is to publish an anonymized view of the integrated data such that a data recipient including the data providers will not be able to compromise the privacy of the individual records provided by other parties.

ADVANTAGES OF PROPOSED SYSTEM:

 

Compared to our preliminary version, our new contributions extend the above results. First, we adapt privacy verification and anonymization mechanisms to work for m-privacy with respect to any privacy constraint, including nonmonotonic ones. We list all necessary privacy checks and prove that no fewer checks are enough to confirm m-privacy. Second, we propose SMC protocols for secure m-privacy verification and anonymization. For all protocols, we prove their security and complexity, and experimentally confirm their efficiency.
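A naive sketch of the m-privacy check under an example l-diversity constraint: for every coalition of up to m data providers, the records contributed by the coalition are removed and the constraint is re-checked on what remains. The brute-force coalition enumeration is only for illustration; the paper uses adaptive strategies exploiting monotonicity. Provider names and sensitive values are invented.

```java
import java.util.*;

public class MPrivacyCheckSketch {
    static class Record {
        final String provider, sensitiveValue;
        Record(String provider, String sensitiveValue) {
            this.provider = provider; this.sensitiveValue = sensitiveValue;
        }
    }

    // Example privacy constraint: the remaining records must contain at
    // least l distinct sensitive values (simple l-diversity).
    static boolean constraintHolds(List<Record> records, int l) {
        Set<String> values = new HashSet<>();
        for (Record r : records) values.add(r.sensitiveValue);
        return values.size() >= l;
    }

    // m-privacy: the constraint must still hold after removing the records
    // contributed by ANY coalition of up to m providers.
    static boolean isMPrivate(List<Record> group, List<String> providers, int m, int l) {
        for (Set<String> coalition : coalitions(providers, m)) {
            List<Record> remaining = new ArrayList<>();
            for (Record r : group) {
                if (!coalition.contains(r.provider)) remaining.add(r);
            }
            if (!constraintHolds(remaining, l)) return false;
        }
        return true;
    }

    // All provider subsets of size 1..m (fine for a handful of providers).
    static List<Set<String>> coalitions(List<String> providers, int m) {
        List<Set<String>> out = new ArrayList<>();
        int n = providers.size();
        for (int mask = 1; mask < (1 << n); mask++) {
            if (Integer.bitCount(mask) <= m) {
                Set<String> s = new HashSet<>();
                for (int i = 0; i < n; i++) if ((mask & (1 << i)) != 0) s.add(providers.get(i));
                out.add(s);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Record> group = Arrays.asList(
                new Record("P1", "flu"), new Record("P1", "hiv"),
                new Record("P2", "flu"), new Record("P3", "cancer"));
        System.out.println(isMPrivate(group, Arrays.asList("P1", "P2", "P3"), 1, 2));
    }
}
```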

Modules:

  1. Patient Registration
  2. Attacks by External Data Recipient Using Anonymized  Data
  3. Attacks by Data Providers Using Anonymized Data and Their Own Data
  4. Doctor Login
  5. Admin Login

Modules Description

Patient Registration:

In this module, a patient who has to take treatment registers details such as name, age, the disease he/she is affected by, email, etc. These details are maintained in a database by the hospital management. Only doctors can see all of these details; a patient can see only his/her own record.

Based on this paper:

When the data are distributed among multiple data providers or data owners, two main settings are used for anonymization. One approach is for each provider to anonymize the data independently (anonymize-and-aggregate, Figure 1A), which results in potential loss of integrated data utility. A more desirable approach is collaborative data publishing, which anonymizes data from all providers as if they came from one source (aggregate-and-anonymize, Figure 1B), using either a trusted third party (TTP) or Secure Multi-party Computation (SMC) protocols to do the computations.

Attacks by External Data Recipient Using Anonymized Data:

A data recipient, e.g., P0, could be an attacker who attempts to infer additional information about the records using the published data (T∗) and some background knowledge (BK), such as publicly available external data.

Attacks by Data Providers Using Anonymized Data and Their Own Data:

 Each data provider, such as P1 in Figure 1, can also use anonymized data T∗ and his own data (T1) to infer additional information about other records. Compared to the attack by the external recipient in the first attack scenario, each provider has additional data knowledge of their own records, which can help with the attack. This issue can be further worsened when multiple data providers collude with each other.

Figure 1

Figure 2

Doctor Login:

In this module, the doctor can see all of the patients’ details and obtains the background knowledge (BK); he can view the horizontally partitioned data of the distributed database of the group of hospitals and can see how many patients are affected, without learning the individual records of the patients or sensitive information about the individuals.

Admin Login:

In this module, the admin acts as the Trusted Third Party (TTP). He can see all individual records and their sensitive information across the overall distributed hospital database. Anonymization is done by the admin: he/she collects information from the various hospitals, groups it together, and produces the anonymized data.

SYSTEM ARCHITECTURE:

ALGORITHM USED

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-

• Processor: Pentium IV
• Speed: 1.1 GHz
• RAM: 256 MB (min)
• Hard Disk: 20 GB
• Keyboard: Standard Windows Keyboard
• Mouse: Two or Three Button Mouse
• Monitor: SVGA

 

SOFTWARE CONFIGURATION:-

• Operating System: Windows XP
• Programming Language: JAVA
• Java Version: JDK 1.6 & above

REFERENCE:

Slawomir Goryczka, Li Xiong, and Benjamin C. M. Fung, “m-Privacy for Collaborative Data Publishing,” IEEE Transactions on Knowledge and Data Engineering, 2013.