Delegated Authorization Framework for EHR Services using Attribute Based Encryption

ABSTRACT:

Medical organizations find it challenging to adopt cloud-based Electronic Health Records (EHR) services due to the risk of data breaches and the resulting compromise of patient data. Existing authorization models follow a patient-centric approach for EHR management, where the responsibility of authorizing data access is handled at the patient's end. This creates a significant overhead for the patient, who must authorize every access of their health record. This is not practical given that multiple personnel are typically involved in providing care and that the patient may not always be in a state to provide this authorization. Hence there is a need for a proper authorization delegation mechanism for safe, secure, and easy-to-use cloud-based EHR service management. We present a novel, centralized, attribute-based authorization mechanism that uses Attribute Based Encryption (ABE) and allows for delegated secure access to patient records. This mechanism transfers the service management overhead from the patient to the medical organization and allows easy delegation of cloud-based EHR access authority to medical providers.
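
As a minimal illustration of the access-structure idea behind ABE (the policy logic only, not the cryptography), the Java sketch below evaluates a CP-ABE-style policy against a user's attribute set; the attribute names and the policy itself are invented for the example. In a real ABE scheme, a user whose attributes do not satisfy the policy simply cannot decrypt.

import java.util.Set;

/**
 * Illustrative sketch only: evaluates a CP-ABE style access policy
 * against a user's attribute set. A real ABE scheme enforces this
 * inside the decryption algorithm; no cryptography is performed here.
 */
public class AbePolicySketch {

    interface Policy { boolean satisfiedBy(Set<String> attrs); }

    static Policy leaf(String attr)       { return attrs -> attrs.contains(attr); }
    static Policy and(Policy a, Policy b) { return attrs -> a.satisfiedBy(attrs) && b.satisfiedBy(attrs); }
    static Policy or(Policy a, Policy b)  { return attrs -> a.satisfiedBy(attrs) || b.satisfiedBy(attrs); }

    public static void main(String[] args) {
        // Policy attached to a patient's record by the medical organization.
        Policy recordPolicy = and(leaf("role:doctor"),
                                  or(leaf("dept:cardiology"), leaf("oncall:true")));

        Set<String> attendingDoctor = Set.of("role:doctor", "dept:cardiology");
        Set<String> visitingNurse   = Set.of("role:nurse", "dept:cardiology");

        System.out.println(recordPolicy.satisfiedBy(attendingDoctor)); // true  -> could decrypt
        System.out.println(recordPolicy.satisfiedBy(visitingNurse));   // false -> decryption fails
    }
}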

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Intel i3 processor
  • Hard Disk : 500 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 2 GB

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows 7
  • Coding Language : Android, Java
  • Toolkit : Android 2.3 and above
  • IDE : Eclipse/Android Studio

REFERENCE:

Maithilee Joshi, Karuna P. Joshi and Tim Finin, “Delegated Authorization Framework for EHR Services using Attribute Based Encryption”, IEEE Transactions on Services Computing, 2019.

Building and Studying a Password Store that Perfectly Hides Passwords from Itself

ABSTRACT:

We introduce a novel approach to password management, called SPHINX, which remains secure even when the password manager itself has been compromised. In SPHINX, the information stored on the device is information-theoretically independent of the user’s master password. Moreover, an attacker with full control of the device, even at the time the user interacts with it, learns nothing about the master password: the password is never entered into the device in plaintext form or in any other way that may leak information about it. Unlike existing managers, SPHINX produces strictly high-entropy passwords and makes it compulsory for users to register these passwords with the web services, which defeats online guessing attacks and offline dictionary attacks upon service compromise. We present the design, implementation, and performance evaluation of SPHINX, offering prototype browser plugins, smartphone apps, and transparent device-client communication. We further provide a comparative analytical evaluation of SPHINX against other password managers, based on a formal framework consisting of security, usability, and deployability metrics.
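
To make the "device learns nothing" property concrete, here is a hedged Java sketch of one blinded-evaluation (oblivious PRF) round in the spirit of SPHINX: the client blinds a hash of the master password before sending it to the device, the device exponentiates with its key, and the client unblinds the result. The group, parameter sizes, and hash-to-group step are toy choices for illustration, not the protocol's actual instantiation.

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;

/**
 * Minimal sketch of a blinded-evaluation (oblivious PRF) round: the
 * device holds key k but only ever sees a blinded value, so it learns
 * nothing about the master password. Toy 512-bit parameters for
 * illustration only; a real deployment uses proper elliptic-curve
 * groups and hashing-to-group.
 */
public class OprfSketch {
    static final BigInteger P = BigInteger.probablePrime(512, new SecureRandom());

    static BigInteger hashToGroup(String s) throws Exception {
        byte[] d = MessageDigest.getInstance("SHA-256").digest(s.getBytes(StandardCharsets.UTF_8));
        return new BigInteger(1, d).mod(P);
    }

    public static void main(String[] args) throws Exception {
        SecureRandom rnd = new SecureRandom();
        BigInteger k = new BigInteger(256, rnd);          // device-side OPRF key

        // Client: blind H(pwd) with a random r coprime to p-1.
        BigInteger h = hashToGroup("correct horse battery");
        BigInteger r;
        do { r = new BigInteger(256, rnd); }
        while (!r.gcd(P.subtract(BigInteger.ONE)).equals(BigInteger.ONE));
        BigInteger blinded = h.modPow(r, P);              // sent to the device

        // Device: exponentiates the blinded value; sees nothing about pwd.
        BigInteger response = blinded.modPow(k, P);

        // Client: unblind to recover h^k, the high-entropy per-site secret.
        BigInteger rInv = r.modInverse(P.subtract(BigInteger.ONE));
        BigInteger rwd  = response.modPow(rInv, P);

        System.out.println(rwd.equals(h.modPow(k, P)));   // true
    }
}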

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Intel i3 processor
  • Hard Disk : 500 GB
  • Monitor : 15" LED
  • Input Devices : Keyboard, Mouse
  • RAM : 2 GB

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows 7
  • Coding Language : Android, Java
  • Toolkit : Android 2.3 and above
  • IDE : Eclipse/Android Studio

REFERENCE:

Maliheh Shirvanian, Nitesh Saxena, Stanislaw Jarecki, and Hugo Krawczyk, “Building and Studying a Password Store that Perfectly Hides Passwords from Itself”, IEEE Transactions on Dependable and Secure Computing, 2019.

Tram Location and Route Navigation System using Smartphone

ABSTRACT

It is very important to reduce passenger waiting time at tram stops when tram timetables are unknown to passengers. To accomplish that, we propose a tram location and route navigation system using smartphones. The system can easily retrieve information about trams’ locations by GPS, and also provides users with the shortest walking route to the nearest tram station. To evaluate the usefulness of this system, we performed a demonstration experiment on the actual “Centram” LRT system in the center of Toyama city.
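
As a small sketch of the "shortest walking route to the nearest tram station" step, the Java snippet below computes great-circle (haversine) distances from a GPS fix to a list of stops and picks the closest; the coordinates are made-up placeholders, not real Centram stops, and real routing would also consider the street network.

/**
 * Nearest-stop sketch: great-circle (haversine) distance from the
 * user's GPS fix to each candidate stop. Coordinates are placeholders.
 */
public class NearestStop {
    static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
        double R = 6371.0;                               // mean Earth radius in km
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * R * Math.asin(Math.sqrt(a));
    }

    public static void main(String[] args) {
        double userLat = 36.695, userLon = 137.213;      // user's GPS fix
        double[][] stops = { {36.699, 137.211}, {36.691, 137.220}, {36.701, 137.205} };
        int best = 0;
        double bestKm = Double.MAX_VALUE;
        for (int i = 0; i < stops.length; i++) {
            double d = haversineKm(userLat, userLon, stops[i][0], stops[i][1]);
            if (d < bestKm) { bestKm = d; best = i; }
        }
        System.out.printf("Nearest stop #%d at %.3f km%n", best, bestKm);
    }
}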

EXISTING SYSTEM:

GIS is a more complex mapping technology that is connected to a particular database. Because it is generic, it is a broader term than GPS in its technical sense. Thus, GIS is a computer program or application that is used to view and handle data about geographic locations and the spatial relationships among them. It simply gives the user a framework for obtaining information.

DISADVANTAGES OF EXISTING SYSTEM:

  • Requires merging cartography, statistical analysis, and database technology.
  • Complex mapping technology.
  • No wrong-route indication.
  • No nearest-station indication.

PROPOSED SYSTEM:

In order to reduce waiting time at tram stops, passengers would have to obtain live timetables for any tram stop. To achieve this complicated task, we propose a new tram location and route navigation system using ICT (Information and Communication Technology). The system relays data about the current location of a tram to the smartphone of a tram user.

ADVANTAGES OF PROPOSED SYSTEM:

By using this system, we can expect an improvement in user convenience, especially for tram lines that have long operational intervals.

The system can also be used for a variety of public transportation applications.

MODULES:

  • Administrator Module
  • Passenger Login Module
  • Passenger Registration Module
  • Train Search Module
  • Ticket Reservation Module
  • Train Tracking Module

MODULE DESCRIPTION:

Administrator Login

The whole system is controlled by an administrator, who logs into the system with authentication details such as a username and password. After logging in, the administrator can see the trains currently available to passengers. The train details include train name, departure, destination, seat availability, and running days. The administrator can also add a new train to the database.

Passenger Login

In this module, a user can log into the system by providing their credentials. If a user is new to this application and does not have credentials such as a username and password, he can register as a new member of the system.

Passenger Registration

If a user does not have a username and password to log into the system, he can choose to register as a new member by choosing the register option. He is prompted to give personal and contact information such as name, address, phone number, and email ID, and he can choose his own username and password. If registration succeeds, the user can log into the system with the username and password he/she chose.

Train Search

After successfully logging into the system, a passenger can search the available trains by their requirements, which may include departure, destination, and journey date. The list of available trains is shown to the user, who may then select a train and make a ticket reservation. If no train is available, the user may change the journey date, departure, or destination.
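
A minimal Java sketch of this search step might filter an in-memory train list on the three criteria; the field names and sample data are assumptions for illustration (modern Java syntax is used for brevity, although the project itself targets Android 2.3).

import java.time.LocalDate;
import java.util.List;

/** Illustrative filter for the Train Search module: match trains on
 *  departure, destination and journey date. Field names are assumptions. */
public class TrainSearch {
    record Train(String name, String from, String to, LocalDate date) {}

    static List<Train> search(List<Train> all, String from, String to, LocalDate date) {
        return all.stream()
                  .filter(t -> t.from().equalsIgnoreCase(from)
                            && t.to().equalsIgnoreCase(to)
                            && t.date().equals(date))
                  .toList();
    }

    public static void main(String[] args) {
        List<Train> all = List.of(
            new Train("City Liner", "Toyama", "Centram", LocalDate.of(2024, 5, 1)),
            new Train("Night Tram", "Toyama", "Centram", LocalDate.of(2024, 5, 2)));
        System.out.println(search(all, "Toyama", "Centram", LocalDate.of(2024, 5, 1)));
    }
}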

Ticket Reservation Module

If the journey date, destination, and departure match a train, the passenger can select that train. After selecting it, the user gets the train details and the seat availability in each class; the classes are AC, Sleeper, and Seater. The user can select any class and input the number of seats to reserve; if the selected number of seats is not available, he is prompted to select a number of seats less than or equal to the number available. After selecting the number of seats, the user can make the payment. When he is ready to pay, the reservation details, such as class, number of seats, and total amount, are shown. The user may then confirm or cancel the payment. Only if he confirms the payment is the ticket reserved for that passenger; otherwise, the seats remain open to all.

Train Tracking

The passenger has the option to track trains in real time. A train's physical location is shown on the map at the place where the train is currently travelling. The passenger can select a particular train, and then train details such as the previous station, next station, the date the train started, and the expected time to reach the next station are shown. The route already covered by the train is shown as a solid yellow line, and the route yet to be covered is shown as a dotted yellow line. Trains currently running on time are shown in blue, and trains currently running late are shown in red.

HARDWARE REQUIREMENT

CPU type : Intel Pentium 4

Clock speed : 3.0 GHz

RAM size : 512 MB

Hard disk capacity : 40 GB

Monitor type : 15-inch color monitor

Keyboard type : Internet keyboard

SOFTWARE REQUIREMENT

Operating System : Android

Language : Android SDK 2.3

Back End : SQLite

Documentation : MS Office

REFERENCE:

Kunimitsu Fujita, Masaya Kato, Tatsuya Furukane, Keiji Shibata and Yuukou Horita, “Tram Location and Route Navigation System using Smartphone”, IEEE Conference, 2012.

Network Assisted Mobile Computing with Optimal Uplink Query Processing

ABSTRACT:

Many mobile applications retrieve content from remote servers via user-generated queries. Processing these queries is often needed before the desired content can be identified. Processing the request on the mobile device can quickly sap its limited battery resources. Conversely, processing user queries at remote servers can have slow response times due to communication latency incurred during transmission of the potentially large query. We evaluate a network-assisted mobile computing scenario where mid-network nodes with “leasing” capabilities are deployed by a service provider. Leasing computation power can reduce battery usage on the mobile devices and improve response times. However, borrowing processing power from mid-network nodes comes at a leasing cost, which must be accounted for when deciding where processing should occur. We study the tradeoff between battery usage, processing and transmission latency, and mid-network leasing. We use the dynamic programming framework to solve for the optimal processing policies, which suggest the amount of processing to be done at each mid-network node in order to minimize the processing and communication latency as well as the processing costs. Through numerical studies, we examine the properties of the optimal processing policy and the core tradeoffs in such systems.

EXISTING SYSTEM:

Existing work has identified special properties of the optimal processing policy under various scenarios and examined these properties through numerical studies with example cost functions and systems. Latency, battery usage, and leasing costs have a tightly woven relationship.

Disadvantages:

1) Increasing battery usage will decrease latency and leasing costs, but also limits the lifetime of the mobile device.

2) Conversely, the lifetime of the device can be extended by increasing leasing costs, which will decrease latency and battery usage.

PROPOSED SYSTEM:

A user request originates at the Mobile Station (MS). In order to be completed, the request must be transmitted upstream to a remote Application Server (AS) via a Base Station (BS) and a series of relay nodes. We refer to the node at the first hop as the base station, but emphasize that the links between the BS, relay nodes, and AS may be wired or wireless. A text-to-speech conversion application is used as a representative usage scenario.

Advantages: 

1) If the request processing is done entirely at the MS, the limited battery power can be drained.

2) If the processing is done entirely at the AS, communication latency can be high due to the limited bandwidth of the wireless access link and the large query size.

MODULES:

  1. Application Server Module
  2. Leasing Model
  3. Relaying Strategies
  4. Multi-hop Transmission

MODULES DESCRIPTION:

Application Server Module

In this module, the application server is built. The server has options to upload new data or files; the server admin or an authorized person uploads the data to the server. Once the data has been uploaded, the list can be viewed by mobile users. This module uses web services to integrate the application server with the mobile client. The application server module is built with PHP and MySQL.

Leasing Model: 

Utilizing the processing power of intermediary nodes is the main idea behind network-assisted mobile computing. Leasing processing power from mid-network nodes can be extremely beneficial for reducing latency and extending the battery life of a mobile device. However, it comes at a cost. These costs can capture the fee required to lease CPU power from the mid-network nodes. Additionally, they may capture the potential security risk of giving these nodes access to client data; some operations, such as transcoding, can be done on encrypted data, while others would require decrypting the data. As a running example, the mobile station sends a sentence (e.g., “how are you”) and the application server converts the sentence into audio.

Relaying Strategies:

  • Amplify-and-forward
  • Decode-and-forward

In amplify-and-forward, the relay nodes simply boost the energy of the signal received from the sender and retransmit it to the receiver. In decode-and-forward, the relay nodes perform physical-layer decoding and then forward the decoding result to the destinations. If multiple nodes are available for cooperation, their antennas can employ a space-time code in transmitting the relay signals. It has been shown that cooperation at the physical layer can achieve full levels of diversity, similar to a multi-antenna system, and hence can reduce interference and increase the connectivity of wireless networks.

 Multi-hop Transmission:

Multi-hop transmission can be illustrated using two-hop transmission. When two-hop transmission is used, two time slots are consumed. In the first slot, messages are transmitted from the mobile station to the relay, and the messages will be forwarded to the Application Server in the second slot. The outage capacity of this two-hop transmission can be derived considering the outage of each hop transmission.
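
To illustrate the kind of optimization the paper's dynamic programming framework performs, the toy Java sketch below decides how many query "units" each node (MS, relays, AS) should process, given per-unit processing costs at each node and per-unit costs for forwarding unprocessed data over each hop; all cost numbers are invented for illustration.

/**
 * Toy dynamic program in the spirit of the framework: choose how many
 * query "units" each node processes so that total processing plus
 * transmission cost is minimized. Linear per-unit costs are invented.
 */
public class UplinkProcessingDp {
    public static void main(String[] args) {
        int units = 10;                               // size of the unprocessed query
        double[] proc  = {5.0, 2.0, 2.5, 1.0};        // per-unit processing cost: MS, relay1, relay2, AS
        double[] trans = {1.5, 1.0, 0.5};             // per-unit cost of forwarding unprocessed data on each hop
        int n = proc.length;

        double[][] dp = new double[n][units + 1];     // dp[i][u]: best cost from node i with u units left
        int[][] choice = new int[n][units + 1];
        for (int u = 0; u <= units; u++) {
            dp[n - 1][u] = u * proc[n - 1];           // the AS must finish whatever remains
            choice[n - 1][u] = u;
        }
        for (int i = n - 2; i >= 0; i--) {
            for (int u = 0; u <= units; u++) {
                dp[i][u] = Double.MAX_VALUE;
                for (int x = 0; x <= u; x++) {        // process x units here, forward the rest
                    double c = x * proc[i] + (u - x) * trans[i] + dp[i + 1][u - x];
                    if (c < dp[i][u]) { dp[i][u] = c; choice[i][u] = x; }
                }
            }
        }
        int left = units;
        for (int i = 0; i < n; i++) {                 // recover the optimal policy
            System.out.printf("node %d processes %d units%n", i, choice[i][left]);
            left -= choice[i][left];
        }
        System.out.println("total cost = " + dp[0][units]);
    }
}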

HARDWARE REQUIREMENT

CPU type : Intel Pentium 4

Clock speed : 3.0 GHz

RAM size : 512 MB

Hard disk capacity : 40 GB

Monitor type : 15-inch color monitor

Keyboard type : Internet keyboard

Mobile : Android mobile

SOFTWARE REQUIREMENT

Operating System : Android

Language : Android SDK 2.3

Documentation : MS Office

REFERENCE:

Carri W. Chan, Nicholas Bambos, and Jatinder Singh, “Network Assisted Mobile Computing with Optimal Uplink Query Processing”, IEEE TRANSACTIONS ON MOBILE COMPUTING, 2012.

Ensuring Distributed Accountability for Data Sharing in the Cloud

ABSTRACT:

Cloud computing enables highly scalable services to be easily consumed over the Internet on an as-needed basis. A major feature of the cloud services is that users’ data are usually processed remotely in unknown machines that users do not own or operate. While enjoying the convenience brought by this new emerging technology, users’ fears of losing control of their own data (particularly, financial and health data) can become a significant barrier to the wide adoption of cloud services. To address this problem, in this paper, we propose a novel highly decentralized information accountability framework to keep track of the actual usage of the users’ data in the cloud. In particular, we propose an object-centered approach that enables enclosing our logging mechanism together with users’ data and policies. We leverage the JAR programmable capabilities to both create a dynamic and traveling object, and to ensure that any access to users’ data will trigger authentication and automated logging local to the JARs. To strengthen user’s control, we also provide distributed auditing mechanisms. We provide extensive experimental studies that demonstrate the efficiency and effectiveness of the proposed approaches.

EXISTING SYSTEM:

To allay users’ concerns, it is essential to provide an effective mechanism for users to monitor the usage of their data in the cloud. For example, users need to be able to ensure that their data are handled according to the service level agreements made at the time they sign on for services in the cloud. Conventional access control approaches developed for closed domains such as databases and operating systems, or approaches using a centralized server in distributed environments, are not suitable, due to the following features characterizing cloud environments.

PROBLEMS ON EXISTING SYSTEM:

First, data handling can be outsourced by the direct cloud service provider (CSP) to other entities in the cloud, and these entities can also delegate the tasks to others, and so on.

Second, entities are allowed to join and leave the cloud in a flexible manner. As a result, data handling in the cloud goes through a complex and dynamic hierarchical service chain which does not exist in conventional environments.

PROPOSED SYSTEM:

We propose a novel approach, namely the Cloud Information Accountability (CIA) framework, based on the notion of information accountability. Unlike privacy protection technologies which are built on the hide-it-or-lose-it perspective, information accountability focuses on keeping the data usage transparent and trackable. Our proposed CIA framework provides end-to-end accountability in a highly distributed fashion. One of the main innovative features of the CIA framework lies in its ability to maintain lightweight and powerful accountability that combines aspects of access control, usage control, and authentication. By means of the CIA, data owners can track not only whether or not the service-level agreements are being honored, but also enforce access and usage control rules as needed. Associated with the accountability feature, we also develop two distinct modes for auditing: push mode and pull mode. The push mode refers to logs being periodically sent to the data owner or stakeholder, while the pull mode refers to an alternative approach whereby the user (or another authorized party) can retrieve the logs as needed.

Our main contributions are as follows:

  • We propose a novel automatic and enforceable logging mechanism in the cloud.
  • Our proposed architecture is platform independent and highly decentralized, in that it does not require any dedicated authentication or storage system in place.
  • We go beyond traditional access control in that we provide a certain degree of usage control for the protected data after these are delivered to the receiver.
  • We conduct experiments on a real cloud testbed. The results demonstrate the efficiency, scalability, and granularity of our approach. We also provide a detailed security analysis and discuss the reliability and strength of our architecture.

IMPLEMENTATION:

Implementation is the stage of the project when the theoretical design is turned out into a working system. Thus it can be considered to be the most critical stage in achieving a successful new system and in giving the user, confidence that the new system will work and be effective.

      The implementation stage involves careful planning, investigation of the existing system and it’s constraints on implementation, designing of methods to achieve changeover and evaluation of changeover methods.

 MAIN MODULES:-

  1. DATA OWNER MODULE
  2. JAR CREATION MODULE
  3. CLOUD SERVICE PROVIDER MODULE
  4. Disassembling Attack
  5. Man-in-the-Middle Attack

MODULES DESCRIPTION:-

  1. DATA OWNER MODULE

In this module, the data owner uploads their data to the cloud server. New users can register with the service provider, create an account, and then securely upload and store their files. For security, the data owner encrypts the data file before storing it in the cloud. The data owner is capable of manipulating the encrypted data file and can set the access privileges for it.

  2. JAR CREATION MODULE

In this module, we create a JAR file for every uploaded file; the user must have the same JAR file to download the file, which is how the data is secured. The logging should be decentralized in order to adapt to the dynamic nature of the cloud. More specifically, log files should be tightly bound to the corresponding data being controlled, and should require minimal infrastructural support from any server. Every access to the user’s data should be correctly and automatically logged. This requires integrated techniques to authenticate the entity who accesses the data, and to verify and record the actual operations on the data as well as the time the data was accessed. Log files should be reliable and tamper-proof to avoid illegal insertion, deletion, and modification by malicious parties. Recovery mechanisms are also desirable to restore log files damaged by technical problems. The proposed technique should not intrusively monitor data recipients’ systems, nor should it introduce heavy communication and computation overhead, which would otherwise hinder its feasibility and adoption in practice.
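
A small Java sketch of the tamper-evidence requirement above: each log record is hash-chained to everything before it, so any insertion, deletion, or modification breaks verification. The paper's scheme additionally encrypts records under IBE and signs them; those steps are omitted here to keep the chaining idea visible.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.HexFormat;
import java.util.List;

/**
 * Hash-chained log sketch: deleting or editing any record invalidates
 * the chain. Encryption and signing are intentionally omitted.
 */
public class ChainedLog {
    private final List<String> records = new ArrayList<>();
    private final List<String> chain = new ArrayList<>();

    static String sha256(String s) throws Exception {
        byte[] d = MessageDigest.getInstance("SHA-256").digest(s.getBytes(StandardCharsets.UTF_8));
        return HexFormat.of().formatHex(d);
    }

    void append(String record) throws Exception {
        String prev = chain.isEmpty() ? "GENESIS" : chain.get(chain.size() - 1);
        records.add(record);
        chain.add(sha256(prev + "|" + record));       // link this record to everything before it
    }

    boolean verify() throws Exception {
        String prev = "GENESIS";
        for (int i = 0; i < records.size(); i++) {
            String expected = sha256(prev + "|" + records.get(i));
            if (!expected.equals(chain.get(i))) return false;
            prev = chain.get(i);
        }
        return true;
    }

    public static void main(String[] args) throws Exception {
        ChainedLog log = new ChainedLog();
        log.append("2024-05-01T10:00Z alice VIEW record42");
        log.append("2024-05-01T10:05Z bob   COPY record42");
        System.out.println(log.verify());                          // true
        log.records.set(1, "2024-05-01T10:05Z bob VIEW record42"); // tamper with a record
        System.out.println(log.verify());                          // false
    }
}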

  3. CLOUD SERVICE PROVIDER MODULE

The cloud service provider manages a cloud to provide data storage service. Data owners encrypt their data files and store them in the cloud, with a JAR file created for each file, for sharing with data consumers. To access the shared data files, data consumers download the encrypted data files of interest from the cloud and then decrypt them.

  4. DISASSEMBLING ATTACK

In this module, we show how our system is secured by evaluating two possible attacks that disassemble the JAR file of the logger and then attempt to extract useful information from it or spoil the log records in it. Given the ease of disassembling JAR files, this attack poses one of the most serious threats to our architecture. Since we cannot prevent an attacker from gaining possession of the JARs, we rely on the strength of the cryptographic schemes applied to preserve the integrity and confidentiality of the logs. Once the JAR files are disassembled, the attacker is in possession of the public IBE key used for encrypting the log files, the encrypted log file itself, and the *.class files. Therefore, the attacker has to rely on learning the private key or subverting the encryption to read the log records. To compromise the confidentiality of the log files, the attacker may try to identify which encrypted log records correspond to his actions by mounting a chosen-plaintext attack to obtain some pairs of encrypted log records and plaintexts. However, the adoption of the Weil pairing algorithm ensures that the CIA framework has both chosen-ciphertext security and chosen-plaintext security in the random oracle model. Therefore, the attacker will not be able to decrypt any data or log files in the disassembled JAR file. Even if the attacker is an authorized user, he can only access the actual content file; he is not able to decrypt any other data, including the log files, which are viewable only to the data owner.

From the disassembled JAR files, the attackers are not able to directly view the access control policies either, since the original source code is not included in the JAR files. If the attacker wants to infer access control policies, the only possible way is through analyzing the log file. This is, however, very hard to accomplish since, as mentioned earlier, log records are encrypted and breaking the encryption is computationally hard. Also, the attacker cannot modify the log files extracted from a disassembled JAR. Should the attacker erase or tamper with a record, the integrity checks added to each record of the log will not match at verification time, revealing the error. Similarly, attackers will not be able to write fake records to log files without being detected, since they would need to sign with a valid key and the chain of hashes would not match.

  5. Man-in-the-Middle Attack

In this module, an attacker may intercept messages during the authentication of a service provider with the certificate authority, and replay the messages in order to masquerade as a legitimate service provider. There are two points in time at which the attacker can replay the messages. One is after the actual service provider has completely disconnected and ended a session with the certificate authority. The other is when the actual service provider is disconnected but the session is not over, so the attacker may try to renegotiate the connection. The first type of attack will not succeed, since the certificate typically has a time stamp which will have become obsolete at the time of reuse. The second type of attack will also fail, since renegotiation is banned in the latest version of OpenSSL and cryptographic checks have been added.

HARDWARE REQUIREMENT

CPU type : Intel Pentium 4

Clock speed : 3.0 GHz

RAM size : 512 MB

Hard disk capacity : 40 GB

Monitor type : 15-inch color monitor

Keyboard type : Internet keyboard

Mobile : Android mobile

SOFTWARE REQUIREMENT

Operating System : Android

Language : Android SDK 2.3

Documentation : MS Office

REFERENCE:

Smitha Sundareswaran, Anna C. Squicciarini and Dan Lin, “Ensuring Distributed Accountability for Data Sharing in the Cloud”, IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, VOL. 9, NO.4, JULY/AUGUST, 2012.

Efficient audit service outsourcing for data integrity in clouds

ABSTRACT:

Cloud-based outsourced storage relieves the client’s burden of storage management and maintenance by providing a comparably low-cost, scalable, location-independent platform. However, the fact that clients no longer have physical possession of their data means they face a potentially formidable risk of missing or corrupted data. To avoid these security risks, audit services are critical to ensure the integrity and availability of outsourced data and to achieve digital forensics and credibility in cloud computing. Provable data possession (PDP), a cryptographic technique for verifying the integrity of data without retrieving it from an untrusted server, can be used to realize audit services. In this paper, profiting from the interactive zero-knowledge proof system, we address the construction of an interactive PDP protocol to prevent fraudulence by the prover (soundness property) and leakage of the verified data (zero-knowledge property). We prove that our construction holds these properties based on the computational Diffie–Hellman assumption and a rewindable black-box knowledge extractor. We also propose an efficient mechanism based on probabilistic queries and periodic verification to reduce the audit cost per verification and to detect anomalies in a timely manner. In addition, we present an efficient method for selecting an optimal parameter value to minimize the computational overhead of cloud audit services. Our experimental results demonstrate the effectiveness of our approach.
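
A back-of-the-envelope sketch of why probabilistic queries keep audit costs low: if a fraction rho of blocks is corrupted and the auditor challenges c blocks sampled at random, the detection probability is roughly 1 - (1 - rho)^c, essentially independent of the total file size. The Java snippet below computes the challenge size needed for a target detection probability; the numbers are illustrative only.

/**
 * Sampling arithmetic for probabilistic audits: detection probability
 * is about 1 - (1 - rho)^c for corruption fraction rho and challenge
 * size c, so a few hundred sampled blocks suffice regardless of file size.
 */
public class AuditSampling {
    static int blocksNeeded(double rho, double targetDetection) {
        return (int) Math.ceil(Math.log(1 - targetDetection) / Math.log(1 - rho));
    }

    public static void main(String[] args) {
        System.out.println(blocksNeeded(0.01, 0.95));  // ~299 blocks
        System.out.println(blocksNeeded(0.01, 0.99));  // ~459 blocks
        // Detection probability for a fixed challenge size:
        double p = 1 - Math.pow(1 - 0.01, 300);
        System.out.printf("c=300, rho=1%%: detect with p=%.3f%n", p);
    }
}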

EXISTING SYSTEM 

There exist various tools and technologies for multi-clouds, such as Platform VM Orchestrator, VMware vSphere, and oVirt. These tools help cloud providers construct a distributed cloud storage platform for managing clients’ data. However, if such an important platform is vulnerable to security attacks, it can bring irretrievable losses to the clients. For example, the confidential data in an enterprise may be illegally accessed through a remote interface provided by a multi-cloud, or relevant data and archives may be lost or tampered with when they are stored in an uncertain storage pool outside the enterprise. Therefore, it is indispensable for cloud service providers to provide security techniques for managing their storage services.

PROPOSED SYSTEM

To check the availability and integrity of outsourced data in cloud storage, researchers have proposed two basic approaches, called Provable Data Possession (PDP) and Proofs of Retrievability (PoR). Ateniese et al. first proposed the PDP model for ensuring possession of files on untrusted storage and provided an RSA-based scheme for the static case that achieves low communication cost. They also proposed a publicly verifiable version, which allows anyone, not just the owner, to challenge the server for data possession. Subsequent work proposed a lightweight PDP scheme based on a cryptographic hash function and symmetric-key encryption, but there the servers can deceive the owners by using previous metadata or responses, due to the lack of randomness in the challenges; the numbers of updates and challenges are limited and fixed in advance, and users cannot perform block insertions anywhere.

 MODULES:

  • Multi cloud storage
  • Cooperative PDP
  • Data Integrity
  • Third Party Auditor
  • Cloud User

MODULE DESCRIPTION:

Multi cloud storage

Distributed computing refers to any large collaboration in which many individual personal computer owners allow some of their computer’s processing time to be put at the service of a large problem. In our system, each cloud holds data blocks, and the cloud user uploads data into the multi-cloud. Since the cloud computing environment is constructed on open architectures and interfaces, it has the capability to incorporate multiple internal and/or external cloud services together to provide high interoperability. We call such a distributed cloud environment a multi-cloud. A multi-cloud allows clients to easily access their resources remotely through interfaces.

 Cooperative PDP

Cooperative PDP (CPDP) schemes adopt a zero-knowledge property and a three-layered index hierarchy, respectively. In particular, an efficient method is used for selecting the optimal number of sectors in each block to minimize the computation costs of clients and storage service providers. The CPDP scheme avoids compromising data privacy by relying on modern cryptographic techniques.

 Data Integrity

Data integrity is very important in database operations in particular, and in data warehousing and business intelligence in general, because data integrity ensures that data is of high quality, correct, consistent, and accessible.

Third Party Auditor

A Trusted Third Party (TTP) is trusted to store verification parameters and offer public query services for these parameters. In our system, the trusted third party views the user data blocks uploaded to the distributed cloud. In the distributed cloud environment, each cloud holds user data blocks. If any modification is attempted by a cloud owner, an alert is sent to the trusted third party.

 Cloud User

The cloud user has a large amount of data to be stored in multiple clouds and has permission to access and manipulate the stored data. The user’s data is converted into data blocks, which are uploaded to the cloud. The TPA views the data blocks uploaded to the multi-cloud. The user can update the uploaded data. If the user wants to download their files, the data blocks in the multi-cloud are integrated and downloaded.

HARDWARE REQUIREMENT

CPU type : Intel Pentium 4

Clock speed : 3.0 GHz

RAM size : 512 MB

Hard disk capacity : 40 GB

Monitor type : 15-inch color monitor

Keyboard type : Internet keyboard

Mobile : Android mobile

SOFTWARE REQUIREMENT

Operating System : Android

Language : Android SDK 2.3

Documentation : MS Office

REFERENCE:

Yan Zhu, Hongxin Hu, Gail-Joon Ahn and Stephen S. Yau, “Efficient audit service outsourcing for data integrity in clouds”, Elsevier, 2012.

Defenses Against Large Scale Online Password Guessing Attacks By Using Persuasive Click Points

ABSTRACT:

Usable security has unique usability challenges because the need for security often means that standard human-computer interaction approaches cannot be directly applied. An important usability goal for authentication systems is to support users in selecting better passwords. Users often create memorable passwords that are easy for attackers to guess, but strong system-assigned passwords are difficult for users to remember. Researchers have therefore turned to alternative methods in which graphical pictures are used as passwords. Graphical passwords essentially use images, or representations of images, as passwords; the human brain is better at remembering pictures than textual characters. There are various graphical password schemes and graphical password products in the market, but very little research has been done to analyze graphical passwords, which are still immature. Therefore, this project merges persuasive cued click-points with a password guessing resistant protocol. The major goal of this work is to reduce guessing attacks and to encourage users to select more random passwords that are difficult to guess. Well-known security threats like brute-force attacks and dictionary attacks can be successfully defeated using this method.

EXISTING SYSTEM: 

Users often create memorable passwords that are easy for attackers to guess, but strong system-assigned passwords are difficult for users to remember. Despite the vulnerabilities, it is the natural tendency of users to prefer short passwords for ease of remembrance, coupled with a lack of awareness of how attackers operate. Unfortunately, these passwords are broken mercilessly by intruders through several simple means such as masquerading and eavesdropping, and through cruder means such as dictionary attacks, shoulder-surfing attacks, and social engineering attacks.

Disadvantage:

  1. The strong system-assigned passwords are difficult for users to remember.

 PROPOSED SYSTEM:

Our proposal is to reduce guessing attacks while encouraging users to select more random passwords that are difficult to guess. The proposed system merges persuasive cued click-points with a password guessing resistant protocol.

ADVANTAGE:

  1. The human brain is better at remembering pictures than textual characters.

MODULES:

  • Configuration module
  • Login
  • Persuasive Cued Click-Point

MODULE DESCRIPTION:

Configuration

First of all, the user configures their password pattern: the user is allowed to touch any point in an image, and our system saves the image and the cued click-point. This process is repeated for five images. The images are shown to the user in random order at configuration time, so the user does not know the images before configuring the pattern. The user can then save the pattern he/she defined on the configuration screen.

Login

At login time, the first image in the user-defined pattern is retrieved and shown to the person trying to access the system, who is then allowed to touch a point in that image. If the click-point matches the cued click-point already saved for that image, the system proceeds to the next image; otherwise, a random image is shown. The legitimate user knows the correct image sequence, so he realizes he touched a wrong point in the previous image and can still manage to log in; an intruder may keep trying but cannot succeed.

Persuasive Cued Click-Point

To address the issue of hotspots, a password consists of five click-points, one on each of five images. During password creation, most of the image is dimmed except for a small view port area that is randomly positioned on the image. Users must select a click-point within the view port. If they are unable or unwilling to select a point in the current view port, they may press the Shuffle button to randomly reposition the view port. The view port guides users to select more random passwords that are less likely to include hotspots. A user who is determined to reach a certain click-point may still shuffle until the view port moves to that specific location, but this is a time-consuming and tedious process.
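
A hedged Java sketch of the view-port mechanic described above: creation clicks must land inside a randomly placed view port, and login clicks are accepted within a small tolerance square of the stored click-point. The image size, view-port size, and tolerance are typical values from the PCCP literature and are assumptions here.

import java.util.Random;

/**
 * View-port sketch: random view-port placement for password creation
 * and tolerance-based click acceptance for login. Sizes are assumptions.
 */
public class ViewportSketch {
    static final int IMG_W = 451, IMG_H = 331;   // image size (assumed)
    static final int VIEW = 75;                  // view-port side length (assumed)
    static final int TOL = 9;                    // acceptance tolerance around a click-point

    record Rect(int x, int y, int w, int h) {
        boolean contains(int px, int py) {
            return px >= x && px < x + w && py >= y && py < y + h;
        }
    }

    static Rect randomViewport(Random rnd) {     // the Shuffle button re-invokes this
        return new Rect(rnd.nextInt(IMG_W - VIEW), rnd.nextInt(IMG_H - VIEW), VIEW, VIEW);
    }

    static boolean loginClickOk(int storedX, int storedY, int clickX, int clickY) {
        return Math.abs(clickX - storedX) <= TOL && Math.abs(clickY - storedY) <= TOL;
    }

    public static void main(String[] args) {
        Rect vp = randomViewport(new Random());
        System.out.println("view port: " + vp);
        System.out.println(vp.contains(vp.x() + 10, vp.y() + 10)); // creation click inside view port
        System.out.println(loginClickOk(120, 80, 126, 75));        // within tolerance -> accepted
    }
}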

HARDWARE REQUIREMENT

CPU type : Intel Pentium 4

Clock speed : 3.0 GHz

RAM size : 512 MB

Hard disk capacity : 40 GB

Monitor type : 15-inch color monitor

Keyboard type : Internet keyboard

Mobile : Android mobile

SOFTWARE REQUIREMENT

Operating System : Android

Language : Android SDK 2.3

Documentation : MS Office

REFERENCE:

Chippy and R. Nagendran, “Defenses against large scale online password guessing attacks by using persuasive click points”, International Journal of Communications and Engineering, Volume 03, No. 3, Issue 01, March 2012.

Ranking Model Adaptation for Domain-Specific Search

ABSTRACT:

With the explosive emergence of vertical search domains, applying the broad-based ranking model directly to different domains is no longer desirable due to domain differences, while building a unique ranking model for each domain is both laborious for labeling data and time consuming for training models. In this paper, we address these difficulties by proposing a regularization-based algorithm called ranking adaptation SVM (RA-SVM), through which we can adapt an existing ranking model to a new domain, so that the amount of labeled data and the training cost is reduced while the performance is still guaranteed. Our algorithm only requires the prediction from the existing ranking models, rather than their internal representations or the data from auxiliary domains. In addition, we assume that documents similar in the domain-specific feature space should have consistent rankings, and add some constraints to control the margin and slack variables of RA-SVM adaptively. Finally, ranking adaptability measurement is proposed to quantitatively estimate if an existing ranking model can be adapted to a new domain. Experiments performed over Letor and two large scale data sets crawled from a commercial search engine demonstrate the applicabilities of the proposed ranking adaptation algorithms and the ranking adaptability measurement.

EXISTING SYSTEM

The existing broad-based ranking model provides a lot of common information for ranking documents, so only a few training samples need to be labeled in the new domain. From the probabilistic perspective, the broad-based ranking model provides prior knowledge, so that only a small number of labeled samples are sufficient for the target-domain ranking model to achieve the same confidence. Hence, to reduce the cost of new verticals, how to adapt the auxiliary ranking models to the new target domain and make full use of their domain-specific features turns into a pivotal problem for building effective domain-specific ranking models.

 PROPOSED SYSTEM

The proposed system focuses on whether we can adapt ranking models learned for the existing broad-based search, or for some verticals, to a new domain, so that the amount of labeled data in the target domain is reduced while the performance requirement is still guaranteed; on how to adapt the ranking model effectively and efficiently; and on how to utilize domain-specific features to further boost the model adaptation. The first problem is solved by the proposed ranking adaptability measure, which quantitatively estimates whether an existing ranking model can be adapted to the new domain and predicts the potential performance of the adaptation. We address the second problem within the regularization framework, and a ranking adaptation SVM algorithm is proposed. Our algorithm is a black-box ranking-model adaptation, which needs only the predictions of the existing ranking model, rather than the internal representation of the model itself or the data from the auxiliary domains. With the black-box adaptation property, we achieve not only flexibility but also efficiency. To resolve the third problem, we assume that documents similar in their domain-specific feature space should have consistent rankings.

ADVANTAGES OF PROPOSED SYSTEM:

  1. Model adaptation.
  2. Reducing the labeling cost.
  3. Reducing the computational cost.

MODULES:

  1. Ranking Adaptation
  2. Explore Ranking adaptability
  3. Ranking adaptation with domain specific search Module.
  4. Ranking Support Vector Machine Module.

 MODULE DESCRIPTION:

1. Ranking Adaptation Module:

Ranking adaptation is closely related to classifier adaptation, which has shown its effectiveness for many learning problems, but ranking adaptation is comparatively more challenging. Unlike classifier adaptation, which mainly deals with binary targets, ranking adaptation must adapt a model that predicts rankings for a collection of documents, and the relevance levels of different domains are sometimes different and need to be aligned. We can adapt ranking models learned for the existing broad-based search, or for some verticals, to a new domain, so that the amount of labeled data in the target domain is reduced while the performance requirement is still guaranteed; the questions are how to adapt the ranking model effectively and efficiently, and how to utilize domain-specific features to further boost the model adaptation.
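
One plausible way to write such a regularized adaptation objective in LaTeX (our notation; the paper's exact formulation may differ in its details) is

\min_{f,\;\xi \ge 0} \quad \frac{1-\delta}{2}\,\lVert f \rVert^{2} + \frac{\delta}{2}\,\lVert f - f^{a} \rVert^{2} + C \sum_{(i,j)} \xi_{ij}
\qquad \text{s.t.} \qquad f(\mathbf{x}_{i}) - f(\mathbf{x}_{j}) \;\ge\; 1 - \xi_{ij} \quad \text{for each labeled pair } \mathbf{x}_{i} \succ \mathbf{x}_{j},

where \delta \in [0, 1] trades off fitting the target-domain pairs against staying close to the auxiliary model f^{a}. Note that expanding \lVert f - f^{a} \rVert^{2} only requires the predictions of f^{a} at the training points, which is consistent with the black-box property described above.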

2. Explore Ranking Adaptability Module:

Ranking adaptability is measured by investigating the correlation between two ranking lists of a labeled query in the target domain, i.e., the list predicted by the auxiliary model f_a and the ground-truth list labeled by human judges. Intuitively, if the two ranking lists have a high positive correlation, the auxiliary ranking model f_a coincides with the distribution of the corresponding labeled data, and therefore we can believe that it possesses high ranking adaptability towards the target domain, and vice versa. This is because the labeled queries are randomly sampled from the target domain for the model adaptation, and so they reflect the distribution of the data in the target domain.
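
The correlation itself can be any rank-correlation statistic; as an illustration, here is a Kendall's-tau computation in Java over one query's documents (the scores and labels are invented):

/**
 * Rank correlation between the auxiliary model's predicted order and
 * the human-labeled order for one query. High positive correlation
 * suggests the auxiliary model adapts well to the target domain.
 */
public class RankCorrelation {
    static double kendallTau(double[] predicted, double[] labeled) {
        int n = predicted.length, concordant = 0, discordant = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++) {
                double a = Math.signum(predicted[i] - predicted[j]);
                double b = Math.signum(labeled[i] - labeled[j]);
                if (a * b > 0) concordant++;
                else if (a * b < 0) discordant++;   // ties contribute to neither
            }
        return (concordant - discordant) / (0.5 * n * (n - 1));
    }

    public static void main(String[] args) {
        double[] fa    = {2.1, 1.4, 3.3, 0.2};      // auxiliary model scores for 4 documents
        double[] truth = {2.0, 1.0, 3.0, 0.0};      // human relevance labels
        System.out.println(kendallTau(fa, truth));  // 1.0: perfectly adaptable on this query
    }
}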

3. Ranking Adaptation with Domain-Specific Search Module:

Data from different domains are also characterized by some domain-specific features; e.g., when we adapt the ranking model learned from the Web-page search domain to the image search domain, the image content can provide additional information to facilitate the adaptation of the text-based ranking model. In this module, we discuss how to utilize these domain-specific features, which are usually difficult to translate directly into textual representations, to further boost the performance of the proposed RA-SVM. The basic idea of our method is to assume that documents with similar domain-specific features should be assigned similar ranking predictions. We name this the consistency assumption, which implies that a robust textual ranking function should perform relevance prediction that is consistent with the domain-specific features.

4. Ranking Support Vector Machines Module:

Ranking Support Vector Machines (Ranking SVM) is one of the most effective learning-to-rank algorithms and is employed here as the basis of our proposed algorithm. The proposed RA-SVM does not need the labeled training samples from the auxiliary domain, only its ranking model. Such a method is more advantageous than data-based adaptation, because the training data from the auxiliary domain may be missing or unavailable due to copyright protection or privacy issues, while the ranking model is comparatively easier to obtain and access.

HARDWARE REQUIREMENT

CPU type : Intel Pentium 4

Clock speed : 3.0 GHz

RAM size : 512 MB

Hard disk capacity : 40 GB

Monitor type : 15-inch color monitor

Keyboard type : Internet keyboard

Mobile : Android mobile

SOFTWARE REQUIREMENT

Operating System : Android

Language : Android SDK 2.3

Documentation : MS Office

REFERENCE:

Bo Geng, Linjun Yang, Chao Xu and Xian-Sheng Hua, “Ranking Model Adaptation for Domain-Specific Search”, IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL.24, NO.4, APRIL 2012.

Query Planning for Continuous Aggregation Queries over a Network of Data Aggregators

ABSTRACT:

Continuous queries are used to monitor changes to time-varying data and to provide results useful for online decision making. Typically, a user desires to obtain the value of some aggregation function over distributed data items, for example, to know the value of a portfolio for a client, or the AVG of temperatures sensed by a set of sensors. In these queries, a client specifies a coherency requirement as part of the query. We present a low-cost, scalable technique to answer continuous aggregation queries using a network of aggregators of dynamic data items. In such a network of data aggregators, each data aggregator serves a set of data items at specific coherencies. Just as various fragments of a dynamic webpage are served by one or more nodes of a content distribution network, our technique involves decomposing a client query into sub-queries and executing the sub-queries on judiciously chosen data aggregators with their individual sub-query incoherency bounds. We provide a technique for obtaining the optimal set of sub-queries, with their incoherency bounds, that satisfies the client query’s coherency requirement with the least number of refresh messages sent from aggregators to the client. To estimate the number of refresh messages, we build a query cost model which can be used to estimate the number of messages required to satisfy the client-specified incoherency bound. Performance results using real-world traces show that our cost-based query planning leads to queries being executed using less than one third the number of messages required by existing schemes.
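
To make the sub-query planning idea concrete, the toy Java sketch below splits a client's incoherency bound for a SUM-style aggregate additively across aggregators, in proportion to each sub-query's observed data dynamics (more volatile data gets a larger share). The proportional rule is a simplification for illustration, not the paper's exact cost-based allocation.

import java.util.Arrays;

/**
 * Toy incoherency-bound allocation: for SUM-style aggregates, sub-query
 * bounds can be split additively; here the split is proportional to a
 * per-aggregator volatility estimate. All numbers are invented.
 */
public class BoundAllocation {
    static double[] allocate(double clientBound, double[] dynamics) {
        double total = Arrays.stream(dynamics).sum();
        return Arrays.stream(dynamics).map(d -> clientBound * d / total).toArray();
    }

    public static void main(String[] args) {
        double clientBound = 10.0;                 // e.g., portfolio value within +/- 10 units
        double[] dynamics  = {5.0, 1.0, 2.0};      // volatility estimate per aggregator
        double[] bounds = allocate(clientBound, dynamics);
        System.out.println(Arrays.toString(bounds));                       // [6.25, 1.25, 2.5]
        System.out.println(Arrays.stream(bounds).sum() <= clientBound + 1e-9); // additive guarantee holds
    }
}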

HARDWARE REQUIREMENT

CPU type : Intel Pentium 4

Clock speed : 3.0 GHz

RAM size : 512 MB

Hard disk capacity : 40 GB

Monitor type : 15-inch color monitor

Keyboard type : Internet keyboard

Mobile : Android mobile

SOFTWARE REQUIREMENT

Operating System : Android

Language : Android SDK 2.3

Documentation : MS Office

REFERENCE:

Rajeev Gupta and Krithi Ramamritham, “Query Planning for Continuous Aggregation Queries over a Network of Data Aggregators”, IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL. 24, NO.6, JUNE 2012.

PMSE: A Personalized Mobile Search Engine

ABSTRACT:

We propose a personalized mobile search engine, PMSE, that captures users’ preferences in the form of concepts by mining their clickthrough data. Due to the importance of location information in mobile search, PMSE classifies these concepts into content concepts and location concepts. In addition, users’ locations (positioned by GPS) are used to supplement the location concepts in PMSE. The user preferences are organized in an ontology-based, multi-facet user profile, which is used to adapt a personalized ranking function for rank adaptation of future search results. To characterize the diversity of the concepts associated with a query and their relevance to the user’s need, four entropies are introduced to balance the weights between the content and location facets. Based on the client-server model, we also present a detailed architecture and design for the implementation of PMSE. In our design, the client collects and stores the clickthrough data locally to protect privacy, whereas heavy tasks such as concept extraction, training, and re-ranking are performed at the PMSE server. Moreover, we address the privacy issue by restricting the information in the user profile exposed to the PMSE server with two privacy parameters. We prototype PMSE on the Google Android platform. Experimental results show that PMSE significantly improves precision compared to the baseline.
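
As a sketch of the entropy idea, the Java snippet below computes the Shannon entropy of a concept distribution mined from clickthroughs; a query whose location concepts are highly concentrated (low entropy) would argue for more weight on the location facet. All concept names and counts are invented.

import java.util.Map;

/**
 * Shannon entropy of a concept distribution: low entropy means the
 * clicks concentrate on few concepts, so that facet is informative.
 */
public class ConceptEntropy {
    static double entropy(Map<String, Integer> conceptClicks) {
        double total = conceptClicks.values().stream().mapToInt(Integer::intValue).sum();
        return conceptClicks.values().stream()
                .mapToDouble(c -> c / total)
                .map(p -> -p * (Math.log(p) / Math.log(2)))
                .sum();
    }

    public static void main(String[] args) {
        Map<String, Integer> location = Map.of("tokyo", 9, "shibuya", 1);            // concentrated
        Map<String, Integer> content  = Map.of("hotel", 4, "ramen", 3, "museum", 3); // diverse
        System.out.printf("location entropy = %.3f bits%n", entropy(location)); // ~0.469
        System.out.printf("content entropy  = %.3f bits%n", entropy(content));  // ~1.571
    }
}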

EXISTING SYSTEM:

A major problem in mobile search is that the interactions between the users and search engines are limited by the small form factors of the mobile devices. As a result, mobile users tend to submit shorter, hence, more ambiguous queries compared to their web search counterparts. In order to return highly relevant results to the users, mobile search engines must be able to profile the users’ interests and personalize the search results according to the users’ profiles.

PROPOSED SYSTEM:

In this paper, we propose a realistic design for PMSE by adopting the metasearch approach, which relies on one of the commercial search engines, such as Google, Yahoo, or Bing, to perform the actual search. The client is responsible for receiving the user’s requests, submitting the requests to the PMSE server, displaying the returned results, and collecting his/her clickthroughs in order to derive his/her personal preferences. The PMSE server, on the other hand, is responsible for handling heavy tasks such as forwarding the requests to a commercial search engine, as well as training and re-ranking of search results before they are returned to the client. The user profiles for specific users are stored on the PMSE clients, thus preserving privacy for the users. PMSE has been prototyped with PMSE clients on the Google Android platform and the PMSE server on a PC server to validate the proposed ideas.

MODULES:

  • Mobile Client
  • PMSE Server
  • Re-Rank Search Results
  • Ontology update and Clickthrough collection

MODULE DESCRIPTION:

Mobile Client:

          In the PMSE’s client-server architecture, PMSE clients are responsible for storing the user clickthroughs and the ontologies derived from the PMSE server. Simple tasks, such as updating clickthroughs and ontologies, creating feature vectors, and displaying re-ranked search results are handled by the PMSE clients with limited computational power. Moreover, in order to minimize the data transmission between client and server, the PMSE client would only need to submit a query together with the feature vectors to the PMSE server, and the server would automatically return a set of re-ranked search results according to the preferences stated in the feature vectors. The data transmission cost is minimized, because only the essential data (i.e., query, feature vectors, ontologies and search results) are transmitted between client and server during the personalization process.

PMSE Server:

Heavy tasks, such as RSVM training and re-ranking of search results, are handled by the PMSE server. The PMSE server’s design addresses two issues: (1) the limited computational power of mobile devices, and (2) minimization of data transmission. PMSE consists of two major activities: 1) re-ranking the search results at the PMSE server, and 2) ontology update and clickthrough collection at a mobile client.

Re-ranking the search results

When a user submits a query on the PMSE client, the query, together with the feature vectors containing the user’s content and location preferences (i.e., ontologies filtered according to the user’s privacy setting), is forwarded to the PMSE server, which in turn obtains the search results from the back-end search engine (i.e., Google). The content and location concepts are extracted from the search results and organized into ontologies to capture the relationships between the concepts. The server performs the ontology extraction for its speed. The feature vectors from the client are then used in RSVM training to obtain a content weight vector and a location weight vector, representing the user’s interests based on the user’s content and location preferences, for the re-ranking. Again, the training process is performed on the server for its speed. The search results are then re-ranked according to the weight vectors obtained from the RSVM training. Finally, the re-ranked results and the extracted ontologies, for the personalization of future queries, are returned to the client.
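
A minimal Java sketch of this final re-ranking step: score each result by a weighted dot product of its content and location feature vectors with the trained weight vectors, then sort descending. Feature values and weights are placeholders, not the output of actual RSVM training.

import java.util.Comparator;
import java.util.List;

/**
 * Re-ranking sketch: linear scoring with content and location weight
 * vectors, then a descending sort. All values are placeholders.
 */
public class ReRank {
    record Result(String url, double[] contentFeat, double[] locationFeat) {}

    static double dot(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += a[i] * b[i];
        return s;
    }

    static void reRank(List<Result> results, double[] wContent, double[] wLocation) {
        results.sort(Comparator.comparingDouble(
            (Result r) -> dot(r.contentFeat(), wContent) + dot(r.locationFeat(), wLocation)
        ).reversed());
    }

    public static void main(String[] args) {
        List<Result> results = new java.util.ArrayList<>(List.of(
            new Result("a.com", new double[]{1, 0}, new double[]{0.2}),
            new Result("b.com", new double[]{0, 1}, new double[]{0.9})));
        reRank(results, new double[]{0.3, 0.6}, new double[]{1.0});
        results.forEach(r -> System.out.println(r.url()));  // b.com first
    }
}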

Ontology update and Clickthrough collection:

The ontologies returned from the PMSE server contain the concept space that models the relationships between the concepts extracted from the search results. They are stored in the ontology database on the client. When the user clicks on a search result, the clickthrough data, together with the associated content and location concepts, are stored in the clickthrough database on the client. Because the clickthroughs are stored on the PMSE clients, the PMSE server does not know the exact set of documents that the user has clicked on. This design allows user privacy to be preserved to a certain degree. If the user is concerned with his/her own privacy, the privacy level can be set to high so that only limited personal information will be included in the feature vectors and passed along to the PMSE server for the personalization. On the other hand, if a user wants more accurate results according to his/her preferences, the privacy level can be set to low so that the PMSE server can use the full feature vectors to maximize the personalization effect.

HARDWARE REQUIREMENT

CPU type : Intel Pentium 4

Clock speed : 3.0 GHz

RAM size : 512 MB

Hard disk capacity : 40 GB

Monitor type : 15-inch color monitor

Keyboard type : Internet keyboard

Mobile : Android mobile

SOFTWARE REQUIREMENT

Operating System : Android

Language : Android SDK 2.3

Documentation : MS Office

REFERENCE:

Kenneth Wai-Ting Leung, Dik Lun Lee and Wang-Chien Lee, “PMSE: A Personalized Mobile Search Engine”, IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2012.