Energy Efficient Multipath Routing Protocol for Mobile ad-hoc Network Using the Fitness Function

ABSTRACT:

A Mobile Ad Hoc Network (MANET) is a collection of wireless mobile nodes that dynamically form a temporary network without relying on any infrastructure or central administration. Energy consumption is one of the major limitations of MANETs: mobile nodes have no permanent power supply and must rely on batteries, which drain quickly as nodes move and change position, thereby reducing network lifetime. The research proposed in this paper addresses this specific problem by applying a fitness function technique to optimize energy consumption in the Ad Hoc On-Demand Multipath Distance Vector (AOMDV) routing protocol. The proposed protocol is called Ad Hoc On-Demand Multipath Distance Vector with the Fitness Function (FF-AOMDV). The fitness function is used to find the optimal path from source to destination, reducing the energy consumed in multipath routing. The performance of the proposed FF-AOMDV protocol was evaluated using Network Simulator Version 2 (NS-2) and compared with AOMDV and Ad Hoc On-Demand Multipath Routing with Life Maximization (AOMR-LM), two of the most popular protocols proposed in this area. The comparison was made in terms of energy consumption, throughput, packet delivery ratio, end-to-end delay, network lifetime and routing overhead ratio, while varying node speed, packet size and simulation time. The results clearly demonstrate that the proposed FF-AOMDV outperforms AOMDV and AOMR-LM on the majority of these performance metrics and parameters.

EXISTING SYSTEM:

  • Sun et al. proposed an energy-entropy multipath routing optimization algorithm for MANET based on a genetic algorithm (EMRGA). The key idea of the protocol is to find the minimal node residual energy of each route during path selection by descending node residual energy, which balances individual nodes' battery power utilization and hence prolongs the entire network's lifetime while reducing its energy variance.
  • Rajaram & Sugesh addressed the issues of energy consumption and source-to-destination path distance in MANET. They proposed a multipath routing protocol based on AOMDV, called Power-Aware Ad Hoc On-Demand Multipath Distance Vector (PAAOMDV), which extends the routing table with the corresponding energy of the mobile nodes. Being a multipath protocol, it can switch routes without additional overhead, delay or packet loss. Simulation results showed that PAAOMDV performs better than the AOMDV routing protocol once the energy-related fields are introduced.

DISADVANTAGES OF EXISTING SYSTEM:

  • Low packet delivery ratio
  • Low throughput
  • High end-to-end delay
  • High energy consumption and short network lifetime

PROPOSED SYSTEM:

  • We propose a new multipath routing protocol, FF-AOMDV, which combines a fitness function with the AOMDV protocol. In a normal scenario, when a route request (RREQ) is broadcast by a source node, more than one route to the destination will be found and data packets will be forwarded through these routes without any knowledge of the routes' quality.
  • By applying the proposed algorithm to the same scenario, route selection becomes totally different. Once the RREQ has been broadcast and the replies received, the source node has three types of information with which to find the shortest, energy-optimized route.

ADVANTAGES OF PROPOSED SYSTEM:

  • High packet delivery ratio
  • High throughput
  • Low end-to-end delay
  • Low energy consumption and long network lifetime

SYSTEM ARCHITECTURE:

MODULES:

  • Simulation Model
  • Fitness Function
  • FF-AOMDV

MODULE DESCRIPTIONS: 

Simulation Model:

In this simulation model, we used Constant Bit Rate (CBR) traffic sources with 36 mobile nodes distributed randomly in a 1500 m × 1500 m network area; the network topology may therefore undergo random change, since the nodes' distribution and movement are random. The transmission range of the nodes was set to 250 m, and the initial energy level of each node was set to 100 joules. Three different scenarios were chosen to see how they affect the performance of the proposed FF-AOMDV protocol; the scenario parameters are summarized in the sketch below. In the first scenario, we varied the packet size (64, 128, 256, 512, 1024 bytes) and kept both the node speed and simulation time fixed at 2.5 meters/second and 50 seconds respectively. All other network parameters are the same for all runs and for all simulated protocols. In the second scenario, we varied the node speed (0, 2.5, 5, 7.5, 10 meters/second) and kept the packet size and simulation time fixed at 256 bytes and 50 seconds respectively. Finally, in the third scenario, we varied the simulation time (10, 20, 30, 40, 50 seconds) and kept both the node speed and packet size fixed at 2.5 meters/second and 256 bytes respectively.
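
To make the three scenarios easier to compare at a glance, the parameter matrix is restated below as a small C# data sketch. This is purely illustrative: the experiments themselves were run in NS-2, and the property names used here (Name, PacketSizeBytes, NodeSpeedMps, SimTimeSec) are invented for readability.

// The three evaluation scenarios described above, encoded as plain data.
var scenarios = new[]
{
    new { Name = "Vary packet size",
          PacketSizeBytes = new[] { 64, 128, 256, 512, 1024 },
          NodeSpeedMps    = new[] { 2.5 },
          SimTimeSec      = new[] { 50 } },
    new { Name = "Vary node speed",
          PacketSizeBytes = new[] { 256 },
          NodeSpeedMps    = new[] { 0.0, 2.5, 5.0, 7.5, 10.0 },
          SimTimeSec      = new[] { 50 } },
    new { Name = "Vary simulation time",
          PacketSizeBytes = new[] { 256 },
          NodeSpeedMps    = new[] { 2.5 },
          SimTimeSec      = new[] { 10, 20, 30, 40, 50 } },
};
foreach (var s in scenarios)
    System.Console.WriteLine(s.Name);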

Fitness Function:

The fitness function is an optimization technique that forms part of many optimization algorithms, such as the genetic algorithm, bee colony algorithm, firefly algorithm and particle swarm optimization algorithm. The fitness function evaluates the factors that matter most in the optimization process; which factors these are depends on the aim of the research. In MANET, the fitness factors are usually energy, distance, delay and bandwidth, which matches the reasons for designing any routing protocol, since all of them aim to make better use of network resources. In this research, the fitness function used is part of the Particle Swarm Optimization (PSO) algorithm; it has previously been used in wireless sensor networks to optimize the alternative route in case the primary route fails. The factors that affect the choice of the optimum route, combined in the sketch after the list below, are:

  • The remaining energy functions for each node
  • The distance functions of the links connecting the neighboring nodes
  • Energy consumption of the nodes
  • Communication delay of the nodes
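
A minimal C# sketch of how these four factors could be folded into a single route score is given below. The weights and the exact combination formula are assumptions made for illustration only; the paper derives its fitness function from the PSO algorithm rather than from this ad hoc weighting.

using System;
using System.Linq;

class Node
{
    public double ResidualEnergy;   // joules remaining in the node's battery
    public double EnergyConsumed;   // joules this node has spent so far
    public double DelaySeconds;     // communication delay contributed by this node
}

static class Fitness
{
    // Higher score = more residual energy, shorter route, less consumption, less delay.
    // The weights are illustrative assumptions, not values from the paper.
    public static double Score(Node[] route, double[] linkDistances,
                               double wEnergy = 0.4, double wDistance = 0.3,
                               double wConsumed = 0.2, double wDelay = 0.1)
    {
        double minResidual   = route.Min(n => n.ResidualEnergy); // bottleneck energy
        double totalDistance = linkDistances.Sum();
        double totalConsumed = route.Sum(n => n.EnergyConsumed);
        double totalDelay    = route.Sum(n => n.DelaySeconds);

        // Cost terms are inverted so that larger is better; +1 avoids division by zero.
        return wEnergy * minResidual
             + wDistance / (1 + totalDistance)
             + wConsumed / (1 + totalConsumed)
             + wDelay    / (1 + totalDelay);
    }
}

Under this scoring, a route with more bottleneck energy, shorter total distance, lower consumption and lower delay scores higher, so the source node would prefer it.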

FF-AOMDV:

In a normal scenario, when a RREQ is broadcast by a source node, more than one route to the destination will be found and data packets will be forwarded through these routes without any knowledge of the routes' quality. By applying the proposed algorithm to the same scenario, route selection becomes totally different. Once the RREQ has been broadcast and the replies received, the source node has three types of information with which to find the shortest, energy-optimized route. This information includes:

  • The energy level of each node in the network
  • The distance of every route
  • The energy consumed in the process of route discovery

The route that consumes the least energy could be (a) the route with the shortest distance, (b) the route with the highest energy level, or (c) both. The source node then sends the data packets via the route with the highest energy level, after which it calculates that route's energy consumption. Like other multipath routing protocols, this protocol also initiates a new route discovery process when all routes to the destination have failed. When only the selected route fails, the source node instead selects an alternative route from its routing table: the shortest remaining route with minimum energy consumption. The optimal route, having less distance to the destination, will consume less energy. A sketch of this selection and fallback logic follows.
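
The following C# sketch illustrates the selection and fallback behaviour described above under stated assumptions: the Route type, its fields and the tie-breaking rule are illustrative stand-ins, not the protocol's actual data structures.

using System.Collections.Generic;
using System.Linq;

class Route
{
    public List<int> Hops = new();    // node ids from source to destination
    public double MinResidualEnergy;  // lowest battery level along the route (joules)
    public double Distance;           // total route length in metres
    public bool Alive = true;         // false once a link break is reported
}

class RouteSelector
{
    readonly List<Route> routingTable;
    public RouteSelector(List<Route> discoveredRoutes) => routingTable = discoveredRoutes;

    // Prefer the live route with the highest residual energy;
    // use the shortest distance as the tie-breaker.
    public Route SelectRoute() =>
        routingTable.Where(r => r.Alive)
                    .OrderByDescending(r => r.MinResidualEnergy)
                    .ThenBy(r => r.Distance)
                    .FirstOrDefault();

    // When the selected route fails, mark it dead and pick an alternative.
    // A null result means every route has failed, so a new route discovery
    // (RREQ broadcast) would be triggered, as in AOMDV.
    public Route OnRouteFailure(Route failed)
    {
        failed.Alive = false;
        return SelectRoute();
    }
}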

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • Ram : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows 7.
  • Coding Language : C#.NET
  • Tool : Visual Studio 2008

REFERENCE:

Mueen Uddin, Aqeel Taha, Raed Alsaqour, Tanzila Saba, “Energy Efficient Multipath Routing Protocol for Mobile ad-hoc Network Using the Fitness Function”, IEEE Access, 2017.

Generating Query Facets using Knowledge Bases

ABSTRACT:

A query facet is a significant list of information nuggets that explains an underlying aspect of a query. Existing algorithms mine the facets of a query by extracting frequent lists contained in the top search results. The coverage of facets and facet items mined by such methods can be limited, because only a small number of search results are used. To solve this problem, we propose mining query facets using knowledge bases, which contain high-quality structured data. Specifically, we first generate facets based on the properties of the entities that are contained in Freebase and correspond to the query. Second, we mine initial query facets from search results and then expand them by finding similar entities in Freebase. Experimental results show that our proposed method can significantly improve the coverage of facet items over the state-of-the-art algorithms.

EXISTING SYSTEM:

  • Existing query facet mining algorithms rely mainly on the top search results returned by search engines.
  • Dou et al. first introduced the concept of query dimensions, which is the same concept as the query facets discussed in this paper. They proposed QDMiner, a system that automatically mines query facets by aggregating frequent lists contained in the results. The lists are extracted from HTML tags (like <select> and <table>), text patterns, and repeated content blocks contained in web pages.
  • Kong et al. proposed two supervised methods, namely QF-I and QF-J, to mine query facets from the results.
  • In all these existing solutions, facet items are extracted from the top search results of a search engine (e.g., the top 100 search results from Bing.com); more specifically, from the lists contained in those results.

DISADVANTAGES OF EXISTING SYSTEM:

  • Many users are not satisfied with this kind of conventional search result page.
  • Browsing the results usually takes a lot of time and inconveniences users.
  • The coverage of facets mined by such methods can be limited, because useful words or phrases that do not appear in any list within the search results used have no opportunity to be mined.

PROPOSED SYSTEM:

  • We propose leveraging a knowledge base as a complementary data source to improve the quality of query facets. Knowledge bases contain high-quality structured information, such as entities and their properties, and are especially useful when the query is related to an entity.
  • We propose using both knowledge bases and search results to mine query facets. We do not abandon search results because they reflect user intent and provide abundant context for facet generation and expansion.
  • Our goal is to improve the recall of facets and facet items by utilizing the entities and properties contained in knowledge bases while, at the same time, ensuring that the accuracy of the facet items is not harmed too much. Our approach consists of two methods: facet generation and facet expansion.
  • In facet generation, we directly use the properties of the entities corresponding to a query as its facet candidates. In facet expansion, we expand initial facets mined by traditional algorithms such as QDMiner to find more similar items contained in a knowledge base such as Freebase. The facets constructed by the two methods are further merged and ranked to generate the final query facets, as in the sketch below.
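
The C# sketch below illustrates the merge-and-rank step under stated assumptions. The two lookup methods are hypothetical placeholders for the knowledge-base and search-result pipelines, and ranking facets by size is a deliberate simplification of the paper's ranking model.

using System.Collections.Generic;
using System.Linq;

class FacetMiner
{
    // Placeholder: facets built from the properties of the query's matching
    // entity in the knowledge base (facet generation).
    Dictionary<string, List<string>> KnowledgeBaseFacets(string query) => new();

    // Placeholder: initial facets mined from frequent lists in the top search
    // results (e.g. by a QDMiner-style extractor), already expanded with
    // similar entities from the knowledge base (facet expansion).
    Dictionary<string, List<string>> SearchResultFacets(string query) => new();

    public List<KeyValuePair<string, List<string>>> MineFacets(string query)
    {
        var merged = new Dictionary<string, List<string>>();
        foreach (var source in new[] { KnowledgeBaseFacets(query), SearchResultFacets(query) })
            foreach (var (facet, items) in source)
            {
                if (!merged.TryGetValue(facet, out var all))
                    merged[facet] = all = new List<string>();
                all.AddRange(items.Except(all)); // union of facet items, deduplicated
            }

        // Rank facets by item count as a crude proxy for importance.
        return merged.OrderByDescending(f => f.Value.Count).ToList();
    }
}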

ADVANTAGES OF PROPOSED SYSTEM:

  • Experimental results show that our proposed method, QDMKB, significantly outperforms the state-of-the-art methods QDMiner, QF-I and QF-J.
  • It yields significantly higher recall of facet items.

SYSTEM ARCHITECTURE:

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • Ram : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows 7.
  • Coding Language : .NET, C#.NET
  • Tool : Visual Studio 2008
  • Database : SQL SERVER 2005

REFERENCE:

Zhengbao Jiang, Zhicheng Dou, Member, IEEE, and Ji-Rong Wen, Senior Member, IEEE, “Generating Query Facets using Knowledge Bases”, IEEE Transactions on Knowledge and Data Engineering, 2017.

Filtering out Infrequent Behavior from Business Process Event Logs

ABSTRACT:

In the era of “big data”, one of the key challenges is to analyze large amounts of collected data in meaningful and scalable ways. The field of process mining is concerned with the analysis of data of a particular nature, namely data that results from the execution of business processes. The analysis of such data can be negatively influenced by the presence of outliers, which reflect infrequent behavior, or “noise”. In process discovery, where the objective is to automatically extract a process model from the data, noise may manifest as rarely travelled pathways that clutter the process model. This paper presents an automated technique for removing infrequent behavior from event logs. The proposed technique is evaluated in detail, and it is shown that applying it in conjunction with certain existing process discovery algorithms significantly improves the quality of the discovered process models, and that it scales well to large datasets.

EXISTING SYSTEM:

  • The literature in the area of infrequent event log filtering is very scarce, offering simplistic techniques or approaches that require a reference process model as input to the filtering.
  • The Filter Log using Prefix-Closed Language (PCL) plugin removes events from traces to obtain a log that can be expressed via a prefix-closed language.
  • Other log filtering plugins are available in ProM, but they do not specifically deal with the removal of infrequent behavior.
  • In the literature, noise filtering of process event logs is only addressed by Wang et al. The authors propose an approach that uses a reference process model to repair a log whose events are affected by inconsistent labels, i.e. labels that do not match the expected behavior of the reference model. However, this approach requires the availability of a reference model.

DISADVANTAGES OF EXISTING SYSTEM:

  • Manually removing noise is a challenging and time-consuming task, with no guarantee on the effectiveness of the result, especially in the context of large logs exhibiting complex process behavior.
  • Given that process events are not repeated often within a trace, their relative frequency would be very low, leading to almost all events of a trace being considered outliers.

PROPOSED SYSTEM:

  • This paper deals with the challenge of discovering high-quality process models in the presence of noise in event logs, by contributing an automated technique for systematically filtering out infrequent behavior from such logs. Our filtering technique first builds an abstraction of the process behavior recorded in the log as an automaton (a directed graph).
  • This automaton captures the directly-follows dependencies between event labels in the log. Infrequent transitions are then removed from the automaton, and the original log is replayed on the reduced automaton to identify events that no longer fit; these events are removed from the log. The technique aims to remove the maximum number of infrequent transitions from the automaton while minimizing the number of events removed from the log. A simplified sketch of this procedure follows.
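
The following C# sketch captures the basic shape of the procedure under simplifying assumptions: transitions are pruned with a single frequency threshold, and replay greedily skips non-fitting events, whereas the paper solves a more careful optimization that minimizes the number of removed events.

using System;
using System.Collections.Generic;
using System.Linq;

class LogFilter
{
    // traces: each trace is a sequence of event labels. Assumes a non-empty log.
    public static List<List<string>> Filter(List<List<string>> traces, double keepRatio = 0.05)
    {
        // 1. Count directly-follows transitions over all traces (the automaton arcs).
        var freq = new Dictionary<(string, string), int>();
        foreach (var trace in traces)
            for (int i = 0; i + 1 < trace.Count; i++)
            {
                var arc = (trace[i], trace[i + 1]);
                freq[arc] = freq.TryGetValue(arc, out var c) ? c + 1 : 1;
            }

        // 2. Keep only transitions above a frequency threshold (illustrative heuristic).
        int threshold = (int)Math.Ceiling(freq.Values.Max() * keepRatio);
        var kept = new HashSet<(string, string)>(
            freq.Where(kv => kv.Value >= threshold).Select(kv => kv.Key));

        // 3. Replay each trace on the reduced automaton, dropping events whose
        //    incoming arc was removed (individual events, not whole traces).
        var filtered = new List<List<string>>();
        foreach (var trace in traces)
        {
            var clean = new List<string>();
            foreach (var evt in trace)
                if (clean.Count == 0 || kept.Contains((clean[^1], evt)))
                    clean.Add(evt);
            filtered.Add(clean);
        }
        return filtered;
    }
}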

ADVANTAGES OF PROPOSED SYSTEM:

  • To the best of our knowledge, this paper proposes the first effective technique for filtering out noise from process event logs.
  • The novelty of the technique rests upon the choice of modeling the recorded process behavior as an automaton for the purpose of infrequent log filtering.
  • This approach enables the detection of infrequent process behavior at a fine-grained level, which leads to the removal of individual events rather than entire traces (i.e. sequences of events) from the log, hence reducing the impact on the overall process behavior captured in the log.

SYSTEM ARCHITECTURE:

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • Ram : 1GB

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows 7.
  • Coding Language : .NET, C#.NET
  • Tool : Visual Studio 2008
  • Database : SQL SERVER 2005

REFERENCE:

Raffaele Conforti, Marcello La Rosa and Arthur H.M. ter Hofstede, “Filtering out Infrequent Behavior from Business Process Event Logs”, IEEE Transactions on Knowledge and Data Engineering, 2017.

Keyword Search with Access Control over Encrypted Cloud Data

ABSTRACT:

In this paper, we study the problem of keyword search with access control over encrypted data in cloud computing. We first propose a scalable framework in which a user can use his attribute values and a search query to locally derive a search capability, and a file can be retrieved only if its keywords match the query and the user's attribute values pass the policy check. Using this framework, we propose a novel scheme called KSAC, which enables Keyword Search with Access Control over encrypted data. KSAC utilizes a recent cryptographic primitive called HPE to enforce fine-grained access control and perform multi-field query search. Meanwhile, it supports the derivation of search capabilities, and achieves efficient access policy updates as well as keyword updates without compromising data privacy. To further enhance privacy, KSAC injects noise into queries to hide users' access privileges. Intensive evaluations on a real-world dataset were conducted to validate the applicability of the proposed scheme and demonstrate its protection of users' access privileges.

EXISTING SYSTEM:

  • Golle et al. considered conjunctive keyword search over encrypted data.
  • Shi et al. realized multi-dimensional range queries over encrypted data, and Shen et al. investigated encrypted search with preferences by utilizing Lagrange polynomials and secure inner-product computation.
  • Li et al. considered authorized private keyword search. It only achieved LTA-level authorization, which is far coarser than user-level access control, and omitted protection of users' access privacy.
  • Based on the uni-gram, Fu et al. proposed an efficient multi-keyword fuzzy ranked search scheme with improved accuracy.
  • Fu et al. also found that previous keyword-based search schemes ignored semantic information. They then developed a semantic search scheme based on the concept hierarchy and the semantic relationships between concepts in the encrypted datasets.

DISADVANTAGES OF EXISTING SYSTEM:

  • Most existing searchable encryption (SE) schemes assume that every user can access all the shared files.
  • This assumption does not hold in the cloud environment, where users are actually granted different access permissions according to the access-control policy determined by data owners.
  • Many of the proposed SE schemes require a role, such as the data owner, to handle the derivation of the search capability for a user's keywords of interest every time before a search. This requirement places a heavy burden on data owners and significantly compromises system scalability; the weakness should be mitigated by allowing users to locally derive the search capability.

PROPOSED SYSTEM:

  • First, we propose a scalable framework that integrates multi-field keyword search with fine-grained access control. In the framework, every user authenticated by an authority obtains a set of keys, called a credential, to represent his attribute values. Each file stored in the cloud is attached with an encrypted index that labels the keywords and specifies the access policy.
  • Every user can use his credential and a search query to locally generate a search capability, and submit it to the cloud server, which then performs the search and access control.
  • Finally, the user receives the data files that match his search query and allow his access.
  • Second, to enable such a framework, we make novel use of Hierarchical Predicate Encryption (HPE) to realize the derivation of search capabilities. Based on HPE, we propose our scheme, named KSAC. A structural sketch of this flow is given below.
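
As a structural C# sketch only, the search flow can be pictured as below. Every cryptographic detail is stubbed out: IHpe, DeriveCapability and Match are hypothetical placeholders standing in for the scheme's HPE primitives, not a real HPE library API.

using System.Collections.Generic;
using System.Linq;

// Hypothetical stand-in for the HPE primitives; not a real library interface.
interface IHpe
{
    byte[] DeriveCapability(byte[] credential, string[] queryKeywords); // computed locally by the user
    bool Match(byte[] encryptedIndex, byte[] capability);               // evaluated by the cloud server
}

class CloudServer
{
    readonly IHpe hpe;
    readonly List<(byte[] Index, byte[] CipherText)> files = new();
    public CloudServer(IHpe hpe) => this.hpe = hpe;

    public void Store(byte[] encryptedIndex, byte[] cipherText) =>
        files.Add((encryptedIndex, cipherText));

    // A file is returned only if its keywords match the query AND the user's
    // attributes satisfy the embedded access policy; both checks are folded
    // into the single Match call in this sketch.
    public IEnumerable<byte[]> Search(byte[] capability) =>
        files.Where(f => hpe.Match(f.Index, capability)).Select(f => f.CipherText);
}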

ADVANTAGES OF PROPOSED SYSTEM:

  • This design addresses the first challenge by fully leveraging the computation power of cloud server.
  • It also solves the second challenge by dispersing the computation burden of capability generation to the users in the system.
  • It enables the service of both the keyword search and access control over multiple fields, and supports efficient update of access policy and keywords. KSAC also introduces some random values to enhance the protection of user’s access privacy.
  • To the best of our knowledge, KSAC is the first solution to simultaneously achieve the above goals.
  • Finally, we fully implement KSAC and conduct extensive evaluations to demonstrate its applicability.

SYSTEM ARCHITECTURE:

MODULES:

  • Users
  • Cloud Service Providers(CSP)
  • Third Party Auditor(TPA)
  • Dynamic Hash Table

MODULE DESCRIPTIONS: 

Users:

A user, who stores a great quantity of data files in the cloud, can be an individual or an organization. Cloud users (data owners) outsource their encrypted data to clouds. By outsourcing their data to the CSP, users are relieved of the burden of storage and computation while enjoying the storage and maintenance service.

Cloud Service Provider:

A cloud service provider is a third-party company offering a cloud-based platform, infrastructure, application or storage services. Much as a homeowner pays for a utility such as electricity or gas, companies typically pay only for the amount of cloud services they use, as business demands require.

Besides the pay-per-use model, cloud service providers give companies a wide range of benefits. Businesses can take advantage of scalability and flexibility by not being limited to the physical constraints of on-premises servers, the reliability of multiple data centers with multiple redundancies, customization by configuring servers to their preferences, and responsive load balancing that can easily adapt to changing demands. Businesses should, however, also evaluate the security considerations of storing information in the cloud, to ensure that industry-recommended access and compliance management configurations and practices are enacted and met. In this system, the cloud service provider manages and coordinates a number of cloud servers to offer scalable and on-demand data outsourcing services to users.

Third Party Auditor (TPA):

The TPA can verify the reliability of the cloud storage service (CSS) credibly and dependably on behalf of the users upon request. The TPA is involved to check the integrity of the users' data stored in the cloud. However, in the whole verification process, the TPA is not expected to be able to learn the actual content of the users' data, for privacy protection. We assume the TPA is credible but curious: it performs the audit reliably, but may be curious about the users' data.

Dynamic Hash Table (DHT):

A hash table is a dynamic-set data structure. It has three basic functions: to store data (SET/INSERT), to retrieve data (SEARCH/RETRIEVE), and to remove data that has previously been stored in the set (DELETE). In this way it is no different from other dynamic-set data structures such as linked lists or trees.
The interesting property of hash tables is their performance with respect to the store/retrieve/remove operations: hash tables offer average constant time for any combination of the basic operations. This makes them extremely useful in many scenarios where quickly searching for an element is required, especially if multiple queries must be performed. A small usage example follows.
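
The C# snippet below demonstrates the three basic operations using .NET's built-in hash table, Dictionary<TKey, TValue>; the key and value strings are arbitrary examples.

using System;
using System.Collections.Generic;

class HashTableDemo
{
    static void Main()
    {
        var table = new Dictionary<string, string>();

        table["block42"] = "0xA3F1";                   // SET/INSERT: average O(1)

        if (table.TryGetValue("block42", out var tag)) // SEARCH/RETRIEVE: average O(1)
            Console.WriteLine($"tag = {tag}");

        table.Remove("block42");                       // DELETE: average O(1)
    }
}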

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • Ram : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows 7.
  • Coding Language : .NET, C#.NET
  • Tool : Visual Studio 2008
  • Database : SQL SERVER 2005

REFERENCE:

Zhirong Shen, Member, IEEE, Jiwu Shu, Member, IEEE, and Wei Xue, Member, IEEE, “Keyword Search with Access Control over Encrypted Cloud Data”, IEEE Sensors Journal, 2017.