Detecting Malicious Social Bots Based on Clickstream Sequences

ABSTRACT:

With the significant increase in the volume, velocity, and variety of user data (e.g., user-generated data) in online social networks, there have been attempts to design new ways of collecting and analyzing such big data. For example, social bots have been used to perform automated analytical services and provide users with improved quality of service. However, malicious social bots have also been used to disseminate false information (e.g., fake news), and this can result in real-world consequences. Therefore, detecting and removing malicious social bots in online social networks is crucial. Most existing detection methods for malicious social bots analyze the quantitative features of their behavior. These features are easily imitated by social bots, resulting in low detection accuracy. A novel method of detecting malicious social bots, combining feature selection based on the transition probability of clickstream sequences with semi-supervised clustering, is presented in this paper. This method analyzes not only the transition probability of user behavior clickstreams but also the time feature of behavior. Findings from our experiments on real online social network platforms demonstrate that the detection accuracy for different types of malicious social bots achieved by the proposed method increases by an average of 12.8%, in comparison to a detection method based on quantitative analysis of user behavior.
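The transition-probability feature at the heart of this method can be illustrated with a short sketch (Python here for brevity). The action names and the first-order treatment of the clickstream are illustrative assumptions, not the paper's exact feature set.

```python
from collections import defaultdict

def transition_probabilities(clickstream):
    """Estimate first-order transition probabilities between click actions."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(clickstream, clickstream[1:]):
        counts[prev][nxt] += 1
    probs = {}
    for prev, nxts in counts.items():
        total = sum(nxts.values())
        probs[prev] = {a: c / total for a, c in nxts.items()}
    return probs

# A human-like session mixes transitions; a bot often repeats one.
human = ["login", "feed", "like", "feed", "comment", "feed", "logout"]
print(transition_probabilities(human)["feed"])
```

A bot that loops `feed -> like -> feed` would concentrate nearly all probability mass on a single transition, which is the kind of signal this feature captures.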

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

 

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

 

SOFTWARE REQUIREMENTS:

 

  • Operating system : Windows 7.
  • Coding Language :
  • Tool : NetBeans 7.2.1
  • Database : MySQL

 

REFERENCE:

Peining Shi, Zhiyong Zhang, Senior Member, IEEE, and Kim-Kwang Raymond Choo, Senior Member, IEEE, “Detecting Malicious Social Bots Based on Clickstream Sequences”, IEEE Access, 2019.

Delegated Authorization Framework for EHR Services using Attribute Based Encryption

ABSTRACT:

Medical organizations find it challenging to adopt cloud-based Electronic Health Record (EHR) services due to the risk of data breaches and the resulting compromise of patient data. Existing authorization models follow a patient-centric approach for EHR management, where the responsibility of authorizing data access is handled at the patient's end. This creates a significant overhead for the patient, who must authorize every access of their health record. This is not practical given that multiple personnel are typically involved in providing care and that the patient may not always be in a state to provide this authorization. Hence, there is a need to develop a proper authorization delegation mechanism for safe, secure, and easy-to-use cloud-based EHR service management. We present a novel, centralized, attribute-based authorization mechanism that uses Attribute Based Encryption (ABE) and allows for delegated secure access to patient records. This mechanism transfers the service management overhead from the patient to the medical organization and allows easy delegation of cloud-based EHR access authority to medical providers.
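Full ABE relies on pairing-based cryptography, but the delegation idea — the organization, not the patient, encodes who may access a record as an attribute policy — can be sketched as a plain policy-evaluation routine. The attribute names and the policy shape below are hypothetical, and no encryption is performed here.

```python
def satisfies(policy, attributes):
    """Evaluate a simple AND/OR attribute policy tree against a user's
    attribute set. A policy is either a leaf attribute string or a tuple
    ("AND"|"OR", [child policies])."""
    if isinstance(policy, str):
        return policy in attributes
    op, children = policy
    results = [satisfies(c, attributes) for c in children]
    return all(results) if op == "AND" else any(results)

# Hypothetical policy set by the medical organization, not the patient.
policy = ("AND", ["role:physician", ("OR", ["dept:cardiology", "dept:er"])])
print(satisfies(policy, {"role:physician", "dept:er"}))  # True
```

In an ABE scheme, a ciphertext encrypted under such a policy is only decryptable by keys whose attributes satisfy it, which is what removes the patient from every individual access decision.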

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

 

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

 

SOFTWARE REQUIREMENTS:

 

  • Operating system : Windows 7.
  • Coding Language :
  • Tool : NetBeans 7.2.1
  • Database : MySQL

 

REFERENCE:

Maithilee Joshi, Karuna P. Joshi and Tim Finin, “Delegated Authorization Framework for EHR Services using Attribute Based Encryption”, IEEE Transactions on Services Computing, 2019.

Dating with Scambots: Understanding the Ecosystem of Fraudulent Dating Applications

ABSTRACT:

In this work, we focus on a new and as-yet-uncovered way for malicious apps to gain profit. They claim to be dating apps; however, their sole purpose is to lure users into purchasing premium/VIP services to start conversations with other (likely fake female) accounts in the app. We call these apps fraudulent dating apps.

This paper performs a systematic study to understand the whole ecosystem of fraudulent dating apps. Specifically, we propose a three-phase method to detect them and subsequently comprehend their characteristics by analyzing the existing account profiles. Our observation reveals that most of the accounts are not managed by real persons, but by chatbots based on predefined conversation templates. We also analyze the business model of these apps and reveal that multiple parties are involved in the ecosystem, including producers who develop apps, publishers who publish apps to gain profit, and the distribution network that is responsible for distributing apps to end users. Finally, we analyze their impact on users (i.e., victims) and estimate the overall revenue. Our work is the first systematic study of fraudulent dating apps, and the results demonstrate the urgent need for a solution to protect users.
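One heuristic consistent with the template-driven chatbot observation is to measure how often a set of messages are near-duplicates of one another. This sketch uses the standard-library `difflib` and an assumed 0.9 similarity threshold; it is an illustration of the idea, not the paper's actual three-phase detector.

```python
from difflib import SequenceMatcher

def templated_ratio(messages, threshold=0.9):
    """Fraction of message pairs that are near-duplicates, hinting at
    template-driven chatbots rather than real users."""
    pairs = [(a, b) for i, a in enumerate(messages) for b in messages[i + 1:]]
    if not pairs:
        return 0.0
    near = sum(SequenceMatcher(None, a, b).ratio() >= threshold
               for a, b in pairs)
    return near / len(pairs)

# Identical greetings across "different" accounts score close to 1.0.
print(templated_ratio(["hi dear, buy vip", "hi dear, buy vip", "hey there"]))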

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

 

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

 

SOFTWARE REQUIREMENTS:

 

  • Operating system : Windows 7.
  • Coding Language :
  • Tool : NetBeans 7.2.1
  • Database : MySQL

 

REFERENCE:

Yangyu Hu, Haoyu Wang, Member, IEEE, Yajin Zhou, Member, IEEE, Yao Guo, Member, IEEE, Li Li, Member, IEEE, Bingxuan Luo and Fangren Xu, “Dating with Scambots: Understanding the Ecosystem of Fraudulent Dating Applications”, IEEE Transactions on Dependable and Secure Computing, 2019.

P-MOD: Secure Privilege-Based Multilevel Organizational Data-Sharing in Cloud Computing

ABSTRACT:

Cloud computing has changed the way enterprises store, access, and share data. Big data sets are constantly being uploaded to the cloud and shared within a hierarchy of many different individuals with different access privileges. With more data storage needs moving to the cloud, finding a secure and efficient data access structure has become a major research issue. In this paper, a Privilege-based Multilevel Organizational Data-sharing scheme (P-MOD) is proposed that incorporates a privilege-based access structure into an attribute-based encryption mechanism to handle the management and sharing of big data sets. Our proposed privilege-based access structure helps reduce the complexity of defining hierarchies as the number of users grows, which makes managing healthcare records using mobile healthcare devices feasible. It can also facilitate organizations in applying big data analytics to understand populations in a holistic way. Security analysis shows that P-MOD is secure against the adaptive chosen-plaintext attack, assuming the DBDH assumption holds. Comprehensive performance and simulation analyses using the real U.S. Census Income dataset demonstrate that P-MOD is more efficient in computational complexity and storage space than existing schemes.
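Setting the encryption aside, the privilege-based access structure can be sketched as data partitioned by sensitivity level, with a user's single privilege level determining every partition they may read. The level numbering (1 = most privileged) and the field names are assumptions for illustration.

```python
def accessible_partitions(user_level, partitions):
    """Return the data partitions a user may read: their own privilege
    level and every less-sensitive level below it (1 = most privileged
    in this sketch)."""
    return {lvl: data for lvl, data in partitions.items()
            if lvl >= user_level}

# Hypothetical partitioning of a record by sensitivity.
record = {1: "diagnoses", 2: "appointments", 3: "billing"}
print(accessible_partitions(2, record))  # {2: 'appointments', 3: 'billing'}
```

The point of the scheme is that this hierarchy is enforced cryptographically by the attribute-based encryption, so a single key grants exactly this set of partitions without per-user hierarchy definitions.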

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows 7.
  • Coding Language :
  • Tool : NetBeans 7.2.1
  • Database : MySQL

REFERENCE:

Ehab Zaghloul, Kai Zhou and Jian Ren, “P-MOD: Secure Privilege-Based Multilevel Organizational Data-Sharing in Cloud Computing”, IEEE Transactions on Big Data, 2019.

PersoNet: Friend Recommendation System Based on Big-Five Personality Traits and Hybrid Filtering

ABSTRACT:

A friend recommendation system (FRS) is an essential part of any social network system. With the popularity of social network sites, many FRSs have been proposed in the past few years. However, most of them are homophily-based systems; homophily is the propensity to associate and bond with similar others. In other words, these systems recommend as friends people with whom you share common features. A homophily-based FRS is accurate when the common feature is a physical or social feature, such as age, race, location, job, or lifestyle. However, this is not the case with personality types: having a given personality type does not necessarily mean that you are compatible with people who have the same personality type. Therefore, in this paper, we present and evaluate an FRS based on the big-five personality traits model and hybrid filtering, in which the friend recommendation process is based on personality traits and users' harmony ratings. To validate the proposed system's accuracy, a personality-based social network site named PersoNet that uses the proposed FRS is implemented. Users' rating results show that PersoNet performs better than a collaborative filtering (CF)-based FRS in terms of precision and recall.
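A minimal sketch of one way to hybridize the two signals — trait similarity and an observed harmony rating — is a weighted blend. The 0.5 weight, the cosine similarity choice, and the five-dimensional trait vectors are assumptions; the paper's exact combination may differ.

```python
import math

def cosine(u, v):
    """Cosine similarity between two trait vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def hybrid_score(traits_u, traits_v, harmony, alpha=0.5):
    """Blend Big-Five trait similarity with an observed harmony rating
    in [0, 1]. alpha controls the content-based vs collaborative mix."""
    return alpha * cosine(traits_u, traits_v) + (1 - alpha) * harmony

# Two users with identical Big-Five profiles but only moderate harmony.
print(hybrid_score([3, 4, 2, 5, 1], [3, 4, 2, 5, 1], harmony=0.6))
```

Because the harmony term comes from other users' ratings, two people with identical traits but poor observed compatibility are not blindly recommended — which is exactly the weakness of pure homophily the abstract points out.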

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows 7.
  • Coding Language :
  • Tool : NetBeans 7.2.1
  • Database : MySQL

REFERENCE:

Huansheng Ning, Senior Member, IEEE, Sahraoui Dhelim, and Nyothiri Aung, “PersoNet: Friend Recommendation System Based on Big-Five Personality Traits and Hybrid Filtering”, IEEE Transactions on Computational Social Systems, Volume 6, Issue 3, June 2019.

Online Public Shaming on Twitter: Detection, Analysis, and Mitigation

ABSTRACT:

Public shaming in online social networks and related online public forums like Twitter has been increasing in recent years. These events are known to have a devastating impact on the victim's social, political, and financial life. Notwithstanding its known ill effects, little has been done by popular online social media to remedy this, often with the excuse of the large volume and diversity of such comments and, therefore, the infeasible number of human moderators required to achieve the task. In this paper, we automate the task of public shaming detection on Twitter from the perspective of victims and explore primarily two aspects, namely, events and shamers. Shaming tweets are categorized into six types: abusive, comparison, passing judgment, religious/ethnic, sarcasm/joke, and whataboutery, and each tweet is classified into one of these types or as nonshaming. It is observed that, out of all the participating users who post comments in a particular shaming event, the majority of them are likely to shame the victim. Interestingly, it is also the shamers whose follower counts increase faster than those of the nonshamers on Twitter. Finally, based on the categorization and classification of shaming tweets, a web application called BlockShame has been designed and deployed for on-the-fly muting/blocking of shamers attacking a victim on Twitter.
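The six-way categorization can be illustrated with a toy lexicon-based classifier. The cue phrases below are invented for illustration; the paper's system is trained statistically, and only three of the six categories are stubbed out here.

```python
def classify_tweet(text, lexicons):
    """Return the first shaming category whose cue phrases appear in the
    tweet, else 'nonshaming'. Purely illustrative of the label scheme."""
    lowered = text.lower()
    for category, cues in lexicons.items():
        if any(cue in lowered for cue in cues):
            return category
    return "nonshaming"

# Hypothetical cue phrases; a real system would learn these features.
LEXICONS = {
    "abusive": {"idiot", "pathetic"},
    "passing judgment": {"should be ashamed", "disgrace"},
    "sarcasm/joke": {"lol sure", "yeah right"},
}
print(classify_tweet("You should be ashamed of this!", LEXICONS))
```

Any tweet matching no category falls through to `nonshaming`, mirroring the seven-way (six types plus nonshaming) decision described above.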

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows 7.
  • Coding Language :
  • Tool : NetBeans 7.2.1
  • Database : MySQL

REFERENCE:

Rajesh Basak, Shamik Sural, Senior Member, IEEE, Niloy Ganguly, and Soumya K. Ghosh, Member, IEEE, “Online Public Shaming on Twitter: Detection, Analysis, and Mitigation”, IEEE Transactions on Computational Social Systems, Volume 6, Issue 2, April 2019.

Normalization of Duplicate Records from Multiple Sources

ABSTRACT:

Data consolidation is a challenging issue in data integration. The usefulness of data increases when it is linked and fused with other data from numerous (Web) sources. The promise of Big Data hinges upon addressing several big data integration challenges, such as record linkage at scale, real-time data fusion, and integrating the Deep Web. Although much work has been conducted on these problems, there is limited work on creating a uniform, standard record from a group of records corresponding to the same real-world entity. We refer to this task as record normalization. Such a record representation, coined the normalized record, is important for both front-end and back-end applications. In this paper, we formalize the record normalization problem, and present an in-depth analysis of normalization granularity levels (e.g., record, field, and value-component) and of normalization forms (e.g., typical versus complete). We propose a comprehensive framework for computing the normalized record. The proposed framework includes a suite of record normalization methods, from naive ones, which use only the information gathered from the records themselves, to complex strategies, which globally mine a group of duplicate records before selecting a value for an attribute of a normalized record. We conducted extensive empirical studies with all the proposed methods. We indicate the weaknesses and strengths of each of them and recommend the ones to be used in practice.
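One of the simpler field-level strategies described above — mining the duplicate group itself and picking a value per attribute — can be sketched as majority voting. The sample records are invented, and this ignores the value-component granularity the paper also analyzes.

```python
from collections import Counter

def normalize_records(duplicates):
    """Build a normalized record by majority vote per field across a group
    of duplicate records; missing/empty values do not vote."""
    fields = set().union(*(r.keys() for r in duplicates))
    normalized = {}
    for f in fields:
        values = [r[f] for r in duplicates if r.get(f)]
        normalized[f] = Counter(values).most_common(1)[0][0] if values else None
    return normalized

# Three duplicates of the same publication from different sources.
dups = [{"title": "DB Systems", "year": "2001"},
        {"title": "Database Systems", "year": "2001"},
        {"title": "Database Systems"}]
print(normalize_records(dups))
```

Frequency is only one selection criterion; the complex strategies in the framework also weigh source reliability and value completeness before choosing.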

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows 7.
  • Coding Language :
  • Tool : NetBeans 7.2.1
  • Database : MySQL

REFERENCE:

Yongquan Dong, Eduard C. Dragut, Member, IEEE, and Weiyi Meng, Senior Member, IEEE, “Normalization of Duplicate Records from Multiple Sources”, IEEE Transactions on Knowledge and Data Engineering, Volume 31, Issue 4, April 2019.

Minimizing Influence of Rumors by Blockers on Social Networks: Algorithms and Analysis

ABSTRACT:

Online social networks such as Facebook, Twitter, and WeChat have become major social tools. Users can not only keep in touch with family and friends but also send and share instant information. However, in some practical scenarios, we need to take effective measures to control the spread of negative information, e.g., rumors spreading over the networks. In this paper, we first propose the Minimizing Influence of Rumors (MIR) problem, i.e., selecting a blocker set B with k nodes such that the users' total activation probability by rumor source set S is minimized. We then employ the classical Independent Cascade (IC) model as the information diffusion model. Based on the IC model, we prove that the objective function is monotone decreasing and non-submodular. To address the MIR problem effectively, we propose a two-stage method, Generating Candidate Set & Selecting Blockers (GCSSB), for general networks. Furthermore, we also study the MIR problem on tree networks and propose a dynamic programming algorithm that guarantees the optimal solution. Finally, we evaluate the proposed algorithms by simulations on synthetic and real-life social networks, respectively. Experimental results show that our algorithms are superior to comparative heuristic approaches such as Out-Degree (OD), Betweenness Centrality (BC), and PageRank (PR).
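The IC-model objective — expected activation from source set S when the blocker set B never activates — can be estimated by Monte-Carlo simulation. The uniform edge probability and the toy graph below are simplifying assumptions; this is the evaluation primitive, not the GCSSB selection algorithm itself.

```python
import random

def ic_activation(graph, seeds, blockers, trials=1000, p=0.1, rng=None):
    """Monte-Carlo estimate of expected rumor spread under the Independent
    Cascade model when blocker nodes never activate. Each live edge fires
    once with uniform probability p."""
    rng = rng or random.Random(0)
    total = 0
    for _ in range(trials):
        active = set(seeds) - set(blockers)
        frontier = list(active)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and v not in blockers and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

# Rumor source "s"; blocking "b" cuts one of the two paths to "c".
graph = {"s": ["a", "b"], "a": ["c"], "b": ["c"]}
print(ic_activation(graph, seeds=["s"], blockers=["b"]))
```

A blocker-selection method like GCSSB would call such an estimator (or an analytical bound) for candidate sets B and keep the set that minimizes the spread.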

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows 7.
  • Coding Language :
  • Tool : NetBeans 7.2.1
  • Database : MySQL

REFERENCE:

Ruidong Yan, Deying Li, Weili Wu, Ding-Zhu Du and Yongcai W, “Minimizing Influence of Rumors by Blockers on Social Networks: Algorithms and Analysis”, IEEE Transactions on Network Science and Engineering, 2019.

Memory Leakage-Resilient Dynamic and Verifiable Multi-keyword Ranked Search on Encrypted Smart Body Sensor Network Data

ABSTRACT:

Outsourcing of encrypted smart body sensor network data to the edge or the cloud is now an entrenched practice within organizations, for example to reduce cost and enhance productivity (to some extent), while ensuring data is not accessible to misbehaving cloud service providers. Although there are a large number of searchable symmetric encryption schemes designed to support searching operations on encrypted data, a fully functional and memory leakage-resilient scheme for smart body sensor network data is lacking. This security property is highly desirable, as the resource-sharing environment may be prone to various kinds of memory leakage. In this paper, a memory leakage-resilient dynamic and verifiable multi-keyword ranked search scheme (MLR-DVMRS) on encrypted smart body sensor network data is proposed. As each sensor device has inherent characteristics to identify itself, this property can be used to authenticate the device. The proposed scheme utilizes physically unclonable functions (PUFs) and fuzzy extractors to achieve memory leakage-resilience. Meanwhile, the vector space model, the TF-IDF measure, and order-preserving encryption (OPE) are used to achieve dynamic and multi-keyword ranked search functionalities. A formal security analysis is given to prove the security of MLR-DVMRS. Besides the comprehensive functionalities of MLR-DVMRS, experimental results demonstrate that the efficiency of MLR-DVMRS is superior to MRSE (a multi-keyword ranked search over encrypted cloud data scheme) for large data collections.
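The ranking side of the scheme rests on standard TF-IDF scoring over the vector space model; this plaintext sketch shows the multi-keyword ranking step only. In the actual scheme these scores are computed over encrypted vectors with OPE, and the tokenized sample documents are invented.

```python
import math
from collections import Counter

def tfidf_rank(docs, query_terms):
    """Rank documents (lists of tokens) for a multi-keyword query by
    summed TF-IDF scores, highest first."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))  # document frequency
    scores = []
    for i, d in enumerate(docs):
        tf = Counter(d)
        score = sum((tf[t] / len(d)) * math.log(n / df[t])
                    for t in query_terms if t in tf)
        scores.append((i, score))
    return sorted(scores, key=lambda x: -x[1])

docs = [["heart", "rate", "sensor"],
        ["sleep", "sensor"],
        ["heart", "heart", "pressure"]]
print(tfidf_rank(docs, ["heart"]))
```

The order-preserving encryption in MLR-DVMRS lets the server compare such scores on ciphertexts, returning the top-ranked matches without learning the plaintext terms.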

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS: 

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS: 

  • Operating system : Windows 7.
  • Coding Language :
  • Tool : NetBeans 7.2.1
  • Database : MySQL

REFERENCE:

Lanxiang Chen, Zhenchao Chena, Kim-Kwang Raymond Choo, Chin-Chen Chang and Hung-Min Sun, “Memory Leakage-Resilient Dynamic and Verifiable Multi-keyword Ranked Search on Encrypted Smart Body Sensor Network Data”, IEEE Sensors Journal, 2019.

Predicting Cyberbullying on Social Media in the Big Data Era Using Machine Learning Algorithms: Review of Literature and Open Challenges

ABSTRACT:

Prior to the innovation of information communication technologies (ICT), social interactions evolved within small cultural boundaries such as geospatial locations. The recent developments of communication technologies have considerably transcended the temporal and spatial limitations of traditional communications. These social technologies have created a revolution in user-generated information, online human networks, and rich human behavior-related data. However, the misuse of social technologies, such as social media (SM) platforms, has introduced a new form of aggression and violence that occurs exclusively online. A new means of demonstrating aggressive behavior on SM websites is highlighted in this paper. The motivations for the construction of prediction models to fight aggressive behavior in SM are also outlined. We comprehensively review cyberbullying prediction models and identify the main issues related to the construction of such models in SM. This paper provides insights into the overall process for cyberbullying detection and, most importantly, overviews the methodology. Although the data collection and feature engineering processes are elaborated, most of the emphasis is on feature selection algorithms and the use of various machine learning algorithms for the prediction of cyberbullying behaviors. Finally, the issues and challenges are highlighted as well, which present new research directions for researchers to explore.
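Among the families of algorithms such a review covers, a multinomial Naive Bayes text classifier is one of the simplest baselines for bullying/non-bullying prediction. This stdlib-only sketch, with an invented four-example training set, stands in for the surveyed pipelines, not for any specific model in the literature.

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal multinomial Naive Bayes over word features with Laplace
    smoothing -- a baseline cyberbullying predictor."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter(labels)
        for text, y in zip(texts, labels):
            self.word_counts[y].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        best, best_lp = None, -math.inf
        n = sum(self.class_counts.values())
        for y, cc in self.class_counts.items():
            lp = math.log(cc / n)  # log prior
            counts = self.word_counts[y]
            total = sum(counts.values())
            for w in text.lower().split():
                lp += math.log((counts[w] + 1) / (total + len(self.vocab)))
            if lp > best_lp:
                best, best_lp = y, lp
        return best

# Tiny illustrative training set (invented).
nb = NaiveBayes().fit(
    ["you are awful", "great game today", "awful loser", "nice day"],
    ["bully", "ok", "bully", "ok"])
print(nb.predict("awful"))
```

Real systems in the reviewed literature add the feature selection step the survey emphasizes (e.g., chi-square or information gain over much richer feature sets) before a classifier like this one.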

SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:

  • System : Pentium Dual Core.
  • Hard Disk : 120 GB.
  • Monitor : 15’’ LED
  • Input Devices : Keyboard, Mouse
  • RAM : 1 GB

SOFTWARE REQUIREMENTS:

  • Operating system : Windows 7.
  • Coding Language :
  • Tool : NetBeans 7.2.1
  • Database : MySQL

REFERENCE:

Mohammed Ali Al-Garadi, Mohammad Rashid Hussain, Nawsher Khan, Ghulam Murtaza, Henry Friday Nweke, Ihsan Ali, Ghulam Mujtaba, Haruna Chiroma, Hasan Ali Khattak and Abdullah Gani, “Predicting Cyberbullying on Social Media in the Big Data Era Using Machine Learning Algorithms: Review of Literature and Open Challenges”, IEEE Access, Volume 7, 2019.