Publications
Impactful AI research for the cybersecurity field
Producing innovative work that pushes boundaries, tackles complex security challenges, and drives technological advances in AI, machine learning (ML), and large language models (LLMs).
Spam No More: A Cross-Model Analysis of Machine Learning Techniques and Large Language Model Efficacies
This paper delves into the critical issue of spam detection, comparing traditional machine learning models like Support Vector Machines (SVM), Logistic Regression, and Random Forest with advanced LLMs such as ChatGPT 3.5, Perplexity AI, and our custom fine-tuned model, TextGPT. Our findings highlight the exceptional potential of LLMs in enhancing spam detection mechanisms, marking a significant step forward in cybersecurity...
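For a rough feel of the classical side of such a comparison, here is a minimal sketch (not the paper's code or data; the tiny inline message set is a purely illustrative assumption) that cross-validates SVM, Logistic Regression, and Random Forest classifiers over TF-IDF features with scikit-learn:

```python
# Illustrative sketch only: compares the three classical models named above
# on a toy spam/ham set. Assumes scikit-learn is installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Toy messages: 1 = spam, 0 = legitimate.
texts = [
    "WIN a FREE cruise!!! Click now to claim your prize",
    "Meeting moved to 3pm, see updated agenda attached",
    "Your account is locked, verify your password here",
    "Lunch tomorrow? The usual place works for me",
    "Congratulations, you were selected for a $1000 gift card",
    "Draft of the quarterly report is ready for review",
]
labels = [1, 0, 1, 0, 1, 0]

models = {
    "SVM": LinearSVC(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Random Forest": RandomForestClassifier(n_estimators=100),
}

for name, clf in models.items():
    # TF-IDF features feed each classifier; cross-validation gives a rough accuracy.
    pipeline = make_pipeline(TfidfVectorizer(), clf)
    scores = cross_val_score(pipeline, texts, labels, cv=3)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```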
The Dark Side of AI: Large Language Models as Tools for Cyber Attacks on Vehicle Systems
In a world where autonomous vehicles (AVs) promise unparalleled safety and efficiency, a hidden threat looms large. This groundbreaking paper delves into the dark side of artificial intelligence, revealing how Large Language Models (LLMs) can be weaponized to launch sophisticated cyberattacks on vehicle systems. From manipulating the Controller Area Network (CAN) bus to exploiting Bluetooth vulnerabilities and hacking key fobs, the research uncovers the chilling reality of AI-driven threats.
Enhancing Phishing Detection with AI: A Novel Dataset and Comprehensive Analysis Using Machine Learning and Large Language Models
Dive into the cutting edge of AI-powered phishing detection with our comprehensive study. We introduce a robust dataset, the largest of its kind, comprising over 19,000 phishing emails enriched with sophisticated social engineering tactics and impersonation strategies. Our detailed analysis spans classical machine learning models and groundbreaking explorations using large language models, providing crucial insights and tools for cybersecurity experts to sharpen their defenses against evolving cyber threats.
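To make the dataset idea concrete, the sketch below shows one way such labeled, tactic-annotated email records might be represented for downstream experiments; the field names and sample rows are illustrative assumptions, not the dataset's actual schema:

```python
# Hypothetical record layout for a phishing corpus; fields are assumptions.
from dataclasses import dataclass, asdict
import csv

@dataclass
class EmailRecord:
    subject: str
    body: str
    label: str   # "phishing" or "legitimate"
    tactic: str  # e.g. impersonation, urgency, credential harvesting

records = [
    EmailRecord("Urgent: verify your payroll details",
                "Your salary will be held unless you confirm your login...",
                "phishing", "urgency"),
    EmailRecord("Team offsite agenda",
                "Please review the attached agenda before Friday.",
                "legitimate", "none"),
]

# Persist to CSV so both classical ML pipelines and LLM prompts can read it.
with open("phishing_corpus_sample.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["subject", "body", "label", "tactic"])
    writer.writeheader()
    writer.writerows(asdict(r) for r in records)
```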
Security and Privacy in E-Health Systems: A Review of AI and Machine Learning Techniques
This review is a comprehensive and timely exploration of how cutting-edge AI technologies are transforming cybersecurity in e-health systems. The paper delves into the critical role AI and machine learning play in protecting sensitive healthcare data, offering insights into advanced threat detection, anomaly recognition, and predictive analytics. With a focus on real-time threat response and privacy-preserving AI techniques, it also highlights future research opportunities such as quantum-resistant encryption and blockchain integration. It is essential reading for anyone invested in the future of secure, AI-driven healthcare systems.
Is Generative AI the Next Tactical Cyber Weapon For Threat Actors? Unforeseen Implications of AI Generated Cyber Attacks
This study details techniques such as the switch method and the character play method, which cybercriminals can exploit to generate and automate cyberattacks. Through a series of controlled experiments, the paper demonstrates how generative AI models can be manipulated to bypass ethical and privacy safeguards and produce attacks such as social engineering content, malicious code, payloads, and spyware. By testing these AI-generated attacks on live systems, the study assesses their effectiveness and the vulnerabilities they exploit, offering a practical perspective on the risks AI poses to critical infrastructure.
Unveiling Cyber Threats: A Comprehensive Analysis of Connecticut Data Breaches
In this paper, we conduct a thorough empirical analysis of data breaches reported in Connecticut in 2022, using data provided by the Office of the Attorney General of Connecticut. Our methodology involves a detailed examination of the breach records, focusing on the types of companies affected, the attack methods used, and the specific information compromised. We applied statistical analysis techniques to uncover patterns and trends within the data. Our investigation reveals a significant vulnerability in smaller businesses, with the healthcare and financial sectors facing the most severe challenges. Ransomware and phishing emerge as the most frequent attack methods, often leading to the compromise of sensitive personal data. Additionally, we found that smaller businesses were not only targeted more frequently but also took longer to detect and report breaches, exacerbating the impact.
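For readers curious how this kind of tabulation looks in practice, the sketch below groups a handful of made-up breach records (not the actual Attorney General data) by sector and attack method and compares reporting delays by company size; pandas is an assumed tool choice:

```python
# Illustrative breach-record tabulation; the rows are invented examples.
import pandas as pd

breaches = pd.DataFrame([
    {"sector": "Healthcare", "method": "Ransomware", "company_size": "Small", "days_to_report": 94},
    {"sector": "Financial",  "method": "Phishing",   "company_size": "Small", "days_to_report": 71},
    {"sector": "Retail",     "method": "Phishing",   "company_size": "Large", "days_to_report": 35},
    {"sector": "Healthcare", "method": "Ransomware", "company_size": "Large", "days_to_report": 40},
])

# Frequency of attack methods by sector.
print(breaches.groupby(["sector", "method"]).size().unstack(fill_value=0))

# Do smaller businesses take longer to detect and report breaches?
print(breaches.groupby("company_size")["days_to_report"].mean())
```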
Can AI Keep You Safe? A Study of Large Language Model for Phishing Detection
The research investigates the effectiveness of Large Language Models (LLMs) such as GPT-3.5, GPT-4, and a tailored ChatGPT model in identifying phishing emails. Through the analysis of a varied dataset containing both phishing and legitimate emails, the study highlights the outstanding performance of advanced LLMs in accurately detecting phishing attempts. These results underscore the capability of LLMs to enhance cybersecurity defenses and protect against harmful email threats.
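A library-agnostic sketch of the kind of prompting setup such a study might use is shown below; `llm_complete` is a placeholder for whichever chat/completion client is plugged in (GPT-3.5, GPT-4, a tailored model), and the prompt wording is an assumption rather than the paper's:

```python
# Hedged sketch of LLM-based phishing classification; no specific vendor API
# is assumed. A toy stand-in model lets the example run end to end.
from typing import Callable

PROMPT_TEMPLATE = (
    "You are an email security assistant. Classify the following email as "
    "PHISHING or LEGITIMATE and answer with a single word.\n\nEmail:\n{email}"
)

def classify_email(email: str, llm_complete: Callable[[str], str]) -> str:
    """Return 'PHISHING' or 'LEGITIMATE' based on the model's reply."""
    reply = llm_complete(PROMPT_TEMPLATE.format(email=email)).strip().upper()
    return "PHISHING" if "PHISHING" in reply else "LEGITIMATE"

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call, purely for demonstration.
    return "PHISHING" if "password" in prompt.lower() else "LEGITIMATE"

print(classify_email("Please reset your password at this link immediately.", fake_llm))
print(classify_email("See you at the standup tomorrow at 9am.", fake_llm))
```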