Publication Type : Conference Paper
Publisher : IEEE
Source : 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT)
Url : https://doi.org/10.1109/icccnt61001.2024.10724705
Campus : Bengaluru
School : School of Computing
Year : 2024
Abstract : This study investigates the use of large language models (LLMs) such as Mistral and Llama to generate better shellcode for offensive cybersecurity. It focuses on optimizing LLMs to improve shellcode accuracy and efficiency, with the goal of quickly locating and exploiting software vulnerabilities. Several model parameters are tuned through controlled experiments, with the BLEU score serving as the primary criterion for impartial evaluation. The study achieved a BLEU-1 score of 0.8506, highlighting the effectiveness of the optimization and fine-tuning strategies. It also explores the subtler aspects of LLMs that influence shellcode creation: their capacity for learning, their ability to adapt to the peculiarities of cybersecurity, and their ability to translate complex patterns into effective shellcode. By presenting substantial experimentation and analysis, this work aims to expand knowledge of using LLMs for shellcode development and to provide insights into more efficient vulnerability-exploitation strategies in cybersecurity.
Cite this Research Publication : Kanderi Johith Kumar, Kuthati Shreya, Lasya Priya Divakarla, Priyanka C Nair, Nalini Sampath, Interpretable AI Insights in Fake News Detection: A Comparative Analysis of CNN and LSTM, 2024 15th International Conference on Computing Communication and Networking Technologies (ICCCNT), IEEE, 2024, https://doi.org/10.1109/icccnt61001.2024.10724705
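The abstract names BLEU-1 as the primary evaluation criterion. For readers unfamiliar with the metric, the following is a minimal illustrative sketch of BLEU-1 scoring (clipped unigram precision times a brevity penalty); the token sequences are hypothetical stand-ins for generated versus reference shellcode, not data from the paper.

```python
from collections import Counter
import math

def bleu1(reference, candidate):
    """BLEU-1: clipped unigram precision multiplied by a brevity penalty."""
    if not candidate:
        return 0.0
    ref_counts = Counter(reference)
    cand_counts = Counter(candidate)
    # Each candidate token counts at most as often as it appears in the reference.
    clipped = sum(min(count, ref_counts[tok]) for tok, count in cand_counts.items())
    precision = clipped / len(candidate)
    # Brevity penalty discourages candidates shorter than the reference.
    if len(candidate) < len(reference):
        bp = math.exp(1 - len(reference) / len(candidate))
    else:
        bp = 1.0
    return bp * precision

# Hypothetical example: a generated snippet missing two reference tokens
reference = ["xor", "eax", "eax", "push", "eax", "ret"]
candidate = ["xor", "eax", "eax", "ret"]
print(round(bleu1(reference, candidate), 4))  # ≈ 0.6065
```

In practice, library implementations such as NLTK's `sentence_bleu` (with unigram weights) compute the same quantity and are typically used for reported scores.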