How can AI and ML enhance the detection and prevention of security threats in chatbot systems, and what are their strengths, limitations, and future potential?
2025 (English)Independent thesis Basic level (degree of Bachelor), 20 credits / 30 HE credits
Student thesis
Abstract [en]
The increasing integration of artificial intelligence (AI) and machine learning (ML) in chatbot systems has indeed revolutionised digital communications across all industries. However, this also introduces several grave security concerns such as adversarial attacks, social engineering exploits, and privacy issues. This study examines how AI and ML can improve the detection and prevention of security threats in chatbot systems, evaluating their strengths, limitations, and future potential. This paper presents the results from a systematic literature review of 41 peer-reviewed documents published between 2017 and 2025. Thematic analysis was applied to identify and categorise threats and AI/ML-driven security mechanisms. The results are structured into three main themes: security threats in chatbot systems, AI/ML-driven mitigation techniques and their overall effectiveness. The review found that AI-based anomaly detection, toxic content filtering of adversarial inputs, and AI-based intrusion detection are actually going to considerably enhance the chatbot's resilience against many different threats. Technologies like large language models (LLMs), Retrieval-Augmented Generation (RAG), and Explainable AI (XAI) further enhance threat context analysis and analyst trust. However, such advancements are still perceived to fall short for reasons such as transparency, universality across domains, and the resultant ethics of AI-driven security. The study shows that although AI and ML have immense potential in transforming chatbot security, they must be introduced with validation that is thorough, privacy-oriented, and incorporates an ethical approach. Future work should involve empirical benchmarking of AI models, incorporate robust encryption and consent mechanisms, and also engage in interdisciplinary research that considers user behaviour and trust in AI systems.
Place, publisher, year, edition, pages
2025. , p. ii, 74
Keywords [en]
chatbot, threat detection, AI, artificial intelligence, ML, machine learning, security, cybersecurity
National Category
Information Systems
Identifiers
URN: urn:nbn:se:his:diva-25430OAI: oai:DiVA.org:his-25430DiVA, id: diva2:1981457
Subject / course
Informationsteknologi
Educational program
Network and Systems Administration
Supervisors
Examiners
2025-07-042025-07-042025-07-04Bibliographically approved