Critical Factors Determining Information Security Maturity in AI Utilization: A Systematic Literature Review

Authors

  • Ammar Fauzan, STMIK PGRI Arungbinang Kebumen, Indonesia
  • Imanaji Hari Sayekti, STMIK PGRI Arungbinang Kebumen, Indonesia

DOI:

https://doi.org/10.59934/jaiea.v4i3.1192

Keywords:

artificial intelligence, information security maturity, critical factors

Abstract

This study aims to identify and synthesize the critical factors influencing information security maturity in the context of Artificial Intelligence (AI) utilization in organizations. As AI adoption escalates rapidly across sectors, it introduces unique and complex information security challenges that demand a structured approach to their management. Through a Systematic Literature Review (SLR), we analyze relevant scientific and professional literature to extract and categorize the key information security dimensions and best practices integrated into existing AI maturity models. Particular emphasis is placed on how these critical factors, spanning technical, organizational, and human aspects, directly affect an organization's ability to achieve and sustain higher levels of AI security maturity. The findings are expected to provide a comprehensive understanding of the essential elements required to establish a robust information security posture in AI-driven environments. A primary contribution of this study is a clear research agenda for future investigations, together with practical guidance for practitioners and decision-makers to assess and proactively enhance their AI security maturity based on the identified determinants.

References

R. Bunod, E. Augstburger, E. Brasnu, A. Labbe, and C. Baudouin, “Artificial intelligence and glaucoma: A literature review,” J. Fr. Ophtalmol., vol. 45, no. 2, pp. 216–232, 2022, doi: 10.1016/j.jfo.2021.11.002.

V. Kumar, A. R. Ashraf, and W. Nadeem, “AI-powered marketing: What, where, and how?,” Int. J. Inf. Manage., vol. 77, p. 102783, 2024, doi: 10.1016/j.ijinfomgt.2024.102783.

M. Haenlein and A. Kaplan, “A Brief History of Artificial Intelligence: On The Past, Present, and Future of Artificial Intelligence,” Calif. Manage. Rev., vol. 61, no. 4, pp. 5–14, 2019, doi: 10.1177/0008125619864925.

R. Hamon, H. Junklewitz, J. Soler Garrido, and I. Sanchez, “Three Challenges to Secure AI Systems in the Context of AI Regulations,” IEEE Access, vol. 12, pp. 61022–61035, 2024, doi: 10.1109/ACCESS.2024.3391021.

H. Baniecki and P. Biecek, “Adversarial attacks and defenses in explainable artificial intelligence: A survey,” Inf. Fusion, vol. 107, 2024, doi: 10.1016/j.inffus.2024.102303.

G. G. Shayea, M. H. M. Zabil, M. A. Habeeb, Y. L. Khaleel, and A. S. Albahri, “Strategies for protection against adversarial attacks in AI models: An in-depth review,” J. Intell. Syst., vol. 34, no. 1, 2025, doi: 10.1515/jisys-2024-0277.

C. Negri-Ribalta, R. Geraud-Stewart, A. Sergeeva, and G. Lenzini, “A systematic literature review on the impact of AI models on the security of code generation,” Front. Big Data, vol. 7, 2024, doi: 10.3389/fdata.2024.1386720.

A. Kumar and S. Kumar, “Harnessing Artificial Intelligence for Sustainable Development: Opportunities, Challenges, and Future Directions,” Int. Res. J. Eng. Technol., vol. 2, no. 2, pp. 1–22, 2024, [Online]. Available: www.irjet.net.

M. Raparthi, S. B. Dodda, and S. Maruthi, “Examining the use of Artificial Intelligence to Enhance Security Measures in Computer Hardware, including the Detection of Hardware-based Vulnerabilities and Attacks.,” Eur. Econ. Lett., vol. 10, no. 1, pp. 60–68, 2020, doi: 10.52783/eel.v10i1.991.

P. Akbarighatar, I. Pappas, and P. Vassilakopoulou, “A sociotechnical perspective for responsible AI maturity models: Findings from a mixed-method literature review,” Int. J. Inf. Manag. Data Insights, vol. 3, no. 2, p. 100193, 2023, doi: 10.1016/j.jjimei.2023.100193.

M. Mohamad, J. P. Steghöfer, E. Knauss, and R. Scandariato, “Managing security evidence in safety-critical organizations,” J. Syst. Softw., vol. 214, p. 112082, 2024, doi: 10.1016/j.jss.2024.112082.

L. J. Tveita and E. Hustad, “Benefits and Challenges of Artificial Intelligence in Public sector: A Literature Review,” Procedia Comput. Sci., vol. 256, pp. 222–229, 2025, doi: 10.1016/j.procs.2025.02.115.

S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd ed. Pearson, 2010.

O. B. Akinnagbe, “The Future of Artificial Intelligence: Trends and Predictions,” Mikailalsys J. Adv. Eng. Int., vol. 1, no. 3, pp. 249–261, 2024.

J. Füller, K. Hutter, J. Wahl, V. Bilgram, and Z. Tekic, “How AI Revolutionizes Innovation Management – Perceptions and Implementation Preferences of AI-based Innovators,” Technol. Forecast. Soc. Change, vol. 178, p. 121598, 2022, doi: 10.1016/j.techfore.2022.121598.

J. R. Machireddy, “Leveraging AI and Machine Learning for Data-Driven Business Strategy: A Comprehensive Framework for Analytics Integration,” African J. Artif. Intell. Sustain. Dev., vol. 1, no. 2, pp. 127–150, 2021.

Y. Dwivedi et al., “Artificial Intelligence (AI): Multidisciplinary Perspectives on Emerging Challenges, Opportunities, and Agenda for Research, Practice and Policy,” Int. J. Inf. Manage., vol. 57, 2021, doi: 10.1016/j.ijinfomgt.2019.08.002.

C. P. Pfleeger, S. L. Pfleeger, and J. Margulies, Security in Computing (Fifth Edition). Westford: Prentice Hall, 2015.

I. Y. Tyukin, D. J. Higham, and A. N. Gorban, “On Adversarial Examples and Stealth Attacks in Artificial Intelligence Systems,” in Proc. 2020 International Joint Conference on Neural Networks (IJCNN), 2020, doi: 10.1109/IJCNN48605.2020.9207472.

Y. Liu et al., “Trojaning Attack on Neural Networks,” in Proc. 25th Annual Network and Distributed System Security Symposium (NDSS), 2018, doi: 10.14722/ndss.2018.23291.

F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, “Stealing Machine Learning Models via Prediction APIs,” in Proc. 25th USENIX Security Symposium, 2016, pp. 601–618.

J. Burrell, “How the machine ‘thinks’: Understanding opacity in machine learning algorithms,” Big Data Soc., vol. 3, no. 1, pp. 1–12, 2016, doi: 10.1177/2053951715622512.

R. B. Sadiq, N. Safie, A. H. Abd Rahman, and S. Goudarzi, “Artificial intelligence maturity model: A systematic literature review,” PeerJ Comput. Sci., vol. 7, pp. 1–27, 2021, doi: 10.7717/peerj-cs.661.

CMMI Product Team, “CMMI for Development, Version 1.3,” Software Engineering Institute, Carnegie Mellon University, Tech. Rep. CMU/SEI-2010-TR-033, 2010.

A. A. Tubis, “Digital Maturity Assessment Model for the Organizational and Process Dimensions,” Sustain., vol. 15, no. 20, 2023, doi: 10.3390/su152015122.

ISO/IEC, “Information Security, Cybersecurity and Privacy Protection - Information Security Management Systems - Requirements,” ISO/IEC 27001:2022, 2022.

B. Kitchenham and S. M. Charters, “Guidelines for performing systematic literature reviews in software engineering,” Keele University and Durham University, EBSE Technical Report EBSE-2007-01, 2007.

D. Moher et al., “Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement,” PLoS Med., vol. 6, no. 7, 2009, doi: 10.1371/journal.pmed.1000097.

D. Tranfield, D. Denyer, and P. Smart, “Towards a Methodology for Developing Evidence-Informed Management Knowledge by Means of Systematic Review,” Br. J. Manag., vol. 14, no. 3, pp. 207–222, 2003, doi: 10.1111/1467-8551.00375.

Y. A. Al-Khassawneh, “A Review of Artificial Intelligence in Security and Privacy: Research Advances, Applications, Opportunities, and Challenges,” Indones. J. Sci. Technol., vol. 8, no. 1, pp. 79–96, 2023, doi: 10.17509/ijost.v8i1.52709.

R. Zhang, H. Li, A. Chen, Z. Liu, and Y. C. Lee, “AI Privacy in Context: A Comparative Study of Public and Institutional Discourse on Conversational AI Privacy in the US and Chinese Social Media,” Soc. Media Soc., vol. 10, no. 4, 2024, doi: 10.1177/20563051241290845.

R. Kaur, D. Gabrijelčič, and T. Klobučar, “Artificial intelligence for cybersecurity: Literature review and future research directions,” Inf. Fusion, vol. 97, pp. 1–29, 2023, doi: 10.1016/j.inffus.2023.101804.

M. Sinan, M. Shahin, and I. Gondal, “Implementing and integrating security controls: A practitioners’ perspective,” Comput. Secur., vol. 156, p. 104516, 2025, doi: 10.1016/j.cose.2025.104516.

S. R. Thoom, “Lessons from AI in finance: Governance and compliance in practice,” Int. J. Sci. Res. Arch., vol. 14, pp. 1387–1395, 2025.

E. Zaidan and I. A. Ibrahim, “AI Governance in a Complex and Rapidly Changing Regulatory Landscape: A Global Perspective,” Humanit. Soc. Sci. Commun., vol. 11, no. 1, pp. 1–18, 2024, doi: 10.1057/s41599-024-03560-x.

T. Birkstedt, M. Minkkinen, A. Tandon, and M. Mäntymäki, “AI Governance: Themes, Knowledge Gaps and Future Agendas,” Internet Res., vol. 33, no. 7, pp. 133–167, 2023, doi: 10.1108/INTR-01-2022-0042.

J. Park and T. S. Kim, “A framework to improve the compliance guideline for critical ICT infrastructure security,” J. Open Innov. Technol. Mark. Complex., vol. 11, no. 2, p. 100547, 2025, doi: 10.1016/j.joitmc.2025.100547.

M. Schmitt, “Securing the digital world: Protecting smart infrastructures and digital industries with artificial intelligence (AI)-enabled malware and intrusion detection,” J. Ind. Inf. Integr., vol. 36, p. 100520, 2023, doi: 10.1016/j.jii.2023.100520.

Y. Hu and H. K. Min, “Information transparency, privacy concerns, and customers’ behavioral intentions regarding AI-powered hospitality robots: A situational awareness perspective,” J. Hosp. Tour. Manag., vol. 63, pp. 177–184, 2025, doi: 10.1016/j.jhtm.2025.04.003.

Published

2025-06-15

How to Cite

Fauzan, A., & Sayekti, I. H. (2025). Critical Factors Determining Information Security Maturity in AI Utilization: A Systematic Literature Review. Journal of Artificial Intelligence and Engineering Applications (JAIEA), 4(3), 2460–2467. https://doi.org/10.59934/jaiea.v4i3.1192

Issue

Vol. 4 No. 3 (2025)

Section

Articles