
LLM Scalability Risk for Agentic-AI and Model Supply Chain Security
Published in the Journal of Computer Information Systems (2026)

  • Keywords: AI Governance, Agentic-AI, LSRI, Model Supply Chain Security, Cybersecurity, Dual-use AI.

Overview: As Large Language Models (LLMs) transition from static chatbots to autonomous Agentic-AI, they introduce unprecedented scalability risks and security vulnerabilities. This paper provides a comprehensive analysis of the GenAI-driven cybersecurity landscape, examining both offensive (AI-driven malware, social engineering) and defensive (automated threat detection) perspectives.

Key Contributions:

  • LLM Scalability Risk Index (LSRI): A new parametric framework designed to stress-test and quantify operational risks when deploying LLMs in security-critical environments (a minimal illustrative sketch follows this list).

  • Model Supply Chain Framework: A proposed methodology for establishing a verifiable "root of trust" across the entire model lifecycle to ensure integrity and safety.

  • Defense Synthesis: An analysis of modern defensive strategies from platforms like Google Play Protect and Microsoft Security Copilot, culminating in a governance roadmap for secure, large-scale AI deployment.
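To give a concrete feel for what a parametric risk index can look like, the sketch below computes a weighted aggregate of assessed risk factors. It is an illustration only: the factor names, scores, weights, and 0-1 scale are assumptions made here for demonstration, not the actual LSRI parameters or formula defined in the paper.

# Hypothetical illustration only: factor names, weights, and the 0-1 scoring
# scale are assumptions for demonstration, not the paper's LSRI formulation.
from dataclasses import dataclass

@dataclass
class RiskFactor:
    name: str
    score: float   # assessed severity on a 0.0-1.0 scale (assumed convention)
    weight: float  # relative importance assigned by the assessor (assumed)

def scalability_risk_index(factors: list[RiskFactor]) -> float:
    """Weighted average of factor scores, normalized to the 0-1 range."""
    total_weight = sum(f.weight for f in factors)
    if total_weight == 0:
        raise ValueError("at least one factor must carry a non-zero weight")
    return sum(f.score * f.weight for f in factors) / total_weight

# Example assessment of an agentic-AI deployment (illustrative values only).
deployment = [
    RiskFactor("autonomous tool use", score=0.8, weight=3.0),
    RiskFactor("prompt-injection exposure", score=0.6, weight=2.0),
    RiskFactor("model supply chain provenance", score=0.4, weight=2.0),
    RiskFactor("output monitoring coverage", score=0.3, weight=1.0),
]

print(f"Risk index: {scalability_risk_index(deployment):.2f}")  # -> Risk index: 0.59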
