
LLMs & Platform Integrity

Cutting-edge work in GenAI, LLMs, and platform defense
Subtopics:

Trust & Safety for app ecosystems (App Store, Play, Plugins)

LLM abuse detection (scams, malware, fake reviews)

Compliance enforcement with AI (EU AI Act, NIST RMF)

RAG systems, LLM trust layers, moderation pipelines (see the sketch after this list)
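
To make the moderation-pipeline subtopic concrete, here is a minimal Python sketch. Everything in it is illustrative: the llm_risk_score placeholder stands in for a real model call (for example a hosted moderation or chat-completion endpoint), and the thresholds and phrase list are assumptions chosen for the example, not values taken from any of the publications below.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    HUMAN_REVIEW = "human_review"


@dataclass
class Submission:
    item_id: str
    text: str


# Hypothetical stand-in for a hosted LLM classifier; a real pipeline would call
# a moderation or chat-completion endpoint here and parse its structured output.
def llm_risk_score(text: str) -> float:
    """Return a 0.0-1.0 abuse-risk score (placeholder heuristic)."""
    suspicious = ("free gift card", "wire transfer", "crypto doubling")
    hits = sum(phrase in text.lower() for phrase in suspicious)
    return min(1.0, 0.4 * hits)


def moderate(item: Submission,
             block_threshold: float = 0.8,
             review_threshold: float = 0.4) -> Verdict:
    """Three-way routing typical of moderation pipelines:
    cheap rules first, model score second, humans for the gray zone."""
    # Stage 1: deterministic rules (cheap, high precision).
    if len(item.text) > 10_000:
        return Verdict.HUMAN_REVIEW

    # Stage 2: model-based risk scoring with threshold routing.
    score = llm_risk_score(item.text)
    if score >= block_threshold:
        return Verdict.BLOCK
    if score >= review_threshold:
        return Verdict.HUMAN_REVIEW
    return Verdict.ALLOW


if __name__ == "__main__":
    demo = Submission("rev-001", "Claim your free gift card via wire transfer now!")
    print(demo.item_id, moderate(demo).value)
```

The three-way routing (allow / block / human review) reflects the common pattern of keeping inexpensive deterministic rules in front of model calls and reserving human reviewers for the ambiguous middle band.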

Unlocking Trust in the AI Era: Pioneering Frameworks for Secure Large Language Models and Robust Digital Ecosystems.



My Contribution: I am at the forefront of defining and addressing the critical challenges that Large Language Models (LLMs) and Generative AI pose to platform integrity, cybersecurity, and privacy. My work provides comprehensive surveys, strategic roadmaps, and actionable blueprints to mitigate dual-use risks, combat AI-generated threats, and ensure responsible innovation in the digital landscape. I focus on building a secure and trustworthy AI environment for critical applications, from healthcare diagnostics to digital platforms.

Publications & Patents:

Risks & Benefits of LLMs & GenAI for Platform Integrity, Healthcare Diagnostics, Cybersecurity, Privacy & AI Safety: A Comprehensive Survey, Roadmap & Implementation Blueprint

K. Ahi
arXiv preprint arXiv:2506.12088
2025
Cited by: (New publication, citations building)
[DOI Placeholder]
[Link Placeholder]
Highlight: This paper provides a foundational survey and strategic roadmap for navigating the complex risks and benefits of LLMs and Generative AI, crucial for securing future digital platforms and promoting AI safety.

Large Language Models (LLMs) and Generative AI in Cybersecurity and Privacy: A Survey of Dual-Use Risks, AI-Generated Malware, Explainability, and Defensive Strategies

K. Ahi, S. Valizadeh
IEEE
2025
Cited by: (New publication, citations building)
[DOI Placeholder]
[Link Placeholder]
Highlight: This work rigorously analyzes the escalating dual-use risks of LLMs in cybersecurity, offering insights into AI-generated malware and proposing advanced defensive strategies for robust digital security.

LLMs and Generative AI for Platform Integrity: A Survey and Strategy Roadmap for Automating Review and Moderation, Detecting Abuse and Fraud, Enforcing Compliance, and…

K. Ahi
The 6th Silicon Valley Cybersecurity Conference (SVCC) supported by IEEE and…
2025
Cited by: (New publication, citations building)
[DOI Placeholder]
[Link Placeholder]
Highlight: This research delivers a strategic roadmap for leveraging LLMs and Generative AI to enhance platform integrity through automated content review, fraud detection, and compliance enforcement.
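
As a companion to the fraud-detection theme in the highlight above, the sketch below scores a batch of reviews for signs of a coordinated fake-review campaign. The Review dataclass, the burst and duplication signals, and the 0.6/0.4 weights are hypothetical choices for illustration only; a production scorer would add an LLM-based text-authenticity signal and calibrated thresholds.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List


@dataclass
class Review:
    author_id: str
    text: str
    rating: int            # 1-5 stars
    posted_at: datetime


def burst_signal(reviews: List[Review], window_hours: int = 24) -> float:
    """Fraction of reviews posted inside one short window;
    coordinated campaigns tend to cluster in time."""
    if not reviews:
        return 0.0
    times = sorted(r.posted_at for r in reviews)
    window = timedelta(hours=window_hours)
    best = max(
        sum(1 for t in times if start <= t < start + window)
        for start in times
    )
    return best / len(times)


def duplication_signal(reviews: List[Review]) -> float:
    """Fraction of reviews whose text duplicates another review verbatim."""
    if not reviews:
        return 0.0
    texts = [r.text.strip().lower() for r in reviews]
    dupes = sum(1 for t in texts if texts.count(t) > 1)
    return dupes / len(texts)


def campaign_risk(reviews: List[Review]) -> float:
    """Blend the simple signals into one score; weights are illustrative only.
    A real system would also incorporate a model-based authenticity score."""
    return 0.6 * burst_signal(reviews) + 0.4 * duplication_signal(reviews)


if __name__ == "__main__":
    now = datetime(2025, 1, 1, 12, 0)
    suspect = [
        Review("u1", "Best app ever, five stars!", 5, now),
        Review("u2", "Best app ever, five stars!", 5, now + timedelta(minutes=5)),
        Review("u3", "Great, works as described.", 5, now + timedelta(minutes=9)),
    ]
    print(f"campaign risk: {campaign_risk(suspect):.2f}")
```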
