#554 Securing the AI Era: Alex Schlager on Why AI Agents Are the New Attack Surface | The CTO Show with Mehmet

In this episode of The CTO Show with Mehmet, I’m joined by Alex Schlager, Founder and CEO of AIceberg, a company operating at the intersection of AI, cybersecurity, and explainability.

We dive deep into why AI agents fundamentally change enterprise risk, how shadow AI is spreading across organizations, and why monitoring black-box models with other black boxes is a dangerous mistake.

Alex explains how explainable machine learning can provide the observability, safety, and security enterprises desperately need as they adopt agentic AI at scale.

👤 About the Guest

Alex Schlager is the Founder and CEO of AIceberg, a company focused on detection and response for AI-powered workflows, from LLM-based chatbots to complex multi-agent systems.

AIceberg’s mission is to secure enterprise AI adoption using fully explainable machine learning models, avoiding black-box-on-black-box monitoring approaches. Alex has deep expertise in AI explainability, agentic systems, and enterprise AI risk management.

https://www.linkedin.com/in/alexschlager/

🧠 Key Topics We Cover

• Why AI agents create a new and expanding attack surface

• The rise of shadow AI across business functions

• Safety vs security in AI systems and why CISOs must now care about both

• How agentic AI amplifies risk through autonomy and tool access

• Explainable AI vs LLM-based guardrails

• Observability challenges in agent-based workflows

• Why traditional cybersecurity tools fall short in the AI era

• Governance, risk, and compliance for AI-driven systems

• The future role of AI agents inside security teams

📌 Episode Highlights & Timestamps

00:00 – Introduction and welcome

01:05 – Alex Schlager’s background and the founding of AIceberg

02:20 – Why AI-powered workflows need new security models

03:45 – The danger of monitoring black boxes with black boxes

05:10 – Shadow AI and the loss of enterprise visibility

07:30 – Safety vs security in AI systems

09:15 – Real-world AI risks: hallucinations, data leaks, toxic outputs

12:40 – Why agentic AI massively expands the attack surface

15:05 – Privilege, identity, and agents acting on behalf of users

18:00 – How AIceberg provides observability and control

21:30 – Securing APIs, tools, and agent execution paths

24:10 – Data leakage, DLP, and public LLM usage

27:20 – Governance challenges for CISOs and enterprises

30:15 – AI adoption vs security trade-offs inside organizations

33:40 – Why observability is the first step to AI security

36:10 – The future of AI agents in cybersecurity teams

40:30 – Final thoughts and where to learn more

🎯 What You’ll Learn

• How AI agents differ from traditional software from a security perspective

• Why explainability is becoming critical for AI governance

• How enterprises can regain visibility over AI usage

• What CISOs should prioritize as agentic AI adoption accelerates

• Where AI security is heading in 2026 and beyond

🔗 Resources Mentioned

AIceberg: https://aiceberg.ai

AIceberg Podcast – How Hard Can It Be? https://howhardcanitbe.ai/
