AI is moving faster than security, and the gap is widening.
In this episode, Mehmet sits down with Walter Haydock, Founder of StackAware, to explore how organizations can safely deploy AI while managing growing risks across cybersecurity, compliance, and governance.
As AI systems become embedded in products, operations, and decision-making, traditional security approaches are no longer enough. From data leakage and supply chain vulnerabilities to regulatory pressure and investor scrutiny, AI introduces a layer of complexity that leaders can no longer ignore.
Walter breaks down the emerging AI risk landscape, the importance of standards like ISO 42001, and why governance is becoming a competitive advantage, not just a compliance exercise.
⸻
👤 About the Guest
Walter Haydock is the Founder of StackAware, a company helping organizations measure and manage cyber, privacy, and compliance risks in AI systems.
He previously served as a Marine Corps officer and worked on Capitol Hill advising members of the U.S. House of Representatives. His experience spans government, cybersecurity, and enterprise software, giving him a unique perspective on managing risk in fast-moving technology environments.
Walter focuses on helping companies accelerate AI adoption responsibly while maintaining trust, security, and regulatory alignment.
https://www.linkedin.com/in/walter-haydock/
⸻
🔑 Key Takeaways
• AI risk is becoming a core cybersecurity challenge, not a separate discipline
• ISO 42001 introduces a structured way to manage AI governance and risk
• Many companies still treat compliance as a checkbox instead of an operational system
• AI supply chain risks are one of the biggest emerging threats
• Training AI on customer data without transparency can lead to backlash and liability
• Open-source AI tools introduce new attack vectors through plugins and dependencies
• AI governance is quickly becoming part of investor due diligence
• Companies that manage AI risk well will gain a competitive advantage
• Speed of decision-making matters more than perfect information in AI adoption
• Every company is becoming an AI company, whether they realize it or not
⸻
🎯 What You’ll Learn
• What ISO 42001 is and why it matters for AI-driven companies
• How AI risk differs from traditional cybersecurity risk
• The biggest vulnerabilities in the AI supply chain
• How attackers are already using AI to accelerate cyber threats
• Why governance frameworks are essential for scaling AI safely
• How regulations in the US and EU are shaping AI adoption
• The role of AI governance in fundraising and M&A due diligence
• Practical first steps to assess and manage AI risk
• How to balance innovation speed with compliance requirements
• Why AI governance will become table stakes for every business
⸻
⚡ Episode Highlights (Chapters)
00:00 Introduction and guest background
02:30 What ISO 42001 is and why it exists
05:00 Why AI governance is becoming critical
07:00 Who needs AI compliance the most
10:00 Regulation across the US, EU, and globally
13:00 Innovation vs. regulation: finding the balance
18:00 AI supply chain risks explained
21:00 Open source AI and new attack vectors
25:00 Why AI risk management will be mandatory
27:30 AI in due diligence and fundraising
30:00 Future threats and AI-driven attacks
32:00 First steps for managing AI risk
34:00 Leadership mindset and decision making
37:00 Who owns AI risk inside organizations
39:00 Closing thoughts
⸻
🔗 Resources Mentioned
• StackAware: https://stackaware.com/
• ISO/IEC 42001 (AI Management System standard): https://www.iso.org/standard/42001