Wednesday, April 8th, 2026

Webinar
Trustworthy AI by Design: Guardrails, Governance, and Responsible Deployment

A practitioner-led session on securing AI systems, embedding safety controls, and operationalising governance frameworks that hold up in real enterprise environments — not just in theory.


Abstract

AI security is no longer a concern confined to security teams. As organisations move AI from experimentation into production — processing real data, automating real decisions, and operating with increasing autonomy — the risks are no longer theoretical. Yet most deployments still lack the controls, oversight structures, and governance frameworks needed to manage those risks responsibly.

This session takes a grounded, practical look at what securing AI in production actually requires. We examine how AI systems fail differently from traditional software, walk through the most critical threats facing LLM and agentic deployments today, and explore how to build layered guardrails that address those threats — covering input controls, processing constraints, and output validation.
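The layered-guardrail idea mentioned above can be pictured as independent checks wrapped around a model call. The sketch below is purely illustrative — the function names, blocked patterns, and checks are assumptions for demonstration, not the specific controls covered in the session:

```python
# Minimal sketch of layered guardrails: an input control, a processing
# stand-in, and an output validation step, each applied independently.
# All names and patterns here are illustrative examples only.

BLOCKED_PATTERNS = ["ignore previous instructions", "system prompt"]

def input_guard(prompt: str) -> str:
    """Reject obvious prompt-injection attempts before the model sees them."""
    lowered = prompt.lower()
    if any(p in lowered for p in BLOCKED_PATTERNS):
        raise ValueError("prompt rejected by input guardrail")
    return prompt

def call_model(prompt: str) -> str:
    """Stand-in for a real LLM call; in production this step would also
    enforce processing constraints such as token budgets or tool allow-lists."""
    return f"model answer to: {prompt}"

def output_guard(answer: str) -> str:
    """Validate the response before it reaches the user, e.g. withhold
    anything that resembles a leaked credential."""
    if "sk-" in answer:  # crude stand-in for a secret-detection check
        return "[response withheld by output guardrail]"
    return answer

def guarded_completion(prompt: str) -> str:
    """Chain the three layers: input control -> model -> output validation."""
    return output_guard(call_model(input_guard(prompt)))
```

The point of the layering is that each control fails independently, so a bypass of one layer does not disable the others.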

We also examine the governance layer. This includes how to apply frameworks such as NIST AI RMF and the EU AI Act as engineering decisions rather than compliance exercises, what meaningful human oversight looks like in practice, and how agentic AI specifically changes the risk profile and the controls that autonomous systems require.

Participants will leave with a clear view of where their AI systems are most exposed, how to prioritise the controls that matter most, and a practical action framework for next steps.

Webinar Details

Date

TBD · 2026

Time

7:00 AM EDT (New York)
2:00 PM KSA (Riyadh)
3:00 PM GST (Dubai)

Duration

1 Hour

SPEAKER

Jeevan Sreerama

Principal AI Architect, Soothsayer Analytics

Jeevan Sreerama is the Principal AI Architect at Soothsayer Analytics, with nearly two decades of experience spanning software engineering, artificial intelligence and machine learning, and enterprise-scale system design. His technical depth covers the full AI stack — from classical machine learning, deep learning, natural language processing, and computer vision through to Generative AI and Agentic AI — with a core strength in engineering-first AI: building systems that are reliable, observable, secure, and scalable in real production environments, not merely correct in isolation.

As an architect who has delivered across Azure, AWS, and GCP, Jeevan specialises in embedding safety controls, evaluation pipelines, and model-risk governance into AI systems that operate under demanding enterprise requirements — including alignment with GDPR, SOC 2, and the NIST AI Risk Management Framework. His work makes AI trustworthy by design, not as an afterthought. A BCS Fellow and IEEE Senior Member, he is recognised for combining engineering rigour with strategic clarity.

Key Takeaways

This session is designed to provide practical clarity, not just conceptual understanding. By the end, participants will be able to:

Why AI Security Is Different

01

Understand why AI systems require a different security approach and where traditional controls fall short.

The Threat Landscape

02

Identify the most critical risks facing LLM and agentic deployments today, using the OWASP Top 10 for LLM Applications as a practical reference.

Layered Guardrails

03

Design input, processing, and output controls that enforce safe model behaviour without compromising system utility.

Governance Without the Theatre

04

Understand how to apply NIST AI RMF, the EU AI Act, and ISO 42001 as practical engineering decisions, including what meaningful human oversight requires.

Securing Agentic AI

05

Understand why autonomous agents require additional controls, and how to apply least-privilege scoping, sandboxed execution, and approval gates in practice.

What to Do Next

06

Identify the most common governance failures and leave with a practical action framework for your own AI deployments.

Participants will leave with a clear action framework to assess readiness, prioritise use cases, and initiate responsible agentic AI implementation within their organisation.
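Two of the agentic controls named in the takeaways, least-privilege tool scoping and human approval gates, can be sketched in a few lines. The tool names and return strings below are hypothetical examples, not part of any specific framework:

```python
# Illustrative sketch of two agentic-AI controls: a least-privilege
# tool allow-list, and an approval gate for high-impact actions.
# All tool names here are made up for demonstration.

ALLOWED_TOOLS = {"search_docs", "read_ticket"}      # low-risk, auto-approved
NEEDS_APPROVAL = {"send_email", "issue_refund"}     # human-in-the-loop gate

def run_tool(tool: str, approved: bool = False) -> str:
    """Execute a tool on the agent's behalf, enforcing scope and approval."""
    if tool not in ALLOWED_TOOLS | NEEDS_APPROVAL:
        # Least privilege: anything outside the declared scope is refused.
        raise PermissionError(f"tool '{tool}' is outside the agent's scope")
    if tool in NEEDS_APPROVAL and not approved:
        # Approval gate: high-impact actions wait for a human decision.
        return f"PENDING: '{tool}' queued for human approval"
    return f"EXECUTED: {tool}"
```

The design choice is that the agent's capabilities are declared up front and enforced outside the model, so a misbehaving model cannot widen its own permissions.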

Who Should Attend

Risk, Governance, and Compliance Professionals

Professionals responsible for ensuring AI systems operate within defined governance frameworks, with transparency, accountability, and human oversight.

AI Engineers & Architects

Technical practitioners designing, building, or evaluating LLM and agentic AI systems for enterprise deployment.

CXOs and Senior Executives

Executives seeking clarity on what responsible AI deployment requires and how to establish accountability structures across their organisations.

IT and Digital Transformation Leaders

Technology leaders evaluating AI architecture, security posture, and governance requirements for deployments in complex or regulated environments.

AI & Data Science Teams

Teams moving AI systems from experimentation into production and navigating the security, compliance, and operational requirements that entails.

Product & Operations Leaders

Leaders responsible for AI-enabled products and workflows who need to understand the governance, oversight, and risk management requirements before going live.

REGISTER NOW

If you are asking, “Where can AI move beyond content generation and actually run parts of our process responsibly?”, this session is for you.