Lean Into AI
August 17, 2025

Guardians of AI: Lexi Reese of Lanai On How AI Leaders Are Keeping AI Safe, Ethical, Responsible, and True

Staff
Authority Magazine

Build visibility before policies: You can’t govern something you can’t observe. Most AI governance frameworks are based on assumptions about how AI will be used rather than data about how it’s actually being used.

As AI technology rapidly advances, ensuring its responsible development and deployment has become more critical than ever. How are today’s AI leaders addressing safety, fairness, and accountability in AI systems? What practices are they implementing to maintain transparency and align AI with human values? To address these questions, we had the pleasure of interviewing Lexi Reese.

Lexi Reese is CEO and co-founder of Lanai, the enterprise AI observability platform. Previously, she scaled billion-dollar systems at Google (ad platform) and Gusto (HR infrastructure), giving her deep experience in enterprise transformation. A “practical visionary,” Lexi is focused on building the next generation of jobs and workforce transformation, and on making enterprise AI adoption strategic rather than chaotic. A former U.S. Senate candidate, she combines technical expertise with strategic leadership insight.

Thank you so much for your time! I know that you are a very busy person. Before we dive in, our readers would love to “get to know you” a bit better. Can you tell us a bit about your ‘backstory’ and how you got started?

I’m Lexi Reese, co-founder and CEO of Lanai. We solve a simple problem: CEOs have declared their companies “AI-first,” but they have no idea what AI their employees are actually using.

I’ve spent 30 years building teams and products through major tech shifts: the internet, cloud, mobile. I’ve overseen Google’s AI-powered ads infrastructure, Gusto’s payroll platform, and American Express’ small business financing. In all those years, I’ve never seen a technology spread through enterprise settings as quickly as GenAI, or be entrusted with so much responsibility with so little oversight.

Here’s what’s happening: Legal feeds contracts into ChatGPT. Customer service automates with Claude. Marketing creates content with dozens of AI tools. Engineers code with assistants, both sanctioned, like Cursor, and unsanctioned, like DeepSeek. Eighty percent of this activity is invisible to IT because legacy security tools and Cloud Access Security Brokers were built for static files, not dynamic conversations.

We built the Datadog for AI conversations: the only AI-native platform that detects interactions between humans and GenAI tools (and eventually between agents) in real time, classifies each use case, and assesses the risk of the prompts themselves, all while protecting employee privacy.
