September 15, 2025

Most enterprise AI use is invisible to security teams

Mirko Zorz
Help Net Security

Most enterprise AI activity is happening without the knowledge of IT and security teams. According to Lanai, 89% of AI use inside organizations goes unseen, creating risks around data privacy, compliance, and governance.

This blind spot is growing as AI features are built directly into business tools. Employees often connect personal AI accounts to work devices or use unsanctioned services, making usage difficult for security teams to monitor. Lanai says this lack of visibility leaves companies exposed to data leaks and regulatory violations.

AI use cases hiding in plain sight

In healthcare, workers used AI tools to summarize patient data, raising HIPAA concerns. In the financial sector, teams preparing for IPOs unknowingly moved sensitive information into personal ChatGPT accounts. Insurance companies used embedded AI features to segment customers by demographic data in ways that could violate anti-discrimination rules.

Lexi Reese, CEO of Lanai, said one of the most surprising discoveries came from inside tools that had already been approved by IT.

“One of the biggest surprises was how much innovation was hiding inside already-sanctioned apps (SaaS and In-house apps). For example, a sales team discovered that uploading ZIP code demographic data into Salesforce Einstein boosted upsell conversion rates. Great for revenue, but it violated state insurance rules against discriminatory pricing.

“On paper, Salesforce was an ‘approved’ platform. In practice, the embedded AI created regulatory risk the CISO never saw.”

Lanai says these examples reflect a larger trend: AI is increasingly embedded inside platforms like Salesforce, Microsoft Office, and Google Workspace. Because these features live inside tools employees already use, they can bypass traditional controls such as data loss prevention and network monitoring.
