A federal class-action lawsuit filed in August 2025 has thrust popular AI transcription service Otter AI into the spotlight, with allegations that the company "deceptively and surreptitiously" records private conversations without proper consent.
The lawsuit, filed in the U.S. District Court for the Northern District of California, reflects a growing pattern of privacy concerns surrounding the Mountain View-based company, which has processed more than 1 billion meetings for its 25 million users since 2016.
The case raises fundamental questions about consent, privacy, and data use in the age of AI-powered workplace tools—questions that extend far beyond a single company to the entire automated transcription industry.
What Are the Key Allegations in the Otter AI Lawsuit?
The lawsuit, as reported by NPR, centers on plaintiff Justin Brewer of San Jacinto, California, who alleges his privacy was "severely invaded" upon realizing Otter was secretly recording a confidential conversation.
The complaint alleges that Otter Notetaker, which provides real-time transcriptions of Zoom, Google Meet, and Microsoft Teams meetings, "by default does not ask meeting attendees for permission to record and fails to alert participants that recordings are shared with Otter to improve its artificial intelligence systems."
The system can automatically join meetings. As detailed in the lawsuit:
"If the meeting host is an Otter accountholder who has integrated their relevant Google Meet, Zoom, or Microsoft Teams accounts with Otter, an Otter Notetaker may join the meeting without obtaining the affirmative consent from any meeting participant, including the host."
The legal filing claims this practice violates both state and federal privacy and wiretap laws, with lawyers arguing that Otter uses these recordings to "derive financial gain" through AI model training.
The lawsuit also challenges Otter's "de-identification" process. It argues that "Otter's deidentification process does not remove confidential information or guarantee speaker anonymity" and that the company "provides no public explanation of its 'de-identifying' process."
Other User-Reported Incidents Involving Otter's Consent and Recording Practices
Users have reported firsthand experiences with Otter's privacy issues, providing concrete examples of the problems outlined in the lawsuit:
1. The VC Meeting Data Leak
Alex Bilzerian described a particularly damaging incident in a viral tweet: after he left a Zoom call with venture-capital investors, Otter's assistant automatically emailed him the transcript, which captured the investors' candid post-meeting discussion of their firm's internal problems. The deal subsequently fell through.
Lior Yaari, CEO and Co-Founder of Grip Security, shared a similar incident on LinkedIn that occurred after one of the company's sales managers signed up for Otter.
2. Corporate Security Breaches
Attorney Anessa Allen Santos warned of organizational security risks in a LinkedIn PSA.
3. Otter Spam Warning
A Reddit user in r/projectmanagement described how Otter automatically places links in meetings to share notes with other attendees, causing confusion for those unfamiliar with the service.
4. Interview Incident
One Reddit user described how Otter joined and recorded a job interview without their knowledge or consent:
"I am so upset - yesterday I was in an interview, and one of the interviewers asked me if it was being recorded (apparently, Otter AI had joined without my permission or knowledge). To my horror, I found out only after, that I believe it was indeed recorded- and sent out to me and all the interviewers! Otter AI is seriously intrusive and I have no idea who now has access to a private meeting that I ensured all was NOT being recorded."
5. Gartner Community Response
IT professionals on Gartner's peer community raised concerns about the tool.
A VP Information Security Officer commented:
"We block otter ai.. Way too much data for a company to have without a contract, security review and relationship."
A Director of Engineering added:
"I think most companies policies are to not have these capabilities enabled until the understand the data risk posed, infrastructure, used for training or not, etc."
What Does Otter's Privacy Policy Say About These Privacy Issues?
Otter.ai's official privacy policy and privacy & security pages reveal practices that many users may not fully understand.
While the company emphasizes security and privacy protections, the detailed policies describe extensive data collection, use, and sharing that go well beyond simple transcription.
1. Data Collection and AI Training
Otter's privacy policy openly states it uses customer data for AI training purposes:
"We train our proprietary artificial intelligence technology on de-identified audio recordings. We also train our technology on transcriptions to provide more accurate services, which may contain Personal Information."
The policy also reveals manual review of recordings when users provide consent:
"We obtain explicit permission (e.g. when you rate the transcript quality and check the box to give Otter ai and its third-party service provider(s) permission to access the conversation for training and product improvement purposes) for manual review of specific audio recordings to further refine our model training data."
The privacy & security FAQ claims that "audio recordings and transcripts are not manually reviewed by a human" as part of the automatic training process, yet the privacy policy clearly states that manual review occurs when users provide explicit permission. This creates ambiguity about when human access to recordings actually happens.
2. Extensive Third-Party Data Sharing
The breadth of third-party data sharing outlined in Otter's privacy policy is significant. The company shares personal information with multiple categories of external parties, including:
- "Cloud service providers who we rely on for compute and data storage, including Amazon Web Services, based in the United States"
- "Data labeling service providers who provide annotation services and use the data we share to create training and evaluation data for Otter's product features"
- "Artificial intelligence service providers that provide backend support for certain Otter product features"
- "Platform support providers who help us manage and monitor the Services, including Amplitude, which is based in the U.S. and provides user event data for our Services"
3. The De-identification Problem
Otter claims to use "a proprietary method to de-identify user data before training our models so that an individual user cannot be identified," but provides no public explanation of how this process works.
The privacy policy also acknowledges that transcriptions used for training "may contain Personal Information," which raises questions about anonymization effectiveness.
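Otter has not explained how its de-identification works, but a toy example makes the lawsuit's concern concrete. In the hypothetical Python sketch below (the names, dialogue, and redaction logic are all invented for illustration, and this is not Otter's method), stripping speaker names still leaves confidential business details in the transcript:

```python
# A naive redaction pass: replace known speaker names with generic labels.
# This is NOT Otter's actual method (which is not publicly documented); it
# simply shows why removing names alone does not anonymize a conversation.
transcript = (
    "Alice Chen: Our Series B fell through after the board saw the Q3 numbers. "
    "Bob Park: Agreed. Let's not mention the Acme Robotics layoffs yet."
)
KNOWN_NAMES = ["Alice Chen", "Bob Park"]

def redact_names(text: str) -> str:
    """Replace each known name with a generic speaker label."""
    for i, name in enumerate(KNOWN_NAMES, start=1):
        text = text.replace(name, f"Speaker {i}")
    return text

print(redact_names(transcript))
# Speaker 1: Our Series B fell through after the board saw the Q3 numbers.
# Speaker 2: Agreed. Let's not mention the Acme Robotics layoffs yet.
```

The names are gone, yet "Series B," "Q3 numbers," and "Acme Robotics" remain: exactly the kind of confidential detail the complaint says de-identification does not remove.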
4. Automatic Data Collection
The privacy policy reveals extensive automatic data collection beyond recordings and transcripts, as the hypothetical sketch after this list illustrates:
- Usage Information: "When you use the Services, you generate information pertaining to your use, including timestamps, such as access, record, share, edit and delete events, app use information, screenshots/screen captures taken during the meeting, interactions with our team, and transaction records"
- Device Information: "We assign a unique user identifier ('UUID') to each mobile device that accesses the Services"
- Location Information: "When you use the Services, we receive your approximate location information"
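Otter does not publish the schema of these usage events, so the Python sketch below is purely hypothetical. It only illustrates the kinds of fields the policy describes: event types, timestamps, a per-device identifier, and approximate location.

```python
# Hypothetical illustration only; Otter's actual event schema is not public.
# The fields mirror the categories named in the privacy policy.
import uuid
from datetime import datetime, timezone

usage_event = {
    "device_uuid": str(uuid.uuid4()),        # unique user identifier ("UUID") per device
    "event": "record",                       # access / record / share / edit / delete
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "app": "otter-mobile",                   # app use information
    "approx_location": "Mountain View, CA",  # approximate location
}
print(usage_event)
```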
How Safe Is Otter AI?
The privacy concerns surrounding Otter.ai reflect broader issues that affect anyone using AI-powered workplace tools.
Organizations in heavily regulated industries face potential violations when sensitive conversations are automatically recorded and processed by third-party services without proper consent mechanisms or data controls.
Individual employee adoption of cloud-based AI tools can create organization-wide data exposure, especially when these tools share information with multiple external processors and lack enterprise-grade security controls.
Automatic recording features can damage business relationships, compromise confidential discussions, and create legal exposure when private conversations are captured without all participants' knowledge.
The fundamental issue extends beyond any single company. As AI tools become more automated, the gap between user expectations and actual data practices widens, creating risks that users may not realize they're accepting.
Is There a Safer Otter AI Alternative?
The privacy concerns around Otter AI highlight the need for transcription solutions that prioritize user privacy and data control.
For organizations and individuals seeking the benefits of AI-powered transcription without compromising privacy, local-first alternatives offer a compelling option.
Char is the best Otter AI alternative and the only truly local AI notetaker. Unlike cloud-based services that upload and analyze conversations on remote servers, Char ensures that no meeting data ever leaves your device.
This approach addresses the core issues raised in the Otter AI lawsuit and user complaints:
- Complete data sovereignty: You maintain full control over when and what gets recorded, with no risk of unauthorized access
- Zero third-party exposure: Your conversations never leave your device, eliminating data breaches and unauthorized sharing
- No AI training on your data: Your private conversations aren't used to improve external AI models
- Simplified compliance: Local processing reduces the compliance surface area by minimizing dependency on cloud vendors, making audits easier and giving enterprises complete control over data handling
- Complete transparency: As an open-source solution, you can inspect the code, customize functionality, and verify exactly how your conversations are processed
For enterprises navigating compliance frameworks like HIPAA, SOC 2, or GDPR, local processing reduces risk by keeping sensitive data on-device—not in transit, not in the cloud, and not exposed to third-party vendors. This approach makes it easier to fulfill data subject rights, maintain audit trails, and enforce access controls without relying on external processors.
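To make "local-first" concrete, here is a minimal sketch of on-device transcription using the open-source openai-whisper package. It is purely illustrative of the local-processing approach, not Char's actual implementation:

```python
# Minimal on-device transcription sketch (pip install openai-whisper).
# Model weights are downloaded once; after that, the audio file and the
# resulting transcript never leave this machine.
import whisper

model = whisper.load_model("base")
result = model.transcribe("meeting.wav")  # path to a local recording
print(result["text"])
```

Because inference runs on the local machine, there is no third-party processor in the data path, which is the property that shrinks the compliance surface described above.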
Download Char for macOS and experience truly private, local-first AI transcription.
