AI Data Training Scandal: App That Paid Users to Record Calls Goes Offline After Major Security Lapse Exposes Private Data
- Nishadil
- September 28, 2025

A disturbing revelation has sent shockwaves through the tech world, exposing the dark underbelly of AI data collection. A controversial app, known as 'Zhixun Dianhua' or 'Listen with AI,' which brazenly paid users to record their phone calls for AI training purposes, has abruptly ceased operations.
This dramatic shutdown follows a severe security lapse that left a vast trove of highly sensitive user data and personal call recordings exposed for anyone to access, raising urgent questions about privacy, ethics, and the unregulated rush for AI data.
The app, reportedly operated by a company linked to Chinese AI giant iFlytek, promised users monetary compensation in exchange for recording their private conversations.
The stated goal was to gather diverse voice data to train sophisticated AI models, presumably for speech recognition or natural language processing. While the concept of crowdsourced data for AI is not new, directly incentivizing the recording of personal phone calls crosses a significant ethical line, blurring the boundaries of consent and data exploitation.
The critical flaw came to light when cybersecurity researchers discovered an exposed internal dashboard.
This dashboard, meant for internal management, was left completely unsecured, accessible without any authentication. It served as a gaping portal to a treasure trove of user information, including detailed demographics, user IDs, and, most alarmingly, the actual audio recordings of phone calls made by participants.
The potential for misuse of such intimate data, from blackmail to identity theft, is immense and deeply concerning.
The response was swift, albeit too late for the countless users whose privacy had already been compromised. Following public disclosure of the security lapse, the 'Zhixun Dianhua' app and its associated services were taken offline.
While this move prevents further data collection and exposure from that specific platform, it does little to address the existing breach or the damage already inflicted on user trust.
This incident serves as a stark and urgent reminder of the precarious balance between technological advancement and fundamental human rights to privacy.
It underscores the critical need for more robust regulatory frameworks and ethical guidelines for AI development, particularly concerning the collection and handling of sensitive personal data. Companies engaged in AI training must prioritize data security and transparent consent mechanisms above all else, rather than pushing the boundaries of what is acceptable in their quest for ever-larger datasets.
The 'Listen with AI' scandal is a cautionary tale, revealing the profound risks when profit and innovation overshadow privacy and ethical responsibility.
Disclaimer: This article was generated in part using artificial intelligence and may contain errors or omissions. The content is provided for informational purposes only and does not constitute professional advice. We make no representations or warranties regarding its accuracy, completeness, or reliability. Readers are advised to verify the information independently before relying on it.