AI chatbots are turning into accidental snitches — and in some cases, they’re handing out real people’s phone numbers to total strangers.
Privacy experts are sounding the alarm over a disturbing trend dubbed “AI doxxing,” in which bots like Google’s Gemini and OpenAI’s ChatGPT surface personal contact information without consent.
One Reddit user said their nightmare began when Google’s AI allegedly started giving out their personal number as a placeholder for businesses and services.
“Strangers are calling me constantly looking for a lawyer, a product designer, a locksmith – you name it,” the user wrote, adding that callers kept saying: “I got your number from Google’s AI.”
The Redditor called it a “massive privacy violation and data leak,” saying their phone had become a nonstop hotline for confused strangers. “My daily life is being completely disrupted,” they added.
“Gemini’s problem is not a defect. It’s the result of years of unchecked data brokerage practices meeting generative AI,” a spokesperson for privacy firm ClearNym told The Independent.
They noted that years of harvested personal data are now colliding with AI systems trained on massive internet datasets.
“It now returns as accurate copies or even fabrications and, most recently, as ‘placeholder’ phone numbers for any number of strangers,” they warned.
And it’s not just random glitches causing chaos.
Virgin Media O2 recently reported that scammers are planting fake customer-service numbers online for AI chatbots to regurgitate to users.
“Criminals know when people search for help, they’re often looking for a quick answer,” said Murray Mackenzie, the company’s fraud prevention director.
“AI tools are creating new opportunities for fraudsters to create realistic-looking fake numbers that appear through search results or chatbots, putting people at risk of calling a criminal rather than their trusted provider.”
Researchers at AI security company Aurascape told The Independent that scammers accomplish this by “seeding poisoned content” across the web.
“Attackers are quietly rewriting the web that AI systems read,” said lead security researcher Qi Deng.
“When you ask an assistant how to call your airline, it does exactly what it was designed to do, but with a customer support and reservations number that leads straight to a scammer instead of the real company.”
Other cases appear even more invasive.
MIT Technology Review reported that Gemini mistakenly listed Israeli software engineer Daniel Abraham’s personal number as customer support for a payment app.
Meanwhile, researchers at the University of Washington discovered Gemini could expose personal contact info with alarming ease.
“One day, I was just playing around on Gemini, and I searched for Yael Eiger, my friend and collaborator,” said PhD student Meira Gilbert.
The search turned up Eiger’s private cell number. “It was shocking,” Gilbert said.
Eiger said the information technically existed online before, but it was buried deep enough that almost nobody would find it.
“Having your information be … accessible to one audience, and then Gemini making it accessible to anyone” feels completely different, Eiger said.
DeleteMe CEO Rob Shavell told the outlet that complaints about AI exposing personal data have surged recently, with customers reporting chatbots revealing “accurate home addresses, phone numbers, family members’ names, or employer details.”
A spokesperson for Google told MIT Technology Review that the company has safeguards in place to prevent personal information from appearing in AI features, and that it reviews removal requests.
Still, some users say help has been hard to come by.
“Standard support forms are a complete dead end,” the Redditor wrote. “I haven’t received a single response, and the harassment continues daily.”
The AI privacy mess comes as scammers are increasingly weaponizing the technology in other alarming ways, too.
As previously reported by The Post, Long Island officials recently warned that fraudsters are using AI voice-cloning tools to impersonate victims’ grandchildren in desperate phone calls targeting seniors.
The scammers allegedly scour TikTok and other social media platforms for videos of young people speaking, then use the audio to generate realistic fake voices demanding bail money or emergency cash.
“They’re always trying to stay a step ahead,” Suffolk County Police Commissioner Kevin Catalina previously told The Post.
Catalina warned that the schemes are becoming “more and more sophisticated” as AI advances, with elderly victims losing thousands of dollars to convincing synthetic voices and spoofed phone numbers.