A Reddit user who claimed to be a whistleblower at a major food delivery app has now been exposed as a fraud. The user posted a viral story alleging that the company exploited both drivers and customers through deceptive internal practices.
“You guys always suspect the algorithms are rigged against you, but the reality is actually so much more depressing than the conspiracy theories,” the supposed whistleblower wrote.
The poster claimed to be drunk and using public Wi-Fi at a library while typing a lengthy confession describing how the company allegedly exploited legal loopholes to steal drivers’ tips and wages without consequences.
The claims sounded plausible. After all, DoorDash was previously sued for stealing tips from drivers and ultimately agreed to a $16.75 million settlement. But in this case, the story itself turned out to be fabricated.
While people lie online constantly, it’s rare for a hoax of this nature to reach Reddit’s front page, rack up more than 87,000 upvotes, and then spread to other platforms. On X, the post received an additional 208,000 likes and more than 36.8 million impressions.
Casey Newton, the writer behind the newsletter Platformer, attempted to verify the claims. Newton contacted the Reddit user, who then moved the conversation to Signal. There, the poster shared what appeared to be a photo of an Uber Eats employee badge, along with an 18-page “internal document” describing how AI was supposedly used to calculate a “desperation score” for individual drivers.
As Newton dug deeper, he realized the materials were fabricated and that he was dealing with an AI-generated hoax.
“For most of my career up until this point, the document shared with me by the whistleblower would have seemed highly credible in large part because it would have taken so long to put together,” Newton wrote. “Who would take the time to put together a detailed, 18-page technical document about market dynamics just to troll a reporter? Who would go to the trouble of creating a fake badge?”
Deceiving journalists isn't new, but the rise of generative AI has significantly raised the stakes. Fact-checking has become more complex as synthetic text, images, and documents become easier to produce and harder to identify.
In this case, Newton was able to confirm the image was AI-generated by running it through Google's Gemini, which detected a SynthID watermark. SynthID watermarks are designed to survive cropping, compression, filtering, and other forms of manipulation.
Max Spero, founder of Pangram Labs, works directly on tools meant to identify AI-generated text.
“AI slop on the internet has gotten a lot worse, and I think part of this is due to the increased use of LLMs, but other factors as well,” Spero told TechCrunch. “There’s companies with millions in revenue that can pay for ‘organic engagement’ on Reddit, which is actually just trying to go viral with AI-generated posts that mention your brand name.”
Detection tools like Pangram can help flag synthetic text, but they are far less reliable for multimedia content. And even when a post is definitively proven to be fake, the damage is often already done: the content may have gone viral long before the truth catches up.
For now, that leaves users scrolling through social feeds like amateur investigators, constantly questioning whether what they’re seeing is real.
Case in point: when I told an editor I wanted to write about “the viral AI food delivery hoax on Reddit this weekend,” she assumed I was referring to a different story.
Yes — there was more than one viral AI-powered food delivery hoax on Reddit this weekend.











