"Even worse was the suggestion by Grammarly’s A.I. version of me to replace the first sentence of the news article with an anecdotal opening describing a fictional person named Laura whose privacy had been violated.
“Laura, a patient searching for relief from a chronic condition, clicks through her hospital’s website to schedule an appointment. In just a few moments, her most private medical details — her reason for visiting, her doctor’s name and even the treatment she seeks — are quietly sent to Facebook, without her knowledge,” the bot suggested with a button allowing the user to paste that excerpt straight into the article.
Replacing a factual sentence with an imagined story about a person who doesn’t exist is not only bad editing. It’s a deception that could end my career as a journalist (or the career of any journalist who took that terrible advice).
And this is the problem with A.I. It doesn’t know truth from fiction. It doesn’t know an investigative news article from an offhand comment. It flattens all content into word associations.
What Grammarly made wasn’t a doppelgänger. As the writer Ingrid Burrington wrote on Bluesky, it was a sloppelgänger — A.I. slop masquerading as a person.
And it must be stopped."
Miguel Afonso Caetano
"Grammarly will face a class action lawsuit over its “Expert Review” feature that dispensed writing advice using the names of prominent journalists, academics, and authors, Wired reported on Wednesday.
Technology journalist Julia Angwin is the only named plaintiff on the original filing. Angwin’s lawyer, Peter Romer-Friedman, said on Thursday morning that he’s since heard from “40 to 50 people” who object to being featured in the AI-powered editing tool without their consent."
niemanlab.org/2026/03/journali…
Journalist Julia Angwin files class action lawsuit over Grammarly’s AI “sloppelgangers”
Nieman Lab