
Key Takeaways
- Grammarly, now operating under parent company Superhuman, faces intense backlash for its AI “Expert Review” feature, which generates writing suggestions framed as coming from specific experts without their permission.
- The tool utilizes the names and published works of both living and deceased individuals, including prominent journalists and academics, training its models on their publicly available texts.
- Critics from featured experts and ethicists condemn the practice as exploitative, arguing it misrepresents editorial work and raises profound ethical and legal questions about consent and digital legacy.
- The controversy erupts amid global scrutiny of AI data practices, potentially influencing ongoing debates about copyright law, fair use, and the ethics of “digital resurrection.”
- Superhuman’s defense rests on the public availability of source material, positioning the company at the aggressive forefront of commercial AI data utilization.
In March 2026, Grammarly—which had rebranded under its parent company name, Superhuman, in late 2025—ignited a major ethics firestorm. The catalyst was its AI-powered “Expert Review” feature, which drew fierce condemnation from the very experts whose identities and life’s work were used without consent to commercially power the tool’s writing suggestions. The feature, promising feedback “from the perspective” of specific authorities, became a flashpoint in the broader debate over how artificial intelligence appropriates human creativity and reputation.
The Mechanics of Appropriation and the Ethical Breach
Launched in August 2025, the “Expert Review” feature allows users to select from a list of subject matter experts. The underlying AI model, trained on a corpus of that individual’s publicly available writings, then generates stylistic and analytical suggestions framed as emanating from that expert’s viewpoint. The list includes living journalists from major outlets like The New York Times, The Atlantic, and Wired, as well as figures like the renowned historian David Abulafia, who passed away in December 2025.
The ethical breach is multifaceted. First is the fundamental lack of consent for the commercial exploitation of a professional’s identity and hard-earned reputation. The tool effectively creates a digital simulacrum of an expert, implying an endorsement or involvement that does not exist. This issue is particularly acute in the case of recently deceased individuals like Abulafia, whose estates have had no opportunity to manage or consent to this posthumous use of their legacy—a practice critics are calling a form of “digital resurrection.”
Second, the feature operates on a flawed premise, according to its detractors. It suggests an AI can replicate the private, nuanced editorial judgment of a human expert. Vanessa Heggie, a historian of science and medicine at the University of Birmingham, whose work was reportedly included, labeled the practice “obscene.” She and others argue it reduces complex, contextual expertise to a pattern-matching algorithm, inherently incapable of true understanding or the ethical responsibilities of peer review.
Furthermore, the tool has demonstrated tangible inaccuracies, such as presenting living experts with outdated job titles or affiliations. This undermines its claimed authority. Nilay Patel, Editor-in-Chief of The Verge, critiqued the core concept, stating that such features misunderstand that expert analysis is not merely a style to be mimicked but a process of critical thinking that cannot be automated.
The commercial calculus is clear for Superhuman: offering “expert” feedback differentiates its product in a crowded market. For the individuals featured, however, it represents a non-consensual appropriation of their persona for another entity’s profit, with no compensation, attribution beyond name-dropping, or control over how their digital ghost is deployed.
Broader Implications for AI, Copyright, and Digital Legacy
The Grammarly “Expert Review” controversy is not an isolated incident but a concentrated example of tensions simmering across the AI industry. It directly engages with several of the most contentious issues in technology policy today.
Legally, it presses on the ambiguous boundaries of copyright law and the “fair use” doctrine as applied to AI training. Superhuman’s defense, articulated by company Vice President Alex Gay, hinges on the argument that the tool uses “publicly available and widely cited” works. This represents a maximalist interpretation of current law, testing whether the ingestion of copyrighted material for commercial AI training and the subsequent generation of outputs that evoke a specific author’s style constitutes infringement or a transformative fair use. The case could provide a real-world test for emerging AI-specific legislation, such as the EU AI Act’s provisions on transparency and copyright.
Ethically, it forces a societal conversation about digital legacy and autonomy in the AI age. The use of Abulafia’s work highlights the question: who controls a person’s intellectual footprint after death? The ability of AI to reconstitute a version of a person’s voice or analytical style posthumously—without permission—raises profound questions about memory, respect, and the potential for manipulation.
Competitively, the move signals an aggressive stance on data utilization. In the race among AI writing assistants such as QuillBot and Wordtune, Superhuman appears willing to leverage publicly available data to its fullest extent, betting that legal permissibility will outpace ethical objections. This could pressure competitors to follow suit or, conversely, create a market opportunity for tools that champion fully permission-based, ethically sourced training models.
The potential for legal repercussions is significant. Affected individuals or estates could pursue class-action lawsuits alleging violation of publicity rights, copyright infringement, or unfair commercial practices. Such litigation could become a catalyst, forcing courts to establish clearer precedents and potentially accelerating the creation of industry-wide standards for consent and attribution.
Industry and Expert Reaction: A Defense and a Demand for Change
The reaction to the feature has crystallized into a stark confrontation between Superhuman’s corporate logic and the values of the expert community it sought to leverage.
Superhuman’s defense, as presented by VP Alex Gay, is narrowly legalistic and utilitarian. Gay stated the feature is intended to “direct users to some of the most influential voices in the public discourse” and is built on a foundation of “publicly available” source material. This framing deliberately sidesteps ethical questions of consent, focusing instead on what is technically allowable under a contested reading of current law. It portrays the tool as a conduit to expertise, even as it bypasses the experts themselves.
This stance has been met with unified and vehement criticism from across media, academia, and digital ethics circles. The backlash, which peaked in early March 2026, is not merely about one feature but what it represents: a pattern of extraction where human creativity is mined as a free resource for AI development. Critics argue that “publicly available” should not equate to “free for commercial appropriation without permission or partnership.”
The controversy has amplified a growing demand from content creators, journalists, and scholars for transparent opt-out mechanisms and robust consent frameworks for AI training datasets. It has also heightened user skepticism about the reliability of AI tools that claim authoritative expertise. If the “expert” in the tool has outdated information, what does that say about the accuracy of its advice?
Educationally, the incident provides a concrete, troubling case study. Institutions grappling with AI policy must now consider not only plagiarism by students but also the plagiarism of scholars and professionals by the very tools students might use. It underscores the need for digital literacy that includes a critical understanding of how these systems are built and whose labor they repurpose.
The Bottom Line
Grammarly’s “Expert Review” controversy has done more than generate bad press for Superhuman; it has crystallized abstract debates about AI ethics into a concrete, public dispute over consent, identity, and commercial exploitation. The company has made a calculated, high-risk bet that operating at the furthest edge of legal permissibility is a viable business strategy, even in the face of significant ethical condemnation.
The outcome of this bet will be closely watched across the tech and creative industries. A lack of meaningful legal or regulatory consequence could establish a troubling precedent, normalizing the non-consensual appropriation of professional identity as a standard AI practice. Conversely, successful lawsuits, regulatory action, or severe brand damage could accelerate the push for robust legal and ethical frameworks that explicitly govern how AI systems use personal and professional legacies.
The key developments to monitor will be any formal legal challenges from affected parties, responses from data protection and copyright regulators, and whether competitors in the AI writing space seize the opportunity to differentiate themselves by championing permission-based, ethically transparent models. This controversy is a stark reminder that in the race to build more powerful AI, the question of how we build it—and whose voices we use without asking—remains one of the most critical challenges of the decade.