By Dex Monroe | April 5, 2026 | 4 min read | 🤖 AI-assisted
Grammarly's 'Expert Review' Feature Sparks Outrage Over Unauthorized Use of Likenesses
Grammarly’s controversial new feature, “Expert Review,” has sparked backlash after it was revealed to be using the likenesses of renowned writers and academics without permission.
Grammarly's recent unveiling of its "Expert Review" feature has ignited a firestorm of criticism after it came to light that the AI writing assistant used the names and likenesses of prominent authors and academics without their consent. What was initially billed as a game-changing addition to the platform has morphed into a scandal that raises serious ethical questions about AI and intellectual property rights.
Originally founded as a digital writing assistant, Grammarly has evolved into a more ambitious AI-centric entity, rebranding itself as Superhuman in October 2025. The pivot coincided with its acquisition of the AI email client Superhuman Mail and marked a clear shift toward integrating AI tools into its offerings. Despite the change, the company promised that the Grammarly brand would remain intact, suggesting it was determined to expand its capabilities while keeping its core audience.
The "Expert Review" feature, launched quietly in August 2025, was intended to tap into the credibility of well-known experts by offering users AI-generated writing suggestions "inspired by" those figures. Reports soon surfaced, however, that the feature was invoking names like Stephen King, Neil deGrasse Tyson, and Carl Sagan with little more than a mild disclaimer noting that these individuals had no affiliation with Grammarly.
That disclaimer, which stated only that the quoted experts did not endorse the service, proved insufficient to quell the outrage once the news broke. Wired's March 4th exposé revealed that the tool was using the likenesses of deceased professors and other notable personalities, generating a wave of backlash from users and experts alike. The core issue: the experts were neither consulted nor compensated for the use of their likenesses, leading to accusations of unethical practices in AI-generated content.
The digital content landscape is becoming increasingly complex as AI tools proliferate, bringing both convenience and controversy. As companies like Grammarly expand their offerings, they must navigate a minefield of ethical concerns. The "Expert Review" debacle underscores a critical point: what constitutes fair use in an era where AI can easily generate outputs based on the work and reputation of others?
Critics argue that by leveraging the names of established figures without their consent, Grammarly not only undermines the integrity of those individuals but also risks damaging its own reputation. Trust is paramount in the tech space, and for a brand like Grammarly, which markets itself on enhancing the quality of writing, ethical missteps can have far-reaching consequences.
With AI technology advancing rapidly, the implications of this situation could extend beyond just Grammarly. As more companies incorporate AI-generated content into their platforms, the need for clear guidelines and ethical standards becomes increasingly urgent. Who owns the rights to a name or likeness in the digital landscape? How do we protect the intellectual property of individuals who, in some cases, may no longer be able to speak for themselves?
In light of these questions, Grammarly has since removed the problematic feature, but the damage may already be done. The company faces an uphill battle to regain the trust of its user base, especially as consumers become more aware and critical of how AI tools operate.
As the tech industry grapples with these evolving challenges, the Grammarly saga serves as a cautionary tale. Companies must prioritize ethical considerations alongside innovation to build a sustainable future—one where creativity and technology coexist without infringing on the rights of individuals.
In an age where every keystroke can be monitored and modified by AI, the responsibility lies with companies to ensure they are not just pushing the boundaries of what technology can do, but also respecting the individuals who have paved the way for those advancements.
As the conversation around AI continues to evolve, it’s crucial for both tech creators and consumers to remain vigilant. The lessons learned from Grammarly's misstep could ultimately shape the future of AI development and the ethical standards that will govern it.
For anyone building or adopting AI tools, the episode is worth studying closely. As the industry moves forward, let this incident serve as a reminder of the importance of consent and transparency in the digital age.