
Grammarly Lawsuit Explodes as AI ‘Expert Review’ Faces Class Action Over Unauthorized Impersonation

2026/03/13 01:00
Reading time: 8 min

BitcoinWorld

In a landmark legal challenge that strikes at the heart of AI ethics and digital identity, journalist Julia Angwin has filed a class action lawsuit against Grammarly’s parent company Superhuman, alleging the writing assistant platform turned her and hundreds of other experts into unauthorized ‘AI editors’ through its controversial ‘Expert Review’ feature. The lawsuit, filed in federal court, represents a significant escalation in the ongoing debate about AI companies’ use of personal identities without consent.

Grammarly Lawsuit Centers on Unauthorized AI Impersonation

Grammarly released its ‘Expert Review’ feature last week, promising premium users AI-generated feedback that simulated editorial critiques from notable figures including novelist Stephen King, scientist Carl Sagan, and tech journalist Kara Swisher. However, the company failed to secure permission from any of the hundreds of experts whose names and professional identities it utilized. This oversight has triggered immediate legal consequences and widespread criticism across the journalism and technology communities.

The class action lawsuit specifically alleges violations of privacy and publicity rights under both state and federal law. According to court documents, Grammarly’s actions constitute unauthorized commercial use of personal identities for profit. The feature was exclusively available to users paying $144 annually, creating a direct commercial benefit from the unauthorized use of expert names and reputations.

Julia Angwin’s Career-Long Privacy Advocacy

Julia Angwin, the lead plaintiff in the case, brings particular credibility to the lawsuit given her extensive career investigating technology companies’ impacts on privacy. As a Pulitzer Prize-finalist journalist and former investigative reporter for ProPublica, Angwin has authored multiple books on digital surveillance and data privacy. Her statement regarding the lawsuit highlights the personal and professional violation she experienced.

“I have worked for decades honing my skills as a writer and editor,” Angwin stated. “I am distressed to discover that a tech company is selling an imposter version of my hard-earned expertise.” This sentiment reflects broader concerns among creative professionals about AI systems appropriating their identities without compensation or consent.

AI Ethics Experts Also Targeted Without Consent

The scope of Grammarly’s unauthorized use extends beyond journalists to include prominent AI ethicists and researchers. Timnit Gebru, renowned for her work on algorithmic bias and AI ethics, was included in the ‘Expert Review’ feature without her knowledge or approval. This inclusion creates particular irony given Gebru’s extensive public criticism of unethical AI practices and her advocacy for responsible AI development.

Other affected individuals include Casey Newton, founder and editor of Platformer, who discovered his inclusion when testing the feature. Newton fed one of his own articles into the tool and received feedback from Grammarly’s approximation of Kara Swisher. The generic nature of the feedback raised questions about the feature’s fundamental value proposition.

Grammarly’s imitation of Swisher produced feedback so nonspecific that it failed to demonstrate any meaningful understanding of Swisher’s actual editorial style or expertise. The generated question—”Could you briefly compare how daily AI users versus AI skeptics articulate risk, creating a through-line readers can follow?”—was criticized as generic and lacking the incisive quality characteristic of Swisher’s actual work.

Industry Reactions and Legal Precedents

The lawsuit emerges against a backdrop of increasing legal scrutiny of AI companies’ practices. Recent court decisions have begun establishing boundaries for AI training data usage and identity appropriation. The Grammarly case represents one of the first major challenges specifically focused on AI impersonation of living individuals for commercial purposes.

Legal experts note that right of publicity laws, which vary by state but generally protect individuals from unauthorized commercial use of their identity, may provide strong grounds for the plaintiffs. These laws have traditionally applied to celebrity endorsements but are increasingly being tested in digital contexts.

Key Figures in Grammarly ‘Expert Review’ Controversy

Individual | Profession | Status in Feature
Julia Angwin | Investigative Journalist | Lead Plaintiff
Kara Swisher | Tech Journalist | Unauthorized Use
Timnit Gebru | AI Ethicist | Unauthorized Use
Casey Newton | Platformer Editor | Unauthorized Use
Stephen King | Novelist | Unauthorized Use

Grammarly’s Response and Feature Removal

Following the lawsuit filing and mounting public criticism, Grammarly has disabled the ‘Expert Review’ feature. Superhuman CEO Shishir Mehrotra announced the removal via LinkedIn, offering an apology while continuing to defend the underlying concept. Mehrotra’s statement attempted to reframe the controversy while acknowledging the execution flaws.

“Imagine your professor sharpening your essay, your sales leader reshaping a customer pitch, a thoughtful critic challenging your arguments, or a leading expert elevating your proposal,” Mehrotra wrote. “For experts, this is a chance to build that same ubiquitous bond with users, much like Grammarly has.”

This defense has been met with skepticism from affected individuals and industry observers. Critics argue that the fundamental issue isn’t the concept’s execution but rather the basic ethics of using personal identities without consent. The apology’s conditional nature—defending the idea while regretting the implementation—has done little to assuage concerns.

Technical Implementation and Quality Concerns

Beyond the ethical and legal issues, technical analysis of the ‘Expert Review’ feature reveals significant quality concerns. The AI-generated feedback consistently failed to capture the distinctive voices or expertise of the individuals it purported to emulate. Instead, it produced generic writing advice that could have been generated without referencing specific experts.

This raises questions about why Grammarly chose to use real names rather than creating fictional expert personas or generic categories. Industry analysts suggest the company may have believed that name recognition would drive premium subscriptions, underestimating the legal and ethical implications of this approach.

The feature’s technical limitations become particularly apparent when comparing its output to actual editorial feedback from the referenced experts. Real editorial critiques typically demonstrate deep subject matter expertise, distinctive voice, and contextual understanding—qualities the AI system failed to replicate meaningfully.

Broader Implications for AI Industry Practices

The Grammarly lawsuit represents a potential turning point for AI ethics and regulation. As AI systems become increasingly capable of simulating human expertise and identity, legal frameworks struggle to keep pace with technological developments. This case may establish important precedents regarding:

  • Consent requirements for using personal identities in AI systems
  • Commercial boundaries for AI-generated impersonations
  • Compensation frameworks for individuals whose expertise trains AI models
  • Transparency standards for AI features that reference real people

Industry observers note that similar issues are emerging across multiple AI applications, from voice synthesis to digital avatars. The Grammarly case provides a concrete example of how these abstract ethical concerns manifest in real products affecting real people.

Historical Context of Technology and Identity Rights

The current controversy continues a long history of technological innovation outpacing legal and ethical frameworks. Similar debates emerged with photography in the 19th century, television advertising in the mid-20th century, and internet privacy in the early 21st century. Each technological leap required society to renegotiate boundaries around personal identity and commercial use.

What distinguishes the current AI era is the scale and sophistication of identity appropriation. Unlike previous technologies that might use a name or image, AI systems can simulate entire patterns of thought, communication style, and expertise. This creates fundamentally new challenges for existing legal frameworks designed for simpler forms of identity use.

Conclusion

The Grammarly lawsuit over its AI ‘Expert Review’ feature represents a critical test case for AI ethics and identity rights in the digital age. As the class action proceeds through the legal system, it will likely establish important precedents regarding consent, compensation, and commercial boundaries for AI systems that reference or simulate real individuals. The case highlights growing tensions between AI innovation and personal rights, with implications extending far beyond Grammarly to the entire technology industry. Ultimately, this legal challenge may force clearer standards for how AI companies can ethically incorporate human expertise and identity into their products.

FAQs

Q1: What exactly is Grammarly being sued for?
Grammarly faces a class action lawsuit for using hundreds of experts’ names in its AI ‘Expert Review’ feature without obtaining their consent, allegedly violating privacy and publicity rights by commercially exploiting their identities.

Q2: Who is leading the lawsuit against Grammarly?
Investigative journalist Julia Angwin is the lead plaintiff, filing on behalf of herself and other affected individuals whose names were used without permission in Grammarly’s premium AI feature.

Q3: Has Grammarly responded to the lawsuit?
Yes, Grammarly has disabled the ‘Expert Review’ feature and CEO Shishir Mehrotra has issued an apology, though he continued to defend the underlying concept of the feature while acknowledging implementation failures.

Q4: What legal principles does this case involve?
The case centers on right of publicity laws, which protect individuals from unauthorized commercial use of their identity, and privacy rights that prevent commercial exploitation of personal attributes without consent.

Q5: How might this lawsuit affect other AI companies?
The outcome could establish important precedents for consent requirements and commercial boundaries when AI systems reference or simulate real people, potentially affecting numerous AI applications beyond writing assistants.

Q6: What was the quality of Grammarly’s AI-generated expert feedback?
According to tests by affected journalists, the feedback was generic and failed to capture the distinctive expertise or editorial style of the referenced individuals, raising questions about the feature’s fundamental value.

This post Grammarly Lawsuit Explodes as AI ‘Expert Review’ Faces Class Action Over Unauthorized Impersonation first appeared on BitcoinWorld.
