AI misuse is sparking controversies, prompting regulators to pursue use-focused measures, while debates continue over whether current frameworks can keep pace with the fast-evolving technology.

AI In The Creative Industries: Misuse, Controversy, And The Push For Use-Focused Regulation

2026/01/26 17:44
6 min read

AI is quickly reshaping creative practice, but its misuse is proliferating just as fast. Undisclosed AI-assisted writing, voice and likeness cloning, and AI-generated imagery are repeatedly being exposed after publication, or even after winning awards, sparking high-profile controversies and eroding trust in cultural institutions.

Regulators and platforms are scrambling to respond with a mix of disclosure requirements, content-labeling proposals, provenance and watermarking standards, and targeted enforcement. Yet the current framework remains patchy, slow, and often unclear. How can lawmakers protect creators and consumers without stifling innovation? Are existing rules even capable of keeping pace with the fast-evolving AI landscape? These questions lie at the heart of one of the most urgent debates in technology and creativity today.

Among the most notable AI controversies of the past few years is Rie Qudan’s Sympathy Tower Tokyo, winner of the 2024 Akutagawa Prize. The author disclosed that roughly 5% of the novel—primarily the responses of an in-story chatbot—was generated using ChatGPT. The revelation ignited debate about authorship and transparency in literature. Critics were divided: some praised the work as an innovative use of AI to explore language and technology, while others viewed it as a challenge to traditional norms of original authorship and literary integrity. Coverage in major outlets emphasized the book’s themes—justice, empathy, and the social effects of AI—and the procedural questions raised by incorporating generative models in prize-winning work, prompting calls for clearer disclosure standards and reconsideration of award criteria. The case has become a touchstone in broader conversations about creative agency, copyright, and the ethical limits of AI assistance in the arts, with lasting implications for publishers, prize committees, and authorship norms.

Another high-profile incident involved Lena McDonald’s Darkhollow Academy: Year Two, where readers discovered an AI prompt and editing note embedded in chapter three. This accidental disclosure revealed that the author had used an AI tool to mimic another writer’s style, sparking immediate backlash and widespread coverage. The incident highlighted the limits of current publishing workflows and the need for clear norms around AI-assisted writing. It intensified calls for transparency, provoked discussions about editorial oversight and quality control, and fueled broader debates over attribution, stylistic mimicry, and intellectual-property risks in commercial fiction.

In visual arts, German photographer Boris Eldagsen sparked controversy when an image he submitted to the Sony World Photography Awards was revealed to be entirely AI-generated. The work initially won the Creative Open category, prompting debates about the boundaries between AI-generated content and traditional photography. The photographer ultimately declined the prize, while critics and industry figures questioned how competitions should treat AI-assisted or AI-generated entries.

The music industry has faced similar challenges. The British EDM track “I Run” by Haven became a high-profile AI controversy in 2025 after it was revealed that the song’s lead vocals had been generated using synthetic-voice technology resembling a real artist. Major streaming platforms removed the track for violating impersonation and copyright rules, provoking widespread condemnation and renewed calls for explicit consent and attribution when AI mimics living performers. The episode also accelerated policy and legal debates over how streaming services, rights holders, and regulators should manage AI-assisted music to protect artists, enforce copyright, and preserve trust in creative attribution.

Regulators Grapple With AI Harms: EU, US, UK, And Italy Roll Out Risk-Based Frameworks 

The problem of harms from AI use—including cases where creatives pass off AI-generated work as human-made—has become a pressing issue, and emerging regulatory frameworks are beginning to address it.

The European Union’s AI Act establishes a risk-based legal framework that entered into force in 2024, with phased obligations running through 2026–2027. The law requires transparency for generative systems, including labelling of AI-generated content in certain contexts; mandates risk assessments and governance for high-risk applications; and empowers both the EU AI Office and national regulators to enforce compliance. These provisions directly target challenges such as undisclosed AI-generated media and opaque model training.

National legislators are also moving quickly in some areas. Italy, for example, advanced a comprehensive national AI law in 2025, imposing stricter penalties for harmful uses such as deepfake crimes, and codifying transparency and human oversight requirements—demonstrating how local lawmaking can supplement EU-level rules. The EU Commission is simultaneously developing non-binding instruments and industry codes of practice, particularly for General Purpose AI, though rollout has faced delays and industry pushback, reflecting the difficulty of producing timely, practical rules for rapidly evolving technologies.

The UK has adopted a “pro-innovation” regulatory approach, combining government white papers, sector-specific guidance from regulators such as Ofcom and the ICO, and principles-based oversight emphasizing safety, transparency, fairness, and accountability. Rather than imposing a single EU-style code, UK authorities are focusing on guidance and gradually building oversight capacity.

In the United States, policymakers have pursued a sectoral, agency-led strategy anchored by Executive Order 14110 from October 2023, which coordinates federal action on safe, secure, and trustworthy AI. This approach emphasizes risk management, safety testing, and targeted rulemaking, with interagency documents such as America’s AI Action Plan providing guidance, standards development, and procurement rules rather than a single comprehensive statute.

Martin Casado Advocates Use-Focused AI Regulation To Protect Creatives Without Stifling Innovation

For creatives and platforms, the practical implications are clear. Regulators are pushing for stronger disclosure requirements, including clear labelling of AI-generated content, consent rules for voice and likeness cloning, provenance and watermarking standards for generated media, and tighter copyright and derivative-use regulations. These measures aim to prevent impersonation, protect performers and authors, and improve accountability for platforms hosting potentially misleading content, essentially implementing the “use-focused” regulatory approach recommended by Andreessen Horowitz general partner Martin Casado in an a16z podcast episode.

He argues that policy should prioritize how AI is deployed and the concrete harms it can cause, rather than attempting to police AI model development itself, which is fast-moving, difficult to define, and easy to evade. The venture capitalist warns that overbroad, development-focused rules could chill open research and weaken innovation. 

Martin Casado emphasizes that illegal or harmful activities carried out using AI should remain prosecutable under existing law, and that regulation should first ensure that criminal, consumer-protection, civil-rights, and antitrust statutes are enforced effectively. Where gaps remain, he advocates for new legislation grounded in empirical evidence and narrowly targeted at specific risks, rather than broad, speculative mandates that could stifle technological progress.

According to the expert, it is important to maintain openness in AI development, such as supporting open-source models, to preserve long-term innovation and competitiveness while ensuring that regulatory measures remain precise, practical, and focused on real-world harms.

The post AI In The Creative Industries: Misuse, Controversy, And The Push For Use-Focused Regulation appeared first on Metaverse Post.

Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact crypto.news@mexc.com for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.