From Marvel to Mayhem: Sora’s AI Puts Copyright and Ethics to the Test

Sora’s ability to generate hyper-realistic video using well-known fictional characters, celebrities, and copyrighted material has prompted immediate legal and ethical concern. Users have already begun creating short videos featuring recognizable characters from franchises such as Marvel, Disney, and Warner Bros. without the rights holders’ consent. In early 2025, for example, viral clips circulated showing a deepfake of Spider-Man dancing in Times Square and classic Disney scenes reimagined with AI-generated dialogue. These clips, while technically user-generated, directly use copyrighted likenesses and intellectual property that were never licensed for reuse, raising potential violations of U.S. copyright law under the Digital Millennium Copyright Act (DMCA) and related case law such as Campbell v. Acuff-Rose (1994), which helps define the limits of fair use.¹

Initially, OpenAI deployed an “opt-out” policy, allowing content owners to request that their material be excluded from training datasets or outputs. However, rights holders argued that such a framework placed the burden on them rather than on the platform, facilitating unauthorized use in the interim. The backlash was swift and vocal. By October 2025, OpenAI reversed course, implementing an “opt-in” model that requires rights holders to actively grant permission before their content can be used to train models or appear in generated outputs.² This shift toward proactive consent reflects growing pressure from creative industries and regulators who view opt-out systems as inadequate to protect intellectual property in generative AI contexts.

The Deepfake Acceleration Problem

Beyond copyright, Sora’s cameo feature introduces another high-stakes issue: deepfake authenticity. By enabling users to insert realistic versions of themselves or others into synthetic video, the app fuels growing concerns about identity manipulation and the erosion of trust in visual media. Deepfakes are no longer confined to fringe internet memes or political hoaxes; they are rapidly becoming tools for misinformation, identity theft, and reputation damage. For instance, in March 2025, police in Madrid investigated a Sora-generated video showing a local councilmember in a fabricated bribery conversation. Though quickly debunked, the clip had already spread across social platforms, damaging public perception and requiring official clarification from the city government.³

The sophistication of AI-generated video now challenges even trained analysts and digital forensic teams. Traditional detection methods, such as pixel analysis or facial landmark tracking, are becoming less reliable as generative models improve. As a result, municipal governments must begin preparing for a communications environment where video evidence can no longer be assumed authentic without verification. This shift has implications for public safety investigations, court proceedings, and public information campaigns, all of which increasingly rely on video as a primary source of truth. The need for verifiable media is no longer theoretical; it is urgent and operational.
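There is no single technical fix, but one simple building block for verifiable media is cryptographic hashing: an agency that releases official footage can also publish its SHA-256 digest, so anyone can check whether a circulating copy is byte-for-byte identical to the original. The sketch below illustrates that idea in Python using only the standard library; the file name and published digest are hypothetical placeholders, and a matching hash only proves a copy is unmodified relative to a known original, not that the original itself is genuine.

    import hashlib
    from pathlib import Path

    def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
        """Compute the SHA-256 digest of a file, reading it in 1 MiB chunks."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Hypothetical digest an agency might publish alongside its official footage.
    PUBLISHED_DIGEST = "paste-the-officially-published-sha256-digest-here"

    if __name__ == "__main__":
        candidate = Path("council_meeting_clip.mp4")  # hypothetical circulating copy
        if sha256_of_file(candidate) == PUBLISHED_DIGEST:
            print("Digest matches the published reference; the file is unmodified.")
        else:
            print("Digest mismatch: this copy differs from the official release.")

Provenance standards such as C2PA content credentials aim to go further by embedding signed metadata at capture or publication time, but simple published digests remain a low-cost step agencies can adopt today.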

Policy Moves Toward Transparency and Control

In response to mounting pressure, OpenAI and other generative AI firm
