
From Marvel to Mayhem: Sora’s AI Puts Copyright and Ethics to the Test

Sora’s ability to generate hyper-realistic video content using well-known fictional characters, celebrities, and copyrighted material has prompted immediate legal and ethical concern. Users have already begun creating short videos featuring recognizable characters from franchises such as Marvel, Disney, and Warner Bros., without the rights holders’ consent. For example, viral clips circulated in early 2025 featuring a deepfake of Spider-Man dancing in Times Square and a remix of classic Disney scenes reimagined with AI-generated dialogue. These clips, while technically user-generated, directly use copyrighted likenesses and intellectual property not licensed for reuse, raising potential violations of U.S. copyright law, including the Digital Millennium Copyright Act (DMCA), as well as case law such as Campbell v. Acuff-Rose (1994), which defines the limits of fair use [1].

Initially, OpenAI deployed an “opt-out” policy, allowing content owners to request their material be excluded from training datasets or outputs. However, rights-holders argued that such a framework placed the burden on them rather than the platform, facilitating unauthorized use in the interim. The backlash was swift and vocal. By October 2025, OpenAI reversed course, implementing an “opt-in” model that requires rights-holders to actively grant permission before their content can be used to train or appear in generated outputs [2]. This shift toward proactive consent reflects growing pressure from creative industries and regulators who view opt-out systems as inadequate to protect intellectual property in generative AI contexts.

The Deepfake Acceleration Problem

Beyond copyright, Sora’s cameo feature introduces another high-stakes issue: deepfake authenticity. By enabling users to insert realistic versions of themselves or others into synthetic video, the app fuels growing concerns about identity manipulation and the erosion of trust in visual media. Deepfakes are no longer confined to fringe internet memes or political hoaxes; they are rapidly becoming tools for misinformation, identity theft, and reputation damage. For instance, in March 2025, police in Madrid investigated a Sora-generated video showing a local councilmember in a fabricated bribery conversation. Though quickly debunked, the clip had already spread across social platforms, damaging public perception and requiring official clarification from the city government [3].

The sophistication of AI-generated video now challenges even trained analysts and digital forensic teams. Traditional detection methods, such as pixel analysis or facial landmark tracking, are becoming less reliable as generative models improve. As a result, municipal governments must begin preparing for a communications environment where video evidence can no longer be assumed authentic without verification. This shift has implications for public safety investigations, court proceedings, and public information campaigns, all of which increasingly rely on video as a primary source of truth. The need for verifiable media is no longer theoretical; it is urgent and operational.
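One practical building block for the verifiable media described above is content fingerprinting at the point of intake: recording a cryptographic digest when footage is first received lets an agency later prove the file has not been altered. The sketch below is illustrative only; the byte strings and workflow are assumptions, not drawn from any specific agency's process.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return a SHA-256 hex digest identifying this exact byte stream."""
    return hashlib.sha256(data).hexdigest()

def is_unaltered(data: bytes, expected_digest: str) -> bool:
    """Recompute the digest and compare against the one recorded at intake."""
    return fingerprint(data) == expected_digest

# At intake: record the digest alongside the evidence file (stand-in bytes here).
original = b"raw video bytes captured at intake"
recorded_digest = fingerprint(original)

# Later: any single-byte change to the file invalidates the fingerprint.
print(is_unaltered(original, recorded_digest))        # True
print(is_unaltered(original + b"x", recorded_digest))  # False
```

A digest like this proves integrity (the bytes are unchanged since intake) but not provenance (who or what produced them); the watermarking and signing measures discussed below address the latter.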

Policy Moves Toward Transparency and Control

In response to mounting pressure, OpenAI and other generative AI firms have begun introducing technical and policy safeguards. OpenAI’s updated developer guidelines now include requirements for visible watermarking and metadata tagging in all Sora-generated content. By embedding cryptographic signatures into video files, these tools help verify authenticity and trace content back to its origin [4]. However, enforcement remains inconsistent, particularly as users can strip metadata or re-encode videos. Municipal IT departments and digital communications teams should monitor these developments closely and consider mandating these features in any city-contracted AI tools or public outreach platforms.
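The signing-and-verification idea can be sketched in a few lines. The example below uses a shared-secret HMAC purely for illustration; OpenAI's actual signing scheme and metadata format are not public in this detail, and real provenance systems (such as C2PA-style manifests) use public-key signatures so that anyone can verify without holding the signing key.

```python
import hashlib
import hmac

SIGNING_KEY = b"demo-key"  # illustrative; production systems use asymmetric keys

def sign_content(video_bytes: bytes) -> str:
    """Produce an authentication tag to embed in the file's metadata."""
    return hmac.new(SIGNING_KEY, video_bytes, hashlib.sha256).hexdigest()

def verify_content(video_bytes: bytes, tag: str) -> bool:
    """Check the embedded tag; fails if the video bytes were edited."""
    return hmac.compare_digest(sign_content(video_bytes), tag)

clip = b"\x00\x01synthetic-video-frames"  # stand-in for real video data
tag = sign_content(clip)
print(verify_content(clip, tag))            # True
print(verify_content(clip + b"\x02", tag))  # False: bytes changed
```

Note that this protection only works while the tag travels with the file: as the article observes, stripping metadata or re-encoding the video removes exactly the information verification depends on.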

In addition, OpenAI has rolled out a user verification process for those wishing to use Sora’s more advanced features, such as cameo insertion and high-fidelity rendering. These steps aim to prevent impersonation by requiring identity confirmation before generating videos with realistic likenesses. While not foolproof, this move represents a step toward responsible deployment. Other platforms, such as Meta and TikTok, are also experimenting with similar measures, suggesting a broader industry trend toward layered authentication [5]. Municipal leaders should take note: any AI tools adopted locally must be evaluated not just for functionality but also for their transparency, auditability, and compliance with consent protocols.

Action Steps for Municipal AI Governance

City governments, public libraries, school districts, and transit agencies are increasingly exposed to generative AI risks due to their reliance on digital communications and data-driven services. The time to establish local AI governance frameworks is now. First, municipal leaders should require transparency in AI-generated content used in public messaging or services. This includes mandatory watermarking and disclosure statements for all AI-generated audio, video, or images distributed through official channels. These requirements can be codified through local ordinances or procurement specifications for vendors working with city data or communications systems.

Second, agencies should implement audit trails for any generative AI tools procured or developed internally. These logs should track prompts, outputs, user actions, and model versions, enabling internal review and external accountability when needed. Third, consent protocols must be formalized, particularly if likenesses of real individuals are used in training or outputs. This includes employees, residents, and public figures. Some municipal legal departments have begun drafting model consent language and data governance policies to guide these efforts. As AI becomes more embedded in civic technology, these foundational steps are critical to protecting public trust and ensuring lawful, ethical deployment.
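An audit trail of the kind described above can start as something as simple as an append-only log with one structured record per generation event. The field names below are illustrative assumptions, not a standard, and a production system would write to durable, access-controlled storage rather than an in-memory buffer.

```python
import datetime
import io
import json

def log_generation(log, *, user: str, prompt: str,
                   model_version: str, output_id: str) -> dict:
    """Append one audit record (JSON Lines) per generation event."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,                  # who issued the prompt
        "prompt": prompt,              # what was requested
        "model_version": model_version,  # which model produced the output
        "output_id": output_id,        # ties the log entry to the artifact
    }
    log.write(json.dumps(record) + "\n")
    return record

# Demo: in production this would be a file opened in append mode.
audit_log = io.StringIO()
rec = log_generation(audit_log, user="comms-team",
                     prompt="park cleanup PSA", model_version="sora-2",
                     output_id="vid-0001")
```

One record per event, each carrying the prompt, user, model version, and output identifier, gives reviewers exactly the reconstruction trail the paragraph above calls for.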

Preparing for a Post-Authenticity Information Ecosystem

The rapid evolution of tools like Sora signals a transition to a post-authenticity media environment, where synthetic video is indistinguishable from real footage. Municipal governments must not only adapt to this reality but lead in creating frameworks that preserve truth and protect residents. Coordination with state and federal bodies will be essential, especially as national legislation like the DEEPFAKES Accountability Act gains traction [6]. However, local jurisdictions can move faster by piloting transparency standards, partnering with universities on AI detection research, and embedding AI education into civic literacy programs.

Local leaders should also consider establishing AI oversight councils composed of technologists, ethicists, community advocates, and legal experts. These bodies can review emerging tools, evaluate procurement decisions, and recommend policy responses to new risks. As seen with Sora, the pace of innovation often outstrips regulation, but cities that act early can build public confidence and resilience. The future of trustworthy digital governance depends on our ability to act with foresight today.

Bibliography

  1. U.S. Supreme Court. Campbell v. Acuff-Rose Music, Inc., 510 U.S. 569 (1994).

  2. OpenAI. “Copyright and Rights Management Update.” October 2025. https://openai.com/blog/copyright-policy-update.

  3. García, Laura. “Madrid Politician Targeted by AI-Generated Deepfake.” El País, March 22, 2025. https://elpais.com/espana/2025/03/22/deepfake-investigation.

  4. OpenAI. “Sora Developer Guidelines.” April 2025. https://openai.com/sora/guidelines.

  5. Meta. “Responsible AI Principles and Verification Practices.” Meta AI Research, August 2024. https://ai.meta.com/responsible-ai-principles.

  6. U.S. Congress. “DEEPFAKES Accountability Act of 2024.” H.R. 4029, 118th Congress. https://congress.gov/bill/118th-congress/house-bill/4029.
