Introduction
OpenAI is reportedly building a standalone social app powered by Sora 2, its next-generation text-to-video model. The app presents a vertical, swipeable short-video feed where clips are created by AI rather than uploaded user footage — a radical rethinking of short-form social media that raises fresh safety, copyright and moderation questions.
The project was trialed internally and has drawn attention as rivals race to bring short AI-generated video clips to their own feeds.
What the Sora 2 app will look like — a TikTok-style, AI-first feed
According to reporting, the Sora 2 app will mirror the vertical "For You" browsing experience popularized by TikTok: short (roughly 10-second) clips, swipe navigation and algorithmic recommendations.
Unlike TikTok, however, every piece of content would be produced by OpenAI's video generator, with users prompted to create and remix AI-generated scenes rather than uploading existing video files. The internal trial, which reportedly began last week, is said to have drawn strong employee engagement.
Why that matters:

Centering a social network on synthetic content changes moderation and IP dynamics — platforms historically built policies around real user footage and creator uploads. An AI-first feed can scale content production quickly but also risks proliferating deepfakes, hallucinated facts, and unlicensed derivative work unless safeguards are robust.
Copyright approach: opt-out for rights holders, limits on likenesses
Reports indicate OpenAI plans an opt-out approach for copyrighted materials: rights holders will be able to ask that their works not be used for Sora training or generation, but the default could allow inclusion unless opted out.
OpenAI also reportedly plans to block generation of recognizable public figures without permission and to add identity-verification tools so users can authorize use of their likeness. Those policies aim to strike a balance between broad creative capability and legal risk — but they will likely be contested by creators and studios.
Safety and moderation: identity verification, notifications, and limits
Wired’s reporting says the app would notify people when their verified likeness is used — even for drafts — and include safeguards aimed at child safety. OpenAI faces a tricky trade-off: identity verification helps protect individuals from unauthorized deepfakes but can raise privacy concerns and friction for users.
The company is also contending with lawsuits and regulatory scrutiny over training data and content provenance, which will shape the app’s rollout and feature set.
Expert view: opportunity and risk
Industry observers note that a polished, AI-first short-video product could unlock new creativity and discoverability — allowing anyone to produce cinematic clips from text prompts — but warn the technology also amplifies harm vectors.
“If you give people a low-friction way to create realistic synthetic video, you must invest heavily in detection, provenance and user controls,” said a media-policy researcher responding to the reports.
Business logic: why OpenAI might build a social app

For OpenAI, a dedicated Sora 2 app is both a product play and a distribution strategy: owning the platform gives the company control over how synthetic content is created, credited and monetized, and it can showcase the model’s capabilities in a consumer product rather than relying solely on API partners.
It also positions OpenAI more directly against Meta, Google and startup rivals that are folding generative video into feeds. But platform ownership also brings moderation and legal costs at scale.
What to watch next
- Rollout details: whether OpenAI launches via regional pilots or invite-only testing, and whether the app stays limited to short clips.
- Copyright mechanics: how the opt-out will work in practice and whether rights holders accept it.
- Safety tools: identity verification UX, notification thresholds, and automated detectors for harmful content.
- Regulatory response: EU and US lawmakers are already scrutinizing synthetic media and training data; a social app will draw close attention.
Bottom line: OpenAI’s Sora 2 app, if launched as reported, could reshape short-form social media by making every clip synthetic and instantly remixable.
That novelty promises creative democratization but also concentrates legal, ethical and moderation responsibilities on OpenAI — testing whether an AI company can run a social platform at global scale while satisfying creators, users and regulators.
Frequently Asked Questions
What is the Sora 2 app?
It’s a reported standalone social app from OpenAI that uses the Sora 2 text-to-video model to generate short, vertical AI videos for a swipeable feed.
Will the app let me upload my own videos?
Early reports say Sora 2 focuses on AI-generated clips and does not support uploads from device storage; users create content with prompts.
How will OpenAI handle copyrighted content?
Reports say OpenAI plans an opt-out policy for copyrighted works and will block generation of recognizable public figures without permission. How rights holders will be notified of the opt-out mechanism has not been detailed.
Will people be notified if their likeness is used?
Yes — Wired reports the app will notify verified users when their likeness appears in a video draft or final clip. Identity verification is part of the proposed safety design.
Is Sora 2 already available to the public?
Not yet. The app was reportedly launched internally and is in early testing; a public launch and formal announcements may follow after legal and safety work.
Author note: I’m a technology reporter summarizing reporting from Wired, Reuters, WSJ and other outlets. This article uses company pages and reporting to outline how a Sora 2 app might work and the legal and safety trade-offs OpenAI will face. I used cautious language for unconfirmed details and will update if OpenAI releases official statements.