The first music video generated with OpenAI’s unreleased Sora model is here

OpenAI wowed the tech community and many in media and the arts earlier this year, while also ruffling the feathers of traditional videographers and artists, by showing off a new AI model called Sora that generates realistic, high-resolution, smooth video in clips of up to 60 seconds.

The tech remains unreleased to the public for now — OpenAI said at the time, back in February 2024, that it was making Sora “available to red teamers to assess critical areas for harms or risks” and a selected small group of “visual artists, designers, and filmmakers.” But that hasn’t stopped some of the initial wave of users from making and publishing new projects with it.

Now, one of OpenAI’s handpicked Sora early access users, writer and director Paul Trillo, who in March was among the first in the world to demo third-party videos made with the model, has created what is being called the “first official music video made with OpenAI’s Sora.”

The video was made for indie chillwave band Washed Out and its new single “The Hardest Part.” It is essentially a four-minute series of connected, quick zoom shots through different scenes, all stitched together to create the illusion of one continuous zoom. Watch it below:

[Embedded video: Washed Out, “The Hardest Part”]

On his account on the social network X, Trillo posted that he first had the idea for the video 10 years ago but abandoned it. Replying to questions from followers, he said the video was assembled from 55 separate Sora-generated clips, selected from a pool of 700 total, and stitched together in Adobe Premiere.

Separately but relatedly, Adobe recently announced that it is looking to add Sora and other third-party AI video generation models to its subscription Premiere Pro software, though no timeline has been set for the integration. In the meantime, those looking to emulate Trillo’s workflow would have to generate AI video clips in other third-party tools such as Runway or Pika (since Sora remains non-public), then save and import them into Premiere. Not the end of the world, but not as seamless as it could be.
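Trillo’s stitching happened in Premiere, but the underlying idea of selecting generated clips and concatenating them in order is easy to approximate programmatically. Below is a minimal sketch, assuming ffmpeg is installed and that the (hypothetical) clip files share a codec, resolution, and frame rate; it illustrates the general workflow, not Trillo’s actual pipeline.

```python
# Illustrative sketch only: concatenate a folder of generated clips in order
# using ffmpeg's concat demuxer. File and folder names are hypothetical.
import subprocess
from pathlib import Path

# Gather the selected takes in sorted order (e.g. 55 chosen clips).
clips = sorted(Path("sora_clips").glob("zoom_*.mp4"))

# The concat demuxer reads a text file listing the inputs, one per line.
list_file = Path("clips.txt")
list_file.write_text("".join(f"file '{c.as_posix()}'\n" for c in clips))

# "-c copy" avoids re-encoding, which only works if every clip shares the
# same codec, resolution, and frame rate.
subprocess.run(
    ["ffmpeg", "-f", "concat", "-safe", "0",
     "-i", str(list_file), "-c", "copy", "rough_cut.mp4"],
    check=True,
)
```

If the clips do not share identical formats, ffmpeg’s concat filter (which re-encodes) is the usual fallback; an NLE like Premiere handles such mismatches automatically, which is part of why editors prefer it.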

Trillo also posted that he used only the model’s text-to-video capabilities, rather than feeding still images captured or generated elsewhere into the AI to add motion (a popular tactic among artists in the quickly evolving AI video scene).

This example shows the power of Sora in creating media with AI, and it serves as a helpful rejoinder to the recent revelation that “Air Head,” another of the first demo videos, made by Canadian creative studio Shy Kids and featuring a man with a balloon for a head, actually leaned heavily on conventional VFX and editing techniques such as rotoscoping in Adobe After Effects.

It also shows the continued appetite among some creatives, in both music and video, to use new AI tools to express themselves and tell stories, even as many other creatives criticize the technology, and OpenAI in particular, as exploitative, arguing that it violates the copyright of human artists by scraping and training on their prior works without informed consent or compensation.