In the not-so-distant past of February 2024, OpenAI’s Sora burst onto the scene, leaving everyone in a state of sheer astonishment. The video samples released were so uncannily lifelike that it felt like Hollywood might start outsourcing to algorithms. The hype was massive, and the public collectively imagined OpenAI stepping out from its chatbot dominion to conquer the realm of video-generative AI. Sora’s prowess was unmistakable, far outshining anything competitors had to offer, as we so enthusiastically reported earlier.
However, shortly after, Sora vanished into the shadows. While other companies tirelessly cranked out new iterations of their video-generative AI models, Sora seemed to be playing the long game. But now, like a dramatic comeback in a season finale, Sora has returned, not with mere samples, but with actual short films crafted by professionals across diverse fields.
Sora Surpasses Competitors Once Again
Since July 17, OpenAI has treated us to a parade of seven spectacular showcase videos for Sora, each one more dazzling than the last. These videos weren’t just eye candy; they demonstrated Sora’s versatility beyond filmmaking, extending into online platform content and even architecture. Among these visual delights, two videos particularly caught our collective eye:
First up, Tim Fu, the founder of Studio Tim Fu and a former architect at Zaha Hadid Architects, used Sora to craft a short film envisioning futuristic architecture concepts. The realism of the visuals and the fluidity of the models’ movements were nothing short of mesmerizing. Sora’s ability to render the complex geometry of the architecture around the models made the scenes feel convincingly lifelike.
“Beyond images and videos, generative visualization serves as a design process. Spatial quality and materiality can be readily explored in unprecedented speeds, allowing architects and designers to focus on the core values of design instead of the production of visuals.”
Tim Fu, Founder and Designer, Studio Tim Fu
Next, we have a creation from Chris Kittrell, lead singer of the LA-based act Baby Alpaca. His music video for the song “Shadows” was generated with Sora, blending 113 AI-prompted clips with 20 filmed overlays. The result was stunning. Sora maintained a consistent, lifelike representation of Chris throughout the video, seamlessly integrating animals that moved with natural grace.
“Sora allowed me to create locations and character actions to tell my narrative in ways that would have been impossible with a crew of one and a limited budget.”
Chris Kittrell, Lead Singer of Baby Alpaca
Now, after seeing these two Sora-generated videos made for real-world use cases, it’s time to turn our attention to the reigning champion of video-generative AI, Runway, and see how its creations compare.
The video you just witnessed was crafted by the acclaimed director and producer Gabe Michael, who spent a frantic 48 hours bringing it to life using Runway Gen-2. His efforts earned him the Best Art Direction award at Runway’s own GEN:48 competition. However, while the artistic direction here diverged from Sora’s showcases above, one can’t help but notice that the environment in Michael’s video feels less dynamic, more like a series of static images stitched together than a seamless video captured by a camera.
This static quality stems in part from Runway Gen-2’s short clip length: it generates only about four seconds of video per pass, extendable to roughly 16 seconds in total. In contrast, Sora boasts the capability to generate videos up to a minute long from a single prompt. Runway recently unveiled its Gen-3 Alpha model, promising higher fidelity and enhanced video editing tools, yet it only pushes a single generation to 10 seconds, still a far cry from Sora’s impressive minute-long feats.
That said, Sora remains tantalizingly out of reach, with insiders speculating about a release in the last quarter of 2024. As we’ve mentioned before, if Sora can maintain this level of production quality through to its launch, OpenAI is poised to become the undisputed leader in generative AI. Given that OpenAI is currently fine-tuning Sora with input from filmmakers and professionals to establish real-world use cases, there’s a strong possibility we’ll see a remarkably capable product upon release. The burning question remains: when will that be?