By Anna Tong
SAN FRANCISCO – Runway, the startup that co-created the popular Stable Diffusion AI image generator, has released an AI model that takes any text description – such as “turtles flying in the sky” – and generates three seconds of matching video footage.
Citing safety and business reasons, Runway is not releasing the model widely to start, nor will it be open-sourced like Stable Diffusion. The text-to-video model, dubbed Gen-2, will initially be available on Discord via a waitlist on the Runway website.
Using AI to generate videos from text inputs is not new. Meta Platforms Inc and Google both released research papers on text-to-video AI models late last year. The difference, said Cristobal Valenzuela, Runway's chief executive, is that Runway's model is being made available to the general public.
Runway hopes that creatives and filmmakers will use the product, Valenzuela said.
(Editing by Stephen Coates)