San Diego Union-Tribune

STARTUP RUNWAY TAKES NEXT STEP IN AI-GENERATED VIDEO


Artificial intelligence has made remarkable progress with still images. For months, services like Dall-E and Stable Diffusion have been creating beautiful, arresting and sometimes unsettling pictures. Now, a startup called Runway AI is taking the next step: AI-generated video.

On Monday, New York-based Runway announced the availability of its Gen 2 system, which generates short snippets of video from a few words of user prompts. Users can type in a description of what they want to see — for example, “a cat walking in the rain” — and it will generate a roughly three-second video clip showing just that, or something close. Alternatively, users can upload an image along with a prompt to serve as a reference point for the system.

The product isn’t available to everyone. Runway, which makes AI-based film and editing tools, announced the availability of its Gen 2 AI system via a waitlist; people can sign up for access on a private Discord channel that the company plans to add more users to each week.

The limited launch represents the most high-profile instance of such text-to-video generation outside of a lab. Both Alphabet’s Google and Meta Platforms showed off their own text-to-video efforts last year — with short video clips featuring subjects like a teddy bear washing dishes and a sailboat on a lake — but neither has announced plans to move the work beyond the research stage.

Runway has been working on AI tools since 2018, and raised $50 million late last year. The startup helped create the original version of Stable Diffusion, a text-to-image AI model that has since been popularized and further developed by the company Stability AI.
