Text generators like ChatGPT and image generators like Stable Diffusion and DALL-E 2 are all the rage. However, AI video generators are the next frontier for generative AI. Runway AI has revealed Gen-2, an AI model that can generate video clips based on text prompts.
Text-to-image AI is now ubiquitous – easy to use and almost mainstream. The next frontier for generative AI, however, is text-to-video, and luckily for tech enthusiasts, it is just around the corner.
The way text-to-video works is simple and based on a model we have all become familiar with by now: you write a description, and an AI model generates a matching video clip. While most generative AIs can only dream of doing this, an American AI startup called Runway has announced that its generative AI model, which can make videos out of simple text prompts, is nearly ready.
Runway gives its users a web-based video editor with AI features such as background removal and pose detection. In February, the company unveiled its first AI video editing model, Gen-1; it has also collaborated on the open-source text-to-image model Stable Diffusion.
Gen-1 was primarily concerned with transforming existing video footage, letting users feed in a rough 3D animation or shaky smartphone footage and apply an AI-generated overlay.
For example, in the clip in the tweet below, a video of cardboard packaging is combined with a picture of an industrial plant to create a film suitable for storyboarding or pitching a more polished feature.
In contrast, Gen-2 appears to be more focused on generating videos from scratch, though there are several caveats to consider.
First, Runway’s sample clips are brief, unstable, and far from photorealistic; second, access is restricted. According to Bloomberg News, users will have to sign up for a Gen-2 waitlist via Runway’s Discord, and a spokesperson for the company, Kelsey Rondenet, said that Runway will “provide broad access in the coming weeks.”
In other words, all we have to evaluate Gen-2 right now is a demo reel and a few clips, most of which were already being advertised as part of Gen-1.
Generate videos with nothing but words. If you can say it, now you can see it.
Introducing, Text to Video. With Gen-2.
Learn more at https://t.co/PsJh664G0Q pic.twitter.com/6qEgcZ9QV4
— Runway (@runwayml) March 20, 2023
The team also shared some samples with the press to show what their generative AI model is really capable of.
This is just the beginning. #Gen2 pic.twitter.com/meXhB3p0Eh
— Anastasis Germanidis (@agermanidis) March 21, 2023
A couple astronauts landed in a mysterious planet.
Text to video. Gen-2 #nocamera pic.twitter.com/Qxob7d4EdZ
— Alejandro Matamala Ortiz (@matamalaortiz) March 20, 2023
Still, the results are intriguing, and the promise of text-to-video AI is enticing, offering both new creative possibilities and new risks of misinformation. It is also worth comparing Runway’s work with the text-to-video research conducted by industry titans like Meta and Google. Those companies’ AI-generated clips are longer and more cohesive, but not by a margin that reflects their massive resources; Runway, by comparison, is a team of just 45 people.
In other words, smaller companies are continuing to do interesting work in generative AI, including in uncharted terrain like text-to-video. More to come, AI-generated or not.