Runway Gen-3 AI Videos Are Kinda Blowing My Mind!

Published 2024-06-30
Let's play with Runway Gen-3! The video mentions it's coming in a couple of weeks, but it's actually now available to everyone! They've also released a new prompting guide here to get the best results: help.runwayml.com/hc/en-us/articles/30586818553107…

Discover More From Me:
🛠️ Explore thousands of AI Tools: futuretools.io/
📰 Weekly Newsletter: www.futuretools.io/newsletter
🎙️ The Next Wave Podcast: ‪@TheNextWavePod‬
😊 Discord Community: futuretools.io/discord
❌ Follow me on X: x.com/mreflow
🧵 Follow me on Instagram: www.instagram.com/mr.eflow

Sponsorship/Media Inquiries: tally.so/r/nrBVlp

#AINews #AITools #ArtificialIntelligence

All Comments (21)
  • @fast4549
    By the time Sora finally becomes available we’ll probably already have an open source version or free one that’s just as good
  • @John43426
    While it's really impressive how far we've come with AI video, I'd really like to see how specific we can get with it. Your prompts were kinda generic. What happens if I want a character to wear very specific clothing? Colours, material, fit? Can it do a coat with mother-of-pearl buttons, or do we get any old buttons? Image gen looks impressive as long as you do generic portraits of people, but breaks if you do anything complicated and out of the ordinary. Try prompting for a handstand or someone dangling head-down from a tree branch. To be useful for serious production work and not just a toy, you need to have a granular level of control. I know I'm asking for a lot, since we're only at the beginning stages yet, but it'd be nice if you really stress-tested it, so we know where we stand at the moment. Still, great work, Matt! 🙂
  • @JhonataCosmo
    I think it will be better when the image to video option becomes available.
  • @SuzyTurner
    The band playing music beneath the ocean was pretty cool! Even the floating microphone looked kinda real lol!
  • Low-quality clipart video for people who really don't need anything particularly good. It'll do, because it doesn't really matter — no one pays much attention to it anyway. This will be great for those adverts at the side of blog posts that everyone's trying hard not to accidentally click on, so they can focus on the thing they googled in the first place. Marketers will love it. As long as it moves, it'll do.
  • @hqcart1
    Next time you see a demo, expect that they generated hundreds of videos to make one demo video!
  • @gileneusz
    Compared to generations from a year ago, it's a big improvement. Next year's video generations should be even better.
  • @maxington26
    These funky artefacts heavily remind me of image gen around DALL-E 2, Midjourney 4 and SD 1.5. As we've seen those ironed out and improved in later versions, now we're seeing better hand generation and legible text in the newer versions of those image gens - can we expect similar, along with improvements in temporal consistency, in these video gen models? Pretty excited for the next year, if so. That baguette video blew me away a little bit.
  • @chariots8x230
    That’s cool, but I wish we had more control over the videos created. The videos are starting to look better, but the control is still lacking, which makes these videos not that useful for filmmaking projects. Also, we need ‘Image to Video’ if we want this tool to work with consistent characters, and to create scenes for films. I want to be the director of the film, rather than letting the AI have all of the control over how a scene looks.
  • Gen 3 said ‘not available in your country’. LTX was available today and it was like an interface for the usual crappy vid generators. A waste of time and early signup attention. Great vid as always 👌
  • 17:58 the girl on the water is practically perfect, very difficult to see any problems. Impressive!
  • @bigbadallybaby
    I would like a video on why the AI videos do what they do and what the blockers are in making them more realistic - I mean can the output be put back through an AI that corrects all the issues?
  • @DaxLLM
    The color palette it picks is pretty good. Also I like the contrast. I'd say in about another year we'll be looking at some pretty fantastic looking AI videos from all companies. Thank you Matt for putting in the time to show us all!
  • @jeffg4686
    Matt, have them export a depth map video (as an option - more points) with the video, which could be used to composite in some additional stuff - like some animated 3d objects that you add in.
  • @dylan_curious
    So many of these videos look like they’re beyond CGI, they’re really captivating!
  • I just started using Runway Gen-3 a week ago, and so far I've made a zombie video and now a dancing-cats video. I think both turned out great. Yes, image-to-video will make Gen-3 easier to use, but if you know enough about cinematography and camera shots, and you're not too picky about the details, you can do pretty well with just prompts alone in Gen-3.
  • @chariots8x230
    11:02 Imagine if the man & woman hugging here were ‘consistent characters’. I hope it will become easier to create scenes with ‘multiple consistent characters’.
  • @Miresgaldir
    I feel like a lot of the clunk could be resolved in the future with a critique-AI pass of sorts that can check object permanence and physical interactions