Meta Launches New AI Model Movie Gen, Capable of Generating Videos With Sound
Facebook's parent company, Meta Platforms Inc., announced on Friday, 4 October, that it had developed a new AI model called Movie Gen. The model can generate realistic video clips and audio from user prompts, positioning it as a competitor to generative media tools from leading companies such as OpenAI and ElevenLabs.
In examples shared by Meta, Movie Gen produced videos of animals swimming and surfing, as well as clips that used real photos of people to depict them performing activities such as painting on a canvas. Movie Gen can also generate background music and sound effects synchronized with the video content.
🎥 Today we’re premiering Meta Movie Gen: the most advanced media foundation models to-date.
Developed by AI research teams at Meta, Movie Gen delivers state-of-the-art results across a range of capabilities. We’re excited for the potential of this line of research to usher in… pic.twitter.com/NDOnyKOOyq
— AI at Meta (@AIatMeta) October 4, 2024
In addition to generating new videos, the tool can also edit existing ones. In one edited clip, Movie Gen adds pom-poms to the hands of a man running in the desert; in another, it turns a dry parking lot into a puddle of water beneath someone skateboarding.
According to Meta, videos produced by Movie Gen can run up to 16 seconds, while audio can reach 45 seconds. Blind tests showed that Movie Gen performs competitively against products from rivals such as Runway, OpenAI, ElevenLabs, and Kling.
The announcement comes amid an ongoing debate in Hollywood over how to use generative AI video technology. Some technologists in the entertainment industry are enthusiastic about using such tools to speed up production, while others are concerned about systems that appear to have been trained on copyrighted work without permission.
Meta said it has no plans to release Movie Gen publicly for developers, in contrast to its Llama series of large language models, which has been released openly. Instead, Meta is working directly with the entertainment community and content creators, and plans to integrate Movie Gen into its products next year.
Meta used a mix of licensed and publicly available datasets to develop Movie Gen. OpenAI, meanwhile, has reportedly held discussions with Hollywood executives and agents about possible partnerships for similar products, although no agreement has yet been announced.