
JAKARTA - Meta has just launched Make-A-Video, an AI-powered video generator that can create new video content from text or image prompts.

The tool is similar to existing image synthesis systems such as DALL-E and Stable Diffusion, and it can also create variations of existing videos.

As the name implies, Make-A-Video is a new AI system that lets users turn text prompts into short, high-quality video clips.

In its announcement, Meta showed examples of videos generated from text, such as a young couple walking in heavy rain and a teddy bear painting a portrait.

Functionally, Make-A-Video works much like Make-A-Scene, which Meta launched last July, relying on a combination of natural language processing and a generative neural network to turn text prompts into visuals. The difference is that it produces content in a different format: video rather than still images.

"Our intuition is simple, what is the world like and how it describes in paired text data, and learn how the world moves from surveillanceless video footage," the Meta research team wrote in a research paper published yesterday, quoted from Engadget, Friday, September 30.

This approach allowed the research team to reduce the time needed to train the Make-A-Video model and eliminate the need for paired text-video data, while preserving the breadth of today's image generation models, such as their aesthetic diversity and fantastical depictions.

Make-A-Video can also take static images and animate them. For example, a photo of a sea turtle, once processed through the AI model, can be seen swimming in a short video.

The key to Make-A-Video, and why it has arrived sooner than some experts expected, is that it builds on existing work in text-to-image synthesis.

"In all aspects, spatial resolutions, loyalty to text, and quality, Make-A-Video sets a new one in the text-to-video generation, as defined by qualitative and quantitative measures," the researchers said.

Like much of Meta's AI research, Make-A-Video is being released as an open source project. "We are openly sharing this generative AI research and results with the community for their feedback, and will continue to use our responsible AI framework to refine and evolve our approach to this emerging technology," said Meta CEO Mark Zuckerberg.

Meta hasn't made any announcements about how or when Make-A-Video will be publicly available or who will have access to it. However, the company has provided a registration form that people can fill out if they are interested in trying it in the future.
