Meta introduces a new AI model which can create video with sound

Meta (the owner of Facebook and Instagram) announced on Friday that it had developed a new artificial-intelligence model called Movie Gen, which can create realistic audio and video clips in response to user prompts. The company claimed the model could rival tools from leading media-generation companies such as OpenAI and ElevenLabs.

Meta provided samples of Movie Gen’s work, including videos of animals surfing and swimming, as well as clips that used real people’s photos to depict them performing actions such as painting on a canvas.

Meta stated in a blogpost that Movie Gen can also generate background music and sound effects synced to a video’s content, and can be used to edit users’ existing videos.

In one video, Meta used the tool to insert pompoms into the hands of a person running in the desert. In another, it transformed a dry parking area where a skateboarder was riding into one covered by a puddle.

Meta stated that Movie Gen videos can last up to 16 seconds, and audio up to 45 seconds. Meta shared blind-test data showing the model performed well compared with offerings from companies such as Runway, OpenAI, ElevenLabs and Kling.

Microsoft-backed OpenAI first demonstrated in February how its product Sora can create feature-film-like video in response to text instructions.

The entertainment industry is eager to embrace such tools to improve and speed up filmmaking. Others, however, are concerned about systems that appear to have been trained on copyrighted works without permission.

Lawmakers have also expressed concern about the use of AI-generated fakes, or deepfakes, in elections around the globe, including in the US, Pakistan and India.

Meta spokespersons said that the company would not release Movie Gen to developers for free use, as it did with its Llama large language models. They added that the company considers each model’s risks separately, and declined to comment specifically on Meta’s assessment of Movie Gen.

Instead, Meta said it was working directly with entertainment and other content creators on uses of Movie Gen, and would integrate the tool into its own products by next year.

According to a blogpost by Meta and a paper on the tool, the company built Movie Gen using a mixture of publicly available and licensed datasets.

OpenAI met with Hollywood executives this year to discuss potential partnerships for Sora, but no deal has yet been announced. In May, Scarlett Johansson accused OpenAI of imitating her voice for its ChatGPT chatbot without her permission.

In September, Lions Gate Entertainment (the company behind The Hunger Games, Twilight and other films) announced that it would give AI startup Runway access to its entire film and TV library to train a model. The studio and its filmmakers can then use the model as a tool to enhance their work, according to the statement.

Post Disclaimer

The following content has been published by Stockmark.IT. All information utilised in the creation of this communication has been gathered from publicly available sources that we consider reliable. Nevertheless, we cannot guarantee the accuracy or completeness of this communication.