Over the last year, Meta has made its AI assistant so ubiquitous across its apps that it’s almost hard to believe Meta AI is only a year old. Now, one year after its debut at the last Connect, the company is infusing Meta AI with a load of new features in the hopes that more people will find its assistant useful.
One of the biggest changes is that users will be able to have voice chats with Meta AI. Until now, the only way to speak with Meta AI was via the Ray-Ban Meta smart glasses. And as with last year’s Meta AI launch, the company has tapped a group of celebrities for the change.
Meta AI will be able to take on the voices of Awkwafina, Dame Judi Dench, John Cena, Keegan-Michael Key and Kristen Bell, in addition to a handful of more generic voices. While the company is hoping the celebrities will sell users on Meta AI’s new abilities, it’s worth noting that Meta quietly phased out the celebrity chatbot personas it launched at last year’s Connect.
In addition to voice chat support, Meta AI is also getting new image capabilities. Meta AI will be able to respond to requests to change and edit photos from text chats within Instagram, Messenger and WhatsApp. The company says that users can ask the AI to add or remove objects or to change elements of an image, like swapping a background or clothing item.
The new abilities arrive alongside the company’s latest Llama 3.2 model. The new iteration, which comes barely two months after the Llama 3.1 release, is the first to have vision capabilities and can “bridge the gap between vision and language by extracting details from an image, understanding the scene, and then crafting a sentence or two that could be used as an image caption to help tell the story.” Meta says Llama 3.2 is “competitive” on “image recognition and a range of visual understanding tasks” compared with rival models like ChatGPT and Claude.
The social network is also testing other, potentially controversial, ways to bring AI into the core features of its main apps. The company will test AI-generated translation features for Reels with “automatic dubbing and lip syncing.” According to Meta, the feature “will simulate the speaker’s voice in another language and sync their lips to match.” It will arrive first for “some creators’ videos” in English and Spanish in the US and Latin America, though the company hasn’t shared details on rollout timing.
Meta also plans to experiment with AI-generated content directly in the main feeds on Facebook and Instagram. With the test, Meta AI will surface AI-generated images that are meant to be personalized to each user’s interests and past activity. For example, Meta AI could surface an image “imagined for you” that features your face.