Meta Adds Multimodal AI and Bing Search to Ray-Ban Glasses
Meta tests its multimodal AI model in Ray-Ban smart glasses (photo: courtesy of Meta)

JAKARTA – Meta's Ray-Ban smart glasses recently went viral on social media. Amid the high interest in the glasses, Meta has decided to add new features.

Announced on Tuesday, December 12, Meta said the new feature it will add is a multimodal Artificial Intelligence (AI) model. Unlike ordinary AI, a multimodal model can process, understand, and generate more than one type of data, such as text, images, and audio.

With the multimodal AI feature, the Ray-Ban glasses can understand and process the user's spoken commands together with what their built-in camera sees. The feature can also write captions or describe objects.

"You can ask Meta AI to write a caption for photos taken while climbing or you can ask Meta AI to describe the object you are holding," said Meta in its official release.

Meta also said it is working with Microsoft's Bing to retrieve information in real time. With the Bing integration, users can look up sports scores, restaurant information, stock prices, and much more.

To take advantage of the Bing support, users simply say something like, "Hey Meta, who won this year's Boston Marathon in the men's division?" The glasses will then search for the answer in real time.

The multimodal AI feature is still being tested by Meta. Although it has not yet rolled out to all Ray-Ban users, they can try it through the early access program Meta has launched.

All Ray-Ban owners are eligible to try the smart glasses' latest features and provide feedback. However, Meta warns that the new features may not work reliably while they are still in testing.

