Google Introduces A New Feature That Makes It Easier For Users To Find Items With Text And Images
Google has just launched a multisearch feature for Google Lens. (Photo: Doc. Google)

JAKARTA - Google has just launched a multisearch feature for Google Lens, which lets users search with an image and accompanying text to refine the results.

For example, if a user sees an orange dress they like but would rather buy it in green, they can take a picture of the orange dress and type "green" in the text field to surface similar dresses in green.
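Google has not published how multisearch works internally, but conceptually an image-plus-text query can be thought of as blending an image embedding with a text embedding and ranking catalog items by similarity to the blended query. The Python sketch below illustrates only that general idea; the `embed_image` and `embed_text` functions are hypothetical stand-ins for a real multimodal encoder, and the catalog is a toy example.

```python
# Illustrative sketch only: Google has not published multisearch internals.
# embed_image/embed_text are hypothetical stand-ins for a real multimodal
# model (e.g., a CLIP-style encoder) that maps images and text into a
# shared vector space.
import numpy as np

DIM = 512  # assumed embedding dimensionality


def embed_image(image_path: str) -> np.ndarray:
    """Hypothetical image encoder; returns a unit-length vector."""
    rng = np.random.default_rng(abs(hash(image_path)) % (2**32))
    v = rng.standard_normal(DIM)
    return v / np.linalg.norm(v)


def embed_text(text: str) -> np.ndarray:
    """Hypothetical text encoder; returns a unit-length vector."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(DIM)
    return v / np.linalg.norm(v)


def multisearch(image_path, refinement, catalog, top_k=3):
    """Blend the image query with the text refinement, then rank catalog
    items by cosine similarity to the blended query vector."""
    query = embed_image(image_path) + embed_text(refinement)
    query /= np.linalg.norm(query)
    scores = {name: float(vec @ query) for name, vec in catalog.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]


# Toy catalog of item embeddings; in practice these would come from
# encoding real product images.
catalog = {name: embed_text(name) for name in
           ["green dress", "orange dress", "blue dress", "green shirt"]}

print(multisearch("orange_dress.jpg", "green", catalog))
```

Summing the two embeddings is just one simple fusion strategy; the point is that the text refines rather than replaces the image query.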

There are many possible uses for this tool, but overall Google wants to make search results more relevant. As Google CEO Sundar Pichai has said, the new multisearch feature is part of the company's ongoing effort to use AI to create a search experience that is truly conversational, multimodal, and personal.

Currently, the feature is available only in beta to users in the United States (US) searching in English, and it is aimed at shopping searches. The company has not said when it will arrive in other countries or languages.

To use the feature, users open the Google app, tap the Lens camera icon, then either select a screenshot or take a new photo. From there, they swipe up and tap the "+ Add to your search" button to add text.

As reported by ZDNet on Friday, April 8, Google also said it is exploring ways the feature could be enhanced with the Multitask Unified Model (MUM), Google's latest AI model.

The tech giant recently shared how it uses MUM and other AI models to more effectively convey information about crisis relief to people seeking help.

