
JAKARTA - Meta has announced a new breakthrough in Artificial Intelligence (AI) with the release of its Segment Anything Model (SAM), which could have a major impact on the future of AI.

"We are releasing our general Segment Anything Model and our Segment Anything 1-Billion mask (SA-1B) dataset, the largest segmentation dataset ever, to enable a broad set of applications and to foster further research into foundation models for computer vision," Meta said in an official blog post quoted on Thursday, April 6.

Trained on the diverse data in SA-1B, Segment Anything can generalize to objects in new images or videos beyond what it observed during training.

As seen in Meta's demo, Segment Anything can identify each fruit in a box.

According to the company, Segment Anything can serve the AI research community and others as a component in larger systems aimed at a general multimodal understanding of the world, such as understanding both the visual content and the text of a web page.

Likewise, in the world of Augmented Reality (AR) and Virtual Reality (VR), Meta's AI model can enable object selection based on where a user is looking and then lift the selected object into 3D.

"Segment Anything could become a powerful component in domains such as AR/VR, content creation, scientific domains, and more general AI systems," said Meta.

Furthermore, Segment Anything can also be used by content creators to power creative applications, such as extracting image regions for collages or video editing.

In fact, Meta claims Segment Anything can also aid scientific study of natural phenomena on Earth or even in outer space, for example by detecting animals or objects to study and track in video.

Meta emphasizes that its final dataset comprises more than 1.1 billion segmentation masks collected from about 11 million licensed, privacy-preserving images.

SA-1B has 400x more masks than any existing segmentation dataset. Images for SA-1B were sourced through a photo provider operating in multiple countries, spanning a wide range of geographic regions and income levels.
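A quick back-of-the-envelope check of the figures quoted above (1.1 billion masks across roughly 11 million images) gives the average annotation density per image; this is a derived estimate from the reported totals, not a figure stated here by Meta:

```python
# Derived from the SA-1B totals reported in the article.
total_masks = 1_100_000_000   # ~1.1 billion segmentation masks
total_images = 11_000_000     # ~11 million licensed images

masks_per_image = total_masks / total_images
print(f"~{masks_per_image:.0f} masks per image on average")  # ~100 masks per image
```

That density, roughly 100 masks per image, hints at how exhaustively each image was annotated compared with earlier segmentation datasets.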

Previous segmentation models required either a person to guide them through interactive segmentation, or training on large numbers of manually annotated objects to perform automatic segmentation.

Now, Segment Anything is a single model that can easily perform either mode of segmentation.

This means practitioners no longer need to collect their own segmentation data or fine-tune a model for their particular use case, saving time and effort.
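The interactive mode described above boils down to a simple interface: the user supplies a prompt (such as a clicked point) and the model returns a mask. The toy sketch below illustrates only that prompt-to-mask interface using a plain flood fill on a grid of pixel values; SAM itself is a learned neural model, not a flood fill, and `segment_from_point` is a hypothetical name used here for illustration:

```python
from collections import deque

def segment_from_point(grid, seed):
    """Toy 'promptable segmentation': given a 2D grid of pixel values and a
    seed point (row, col) acting as the click prompt, return a boolean mask
    of the connected region that shares the seed pixel's value."""
    rows, cols = len(grid), len(grid[0])
    r0, c0 = seed
    target = grid[r0][c0]
    mask = [[False] * cols for _ in range(rows)]
    mask[r0][c0] = True
    queue = deque([seed])
    while queue:  # breadth-first flood fill from the clicked point
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and not mask[nr][nc] and grid[nr][nc] == target:
                mask[nr][nc] = True
                queue.append((nr, nc))
    return mask

# A 3x3 "image": clicking the top-left pixel selects the connected 1-region.
image = [[1, 1, 0],
         [0, 1, 0],
         [0, 0, 2]]
mask = segment_from_point(image, (0, 0))
print(mask)  # [[True, True, False], [False, True, False], [False, False, True]] is NOT expected; see below
```

The returned mask marks only the pixels connected to the click, which is the same input/output contract a promptable segmenter exposes, however different the internals.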


The English, Chinese, Japanese, Arabic, and French versions are generated automatically by AI, so translation inaccuracies may remain; please refer to the Indonesian version as the primary source. (System supported by DigitalSiber.id)