Researchers at NTU Singapore Reveal New Method to Track Human Movement in the Metaverse
Illustration of the Nanyang Technological University campus in Singapore (photo: X @NTUsg)

JAKARTA - A research team from Nanyang Technological University in Singapore recently introduced a new method for tracking human movement in the metaverse.

One of the main features of the metaverse is the ability to represent real-world objects and people in the digital world in real time. In virtual reality, for example, users can turn their heads to change their point of view or manipulate physical controllers in the real world to influence the digital environment.

The status quo for capturing human activity in the metaverse uses device-based sensors, cameras, or a combination of both. However, as the researchers wrote in their preprint paper, both modalities have inherent limitations.

A device-based sensor system, such as a handheld controller with motion sensors, "only captures information at one point of the human body so it cannot model very complex activity," the researchers wrote. Camera-based tracking systems, meanwhile, struggle with low-light environments and physical obstructions.

Enter WiFi Sensing

Scientists have used WiFi sensing to track human movement for years. Much like radar, the radio signals used to send and receive WiFi data can be used to detect objects in space.

WiFi sensors can be configured to capture heart rate, track breathing and sleep patterns, and even detect people through walls.

Metaverse researchers have been experimenting with combining traditional tracking methods with WiFi sensing, with varying degrees of success.

Enter Artificial Intelligence

WiFi tracking requires the use of artificial intelligence models. Training these models, however, has proven difficult for researchers.

"The existing solutions using Wi-Fi and vision modalities rely on bulk labeled data that are very troublesome to collect. [...] We propose a new unsupervised multimodal HAR solution, MaskFi, which utilizes only unlabeled video and Wi-Fi activity data for model training," the researchers wrote in their paper.

To train the models needed to experiment with WiFi sensing for human activity recognition (HAR), scientists must build a library of training data. Data sets used to train artificial intelligence can contain thousands or even millions of data points, depending on the specific goals of the model.

Often, labeling this data set is the most time-consuming part of the experiment.

Enter MaskFi

The team from Nanyang Technological University built MaskFi to address these challenges. It uses artificial intelligence models built with a method called "unsupervised learning."

In the unsupervised learning paradigm, an artificial intelligence model is pre-trained on a much smaller data set and then iterates until it can predict outputs with a satisfactory level of accuracy. This lets researchers focus on the model itself rather than on the time-consuming effort of building a robust training data set.
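The core idea of learning from unlabeled data can be illustrated with a masked-reconstruction objective, where the model hides part of its own input and learns to fill it back in. This is a minimal, hypothetical numpy sketch, not the MaskFi architecture itself: the synthetic array stands in for unlabeled WiFi channel readings, and a tiny linear model plays the role of the network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for unlabeled WiFi readings: 200 samples of 16
# correlated features (real sensor channels share structure like this).
latent = rng.normal(size=(200, 4))
mixing = rng.normal(size=(4, 16)) / 2.0
X = latent @ mixing

# Self-supervised objective: hide ~25% of the entries and learn to
# reconstruct them from the visible ones. No human labels anywhere.
mask = rng.random(X.shape) < 0.25
X_masked = np.where(mask, 0.0, X)

W = rng.normal(scale=0.01, size=(16, 16))  # tiny linear "model"

def masked_mse(W):
    # Loss is measured only on the hidden entries.
    return np.sum(((X_masked @ W - X) * mask) ** 2) / mask.sum()

initial = masked_mse(W)
for _ in range(1000):  # plain gradient descent on the masked loss
    err = (X_masked @ W - X) * mask
    W -= 0.05 * 2.0 * (X_masked.T @ err) / mask.sum()

print(f"masked-reconstruction MSE: {initial:.3f} -> {masked_mse(W):.3f}")
```

The reconstruction error falls even though no example was ever labeled, which is the property that lets approaches like MaskFi sidestep the labeling bottleneck described above.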

According to the researchers, the MaskFi system reached roughly 97% accuracy across two related benchmarks. This suggests that, with further development, the system could serve as a catalyst for an entirely new metaverse modality: one that provides a 1:1 representation of the real world in real time.


