Emo Robot From Columbia University Can Predict And Imitate Human Facial Expressions Quickly

JAKARTA - If we want to live in a world where we interact with robots, they must be able to read and respond to our facial expressions very quickly. Now, scientists have stepped closer to creating such sophisticated machines.

The Emo robot, built by experts at Columbia University in New York, is the fastest humanoid robot in the world at imitating a person's expressions.

In fact, it can 'predict' someone's smile by looking for subtle signs in their facial muscles and imitating it, so the two can effectively smile at the same time.

Striking videos show the robot copying the researchers' facial expressions in real time with uncanny precision and incredible speed, thanks to the cameras in its eyes.

Emo is the work of researchers at Columbia University's Creative Machines Lab in New York, who presented their work in a new study in Science Robotics.

"We believe that robots should learn to anticipate and emulate human expressions as a first step before developing into more spontaneous and self-driven expressive communication," the researchers said.

Most of the robots developed around the world today - like the British bot Ameca - are being trained to imitate a person's face. But Emo has an additional advantage: it can 'predict' when a person will smile, so it can smile at almost the same time.

This creates a 'more authentic', more human-like interaction between the two. The researchers are heading towards a future where humans and robots can talk and even connect, like Bender and Fry in 'Futurama'.

"Imagine a world where interacting with robots feels as natural and comfortable as talking to a friend," said Hod Lipson, director of the Creative Machines Lab. The researchers believe that robots' nonverbal communication skills have been neglected.

Emo is coated with soft blue silicone skin, but beneath this layer are 26 small motors that drive human-like movements, acting much like the muscles of the human face.

There are also high-resolution cameras inside its pupils, which it needs in order to predict human facial expressions.

To train Emo, the team played videos of human facial expressions for the robot to observe frame by frame over several hours.

After training, Emo can predict people's facial expressions by observing small changes in their faces as they begin to form the intention to smile.

Emo can not only imitate someone's smile but also predict it, meaning the two can smile at almost the same time. According to Hu, a researcher on the team, besides smiling, Emo can also predict other facial expressions such as sadness, anger, and surprise.

"The predicted expression is used not only for joint expression but can also serve other purposes in human-robot interaction," he said.

Emo cannot yet make the full range of human expressions because it has only 26 facial 'points' (motors), but the team will 'continue to add' more.

The researchers are now working to integrate verbal communication, using large language models such as ChatGPT, into Emo.

In this way, Emo should be able to answer questions and hold conversations like many other humanoids currently being built, such as Ameca and Ai-Da.