
JAKARTA - Joe Rogan, the well-known podcaster, recently became the victim of a deepfake, a technology that manipulates video to make a fabricated likeness appear real. In the deepfake, Rogan appears to promote Alpha Grind, a male libido booster.

The deepfake video shows Rogan discussing the product with his guest, Professor Andrew D. Huberman, on The Joe Rogan Experience podcast. In the clip, Rogan says the product is already widespread on TikTok and is available for purchase on Amazon.

The 28-second deepfake looks very convincing, but some parts suggest it was made with artificial intelligence (AI): several segments appear unnatural, such as abrupt jumps in the conversation. Professor Huberman also responded to the video on Twitter, saying the conversation was fabricated and that the two had never actually discussed the product.

After the deepfake went viral, many social media users expressed concern about fraud and the spread of misinformation through deepfakes. Many suggested the need for strict oversight of deepfake technology in advertising.

Some users even questioned the legality of the deepfake, arguing that using someone's face without permission amounts to a copyright violation.

Convincing as it is, the Rogan clip is not the only deepfake of its kind. In 2022, a deepfake of Mark Zuckerberg was released by the advocacy group Demand Progress Action. That video shows Zuckerberg thanking Democratic leaders for their "service and inaction" on antitrust legislation. The liberal group planned to use the video in television ads in New York and Washington, DC.

However, the use of deepfakes also poses security threats. Tim Stevens, director of the Cyber Security Research Group at King's College London, said that AI deepfakes, which can create hyper-realistic images and videos of a person, have the potential to undermine democratic institutions and national security.

Stevens said that countries such as Russia could leverage the availability of deepfake technology to target populations, pursue their foreign policy goals, and undermine the national security of other countries.

"The potential for AI and deepfakes to affect national security, especially in reducing the level of trust in democratic institutions and organizations, as well as the media, will greatly affect national security policies."
