OpenAI Backs The Use Of AI In Content Moderation, Citing Efficiency
JAKARTA - OpenAI, the creator of ChatGPT, has made a strong case for using artificial intelligence (AI) in content moderation, saying it can unlock efficiency at social media companies by speeding up the handling of some of their most grueling tasks.
Despite much talk about generative AI, companies such as Microsoft and Alphabet, Google's parent company, have yet to monetize the technology successfully, even after spending billions of dollars in the hope that it will have a major impact on a wide range of industries.
OpenAI, which is backed by Microsoft, says its latest AI model, GPT-4, can cut the content moderation process from months to hours and ensure more consistent labeling.
Content moderation is a grueling task for social media companies such as Meta, Facebook's parent company, which relies on thousands of moderators around the world to keep harmful content such as child sexual abuse material and images of extreme violence from reaching users.
"The process (of content moderation) is basically slow and can cause mental pressure on human moderators," OpenAI said, quoted by Reuters. "With this system, the process of developing and adjusting content policies has been cut from months to hours."
Separately, OpenAI CEO Sam Altman said on Tuesday, August 15, that the company does not train its AI models on user-generated data.