JAKARTA – Sora, OpenAI's AI-powered video generator, has been mired in controversy since the beginning of this year. The model is accused of having been trained on YouTube content, but OpenAI remains reluctant to confirm it.
At the Bloomberg Technology Summit, OpenAI COO Brad Lightcap was asked about the controversy surrounding Sora's training data. Lightcap responded, but his answer was long-winded and did not clarify whether Sora was in fact trained on YouTube videos.
Lightcap seemed to have anticipated the question. He said that the issue of data use is very important and that the public should know where all of Sora's training data comes from, yet he did not disclose its origin.
"Basically, there needs to be an ID content system for AI that allows content creators to understand the direction of content creation, who trains it, and can participate in creating the content," Lightcap said, quoted from 9to5google.
The OpenAI executive also touched on social contracts for content use. Although his explanation did not point specifically to YouTube, Lightcap said the company is looking for ways to benefit from using content on the web.
"It's something we're exploring too, how you actually create social contracts that are completely different from the web, with creators, with publishers, when these models go global," explains Lightcap.
Closing his remarks, Lightcap acknowledged that OpenAI is paying attention to the issue. "So, yes, we're looking at this problem. It's very difficult. We don't have all the answers yet."
In essence, Lightcap was saying that OpenAI does not know for certain whether Sora was trained on more than a million hours of YouTube content. The response indirectly suggests that OpenAI cannot account for the sources of Sora's training data.