JAKARTA - At an event held on Monday, June 10, Apple CEO Tim Cook announced that Apple is collaborating with OpenAI to bring a sophisticated artificial intelligence (AI) model to its voice assistant, Siri.
However, a technical document published by Apple after the event revealed that Alphabet's Google has also emerged as a winner in Apple's effort to catch up in the AI sector.
To build its foundational AI models, Apple engineers used the company's own software framework together with a range of hardware, including on-premise graphics processing units (GPUs) and chips available only on Google's cloud, called tensor processing units (TPUs).
Google has been developing TPUs for approximately 10 years and has publicly discussed two variants of its fifth-generation chip that can be used for AI training. The performance variant of the fifth generation offers performance competitive with Nvidia's H100 AI chip. At its annual developer conference, Google announced that the sixth generation will launch this year.
The processors are designed specifically to run AI applications and train models, and Google has built a cloud computing hardware and software platform around them.
Apple and Google did not immediately respond to requests for comment on this collaboration. Apple also did not discuss the extent of its dependence on Google's chips and software compared to hardware from Nvidia or other AI vendors.
Using Google's chips typically requires customers to purchase access through its cloud division, much as customers buy computing time from Amazon.com's AWS or Microsoft's Azure.