Apple Intelligence and the latest version of Siri, supported by large language model (LLM) training results from Google Gemini, will gain capabilities commonly found in other artificial intelligence tools. This was revealed in a report citing Apple's internal sources.
These sources confirm that Apple Foundation Models trained using Gemini technology will have basic LLM features, such as answering factual questions and proactively acting on user data, such as information in the Calendar app. This finding is not surprising, given that Apple has previously signaled the direction of a more personal Apple Intelligence and Siri.
A report from The Information said that the Gemini-trained Apple model will be able to run functions that have been rumored since September, including the ability to understand the context of the user's schedule. One example mentioned is proactive notifications, such as reminding users to leave early for the airport based on traffic conditions and events shown in the Calendar.
Reliable sources also say that the launch of the personal Siri and Apple Intelligence will most likely take place in the spring of 2026, after being delayed from 2025. Apple is believed to still be preparing a number of additional features to be announced at WWDC 2026 in June.
One of the new features said to be coming later is Siri's ability to remember previous conversations, so that it interacts more like a chatbot. This feature is expected to be introduced with iOS 27 and announced at WWDC 2026. In addition, Apple Intelligence is also rumored to be able to automatically alert users to leave early without having to manually set a "departure time" reminder in the Calendar.
Although Apple's head of marketing, Greg Joswiak, has emphasized that the company does not want to make Siri a chatbot, the ability to remember conversations and interact more naturally is considered a logical evolution of the digital assistant. Likewise, the Calendar feature is seen as a development of existing functionality.
Other features expected to arrive include a more conversational Siri that can provide knowledge-based answers instead of just links, as well as more empathetic responses in certain situations. For example, when users express loneliness or ask for help, Siri could respond with a more appropriate emotional tone.
Beyond that, Siri will retain classic functions such as controlling smart home devices, making calls, and setting reminders. However, these capabilities will be improved thanks to better contextual understanding, for example by handling incomplete contacts through inferring the user's family relationships.
Gemini is not a replacement for Apple's model
The report also clarifies the nature of the Apple-Google cooperation. The use of Gemini does not mean that Apple is replacing its AI model with Google's. Gemini is used only as a white-labeled model, running on Apple's Private Cloud Compute infrastructure, to train Apple Foundation Models.
With this approach, Apple can improve the quality of its LLMs without having to collect data at a large scale or invest billions of additional dollars in GPUs and servers. All user interactions are still handled by Apple technology, both on-device and on Apple's servers, with no Google involvement in everyday use.
A source familiar with the internal testing said the current prototype carries no Google or Gemini branding, in line with Apple's approach of keeping full control over the user experience.
The cooperation between Apple and Google is said to be long-term, allowing Apple to request specific adjustments to the Gemini model to meet the needs of Apple Foundation Models.
Impact on ChatGPT
The report assesses that ChatGPT has the potential to be the party most affected by the strengthening of Apple Intelligence. However, it states that the integration of ChatGPT in Apple Intelligence has so far not been widely used and has not driven significant traffic to OpenAI.
Apple is expected to rely less and less on ChatGPT as the capabilities of the Gemini-trained Apple Foundation Models increase. However, ChatGPT will not be completely abandoned and will continue to operate side by side, for example for image creation and editing through Siri or Image Playground.
Some functions, such as Visual Intelligence for describing images, could eventually move entirely to Apple's model, or users could be given a choice of models.
Siri and Apple Intelligence will become more personal in the near future, supported by the App Intents system. The combination of Gemini-trained Apple Foundation Models and the ability to call on ChatGPT when needed is said to be a powerful AI collaboration, while still prioritizing privacy, security, and energy efficiency.
Although the report cites Apple employees familiar with the future plans, its content is considered to reveal little new information. Still, with the launch of major features drawing closer, the next few months are expected to be an exciting period for Apple users.