Jakarta - Starlink's latest privacy policy has sparked a global debate over the line between artificial intelligence (AI) innovation and the protection of personal data. SpaceX's satellite internet service now permits the use of user data for AI training by default, a move critics consider risky for the privacy of millions of people around the world.
Starlink quietly updated its Global Privacy Policy on January 15, 2026. The policy states that user data can be used to train machine learning and AI models, and shared with service providers and "third-party collaborators", unless users actively opt out.
The change was first reported by Reuters on January 31, 2026, and marks a significant shift from previous policies, which did not explicitly mention the use of data for AI training. With more than 9 million users worldwide, Starlink's new policy raises serious questions about consent, transparency, and the potential for data misuse.
"This definitely makes me frown and would cause concern if I were a Starlink user," said Anupam Chander, a technology law professor at Georgetown University, as quoted by VOI from Reuters.
"Often there are perfectly legitimate uses of your data, but there are no clear limits on what the data will actually be used for," he added.
The timing of the policy change is significant for SpaceX. The world's most valuable space company is preparing for a major initial public offering (IPO) in the second half of 2026, which is expected to push its valuation above $1 trillion.
At the same time, SpaceX is reportedly negotiating a merger with xAI, Elon Musk's artificial intelligence company, which was recently valued at around $230 billion in its latest funding round. If the merger goes ahead, xAI could gain access to real-world data at enormous scale, including Starlink users' communications data.
Starlink's privacy document shows that the scope of the data collected is very broad, ranging from user location, credit card information, contact details, and IP addresses to a "communications data" category. That category covers audio and visual information, files shared through the service, and "inferences made from other personal data collected".
However, the new policy does not clearly specify which types of data will be used to train AI, leaving privacy watchdogs concerned about gray areas in its implementation.
Starlink's policy change reflects a broader global tension between AI development ambitions and the obligation to protect individual privacy rights. Amid the race to build increasingly sophisticated algorithms, the need for large amounts of data often clashes with the rights of users, who frequently do not fully understand how their data is used.
While Starlink has chosen to use real user data, a different approach has emerged from Europe.
On January 31, 2026, market research company Ipsos introduced synthetic data boosting, a technique for creating synthetic data designed to train AI without exposing the original personal data. The approach is claimed to generate realistic yet privacy-safe data, in line with strict regulations such as the UK and EU GDPR.
Ipsos said the technology is built on tabular diffusion models and a rigorous SURE validation framework. The goal is to enrich a small data sample without increasing the risk of re-identifying individuals.
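To convey the core idea, here is a rough sketch of how a synthetic-data "booster" works: fit a statistical model to a small real sample, then sample new rows from the model instead of sharing the original records. This is not Ipsos's actual implementation (theirs uses tabular diffusion models and the SURE framework); a simple multivariate Gaussian and hypothetical customer data stand in for illustration.

```python
# Minimal sketch of synthetic tabular data generation. NOT Ipsos's method;
# a multivariate Gaussian stands in for a tabular diffusion model to show
# the principle: learn the statistical shape of a small real sample, then
# sample fresh rows from the model rather than exposing original records.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "real" sample: age and monthly spend for 50 customers.
real = np.column_stack([
    rng.normal(40, 12, 50),    # age (years)
    rng.normal(300, 80, 50),   # monthly spend
])

def synthesize(sample: np.ndarray, n: int,
               rng: np.random.Generator) -> np.ndarray:
    """Draw n synthetic rows that preserve the sample's means and
    covariance structure without copying any original record."""
    mean = sample.mean(axis=0)
    cov = np.cov(sample, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n)

# "Boost" the 50-row sample into 500 synthetic rows.
synthetic = synthesize(real, 500, rng)
print(synthetic.shape)  # (500, 2)
```

A real system would model mixed categorical/numeric columns and non-Gaussian shapes, which is precisely what diffusion-based generators are designed to handle.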
According to Ipsos, synthetic data boosting allows organizations to accelerate research, reduce fieldwork costs, and remain compliant with data protection rules. Initial demand comes from the consumer, financial, and health sectors, areas with very high data sensitivity.
The approach also comes with an audit trail, bias checks, and data-origin mapping, making it easier to satisfy regulatory reviews such as those by the Information Commissioner's Office (ICO) in the UK. The technology is being used for concept testing, pricing studies, media planning, credit and churn risk modeling, and health service segmentation.
However, synthetic data is not without risk. Experts warn that excessive reliance on artificial data can mask real-world changes, introduce hidden biases, or skew research results if models are not regularly validated.
Organizations are advised to keep refreshing models with fresh samples of real data, monitor for bias, and maintain strict documentation of data rights, user consent, and validation metrics.
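Two of those safeguards, checking that no synthetic record is a near-copy of a real individual and that synthetic data has not drifted from reality, can be sketched as simple numeric checks. The data and thresholds below are hypothetical and do not reflect any regulator's or Ipsos's actual criteria.

```python
# Minimal sketch of two validation checks for synthetic data; the data and
# thresholds are hypothetical, not any regulator's or vendor's criteria.
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal([40, 300], [12, 80], size=(50, 2))        # real sample
synthetic = rng.normal([41, 305], [12, 80], size=(500, 2))  # synthetic batch

def min_copy_distance(real: np.ndarray, synthetic: np.ndarray) -> float:
    """Smallest Euclidean distance from any synthetic row to any real row.
    A near-zero value suggests a synthetic record effectively copies a
    real individual, i.e. a re-identification risk."""
    diffs = synthetic[:, None, :] - real[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=2)).min())

def mean_drift(real: np.ndarray, synthetic: np.ndarray) -> np.ndarray:
    """Per-column gap between real and synthetic means, in units of the
    real data's standard deviation; large values flag synthetic data
    that has drifted away from the real sample."""
    return np.abs(real.mean(axis=0) - synthetic.mean(axis=0)) / real.std(axis=0)

print(min_copy_distance(real, synthetic))  # should be well above zero
print(mean_drift(real, synthetic))         # drift per column
```

In practice such metrics would feed into the audit trail alongside consent records and validation reports.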
The comparison between Starlink's policy and Ipsos' approach highlights two different paths through the challenges of AI and privacy. On the one hand, Starlink opts for large-scale exploitation of user data, triggering concerns about surveillance and consent. On the other, Ipsos pushes privacy-first innovation by minimizing the use of personal data.
Ahead of SpaceX's IPO and potential merger with xAI, the debate over the use of personal data for AI is sure to intensify. Investors, regulators, and consumers are now waiting for the answer to one big question: how far can technological innovation go without sacrificing the basic right to privacy?