Apple is making a major move to enhance its artificial intelligence (AI) tech by examining user data locally on devices.
According to the tech giant, the approach will help refine AI models while preserving its longstanding devotion to user privacy.
The new approach, which the tech firm calls on-device data, is designed not to capture or store personal data on its servers. Instead, Apple's AI systems learn by comparing synthetic, computer-generated data against actual user content, such as emails in Apple's Mail app.
Apple typically trains its AI models on synthetic data: machine-generated examples of user input, such as emails, messages, or queries, written to closely mimic how actual users write and interact with their devices. But synthetic data doesn't always capture the full complexity of real conversations.
The company wrote, “Our goal is to generate synthetic sentences or emails that are similar enough in topic or style to the real thing so that they can help improve our summarization models — but without Apple walking away with emails from the device.”
Its new system accomplishes this by cross-checking the synthetic data against actual snippets on the user's device. It then identifies which items in its artificial data sets most closely resemble real-world content. This feedback loop improves the AI's ability to produce better summaries, recaps, and suggestions.
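The matching step can be illustrated with a minimal sketch. This is not Apple's implementation: it uses a toy bag-of-words similarity in place of whatever learned representation Apple actually employs, and every name here (`embed`, `pick_closest_synthetic`) is hypothetical. The key property it demonstrates is that only the winning *synthetic* candidate leaves the device, never the local content itself.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a learned model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def pick_closest_synthetic(synthetic: list[str], device_snippets: list[str]) -> str:
    # On-device step: score each synthetic candidate against local content
    # and report only which candidate matched best -- not the content.
    best, best_score = synthetic[0], -1.0
    for cand in synthetic:
        score = max(cosine(embed(cand), embed(s)) for s in device_snippets)
        if score > best_score:
            best, best_score = cand, score
    return best

synthetic_pool = [
    "Lunch tomorrow at noon?",
    "Quarterly report attached for review.",
]
local_emails = ["Can we grab lunch tomorrow around 12?"]
print(pick_closest_synthetic(synthetic_pool, local_emails))
```

Aggregated across many devices, these "which candidate won" signals tell Apple which styles of synthetic data are realistic enough to train on.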
The shift is intended to bolster the firm's Apple Intelligence features.
Apple rolls out new AI training tools in iOS 18.5 Beta, but only for opted-in users
Apple’s adjustments will come in the next beta builds of iOS 18.5, iPadOS 18.5, and macOS 15.5. As of this week, developers have begun testing the new beta builds.
However, not all users will automatically enroll in this new data mechanism. The company clarified that only users who have opted into its Device Analytics and Product Improvement settings will participate. These options are available in the Settings app under the Privacy & Security section.
This is not the first time Apple has used customer data to improve its services, but it is arguably the most ambitious attempt. Previously, the company relied on differential privacy, a technique that lets it identify broad trends in behavior without tying any data point to a particular user.
The same system underpins Apple's Genmoji feature, which allows users to create their own emoji. If many users request the same kind of Genmoji, say, "a cat on a skateboard," Apple can optimize its AI to better handle similar requests. A rare or unique request, by contrast, stays hidden from Apple entirely.
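A rough sketch of how this kind of threshold-based, privacy-preserving aggregation can work is shown below, using classic randomized response rather than Apple's actual (unpublished) mechanism; all names and parameters here are illustrative assumptions. Each device lies about whether it saw a prompt with some probability, so no individual report is meaningful, yet the de-biased aggregate still reveals genuinely popular prompts while rare ones stay buried in the noise.

```python
import random

def randomized_response(has_prompt: bool, p_truth: float = 0.75) -> bool:
    # Local differential privacy: each device answers truthfully with
    # probability p_truth; otherwise it flips a fair coin.
    if random.random() < p_truth:
        return has_prompt
    return random.random() < 0.5

def estimate_count(reports: list[bool], p_truth: float = 0.75) -> float:
    # De-bias the aggregate: E[report] = p_truth * q + (1 - p_truth) * 0.5,
    # where q is the true fraction of devices that saw the prompt.
    n = len(reports)
    observed = sum(reports) / n
    q = (observed - (1 - p_truth) * 0.5) / p_truth
    return q * n

random.seed(0)
# Simulate 10,000 devices; 30% actually typed "a cat on a skateboard".
truth = [i < 3000 for i in range(10_000)]
reports = [randomized_response(t) for t in truth]
estimate = estimate_count(reports)

# The server only acts on prompts whose noisy count clears a threshold;
# a prompt typed by a handful of users never rises above the noise floor.
POPULARITY_THRESHOLD = 1_000
print(estimate > POPULARITY_THRESHOLD)
```

The design choice worth noting is that privacy comes from the per-device noise, while utility comes from scale: with enough reports, the noise averages out for popular prompts but swamps rare ones.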
The new update will leverage similar techniques to improve other Apple Intelligence features, such as Image Playground, Image Wand, Visual Intelligence, and Memories Creation.
By combining synthetic data with signals from real user input, and doing so without compromising privacy, the tech firm aims to deliver more intelligent, useful, AI-powered experiences that won't undermine user trust.
Apple scrambles to catch up in the AI arms race
Apple’s strategy shift comes amid pressure to keep up with AI giants like OpenAI, Google, and Microsoft, whose platforms have advanced rapidly on the back of more aggressive data policies and faster feature rollouts.
Whereas companies such as OpenAI draw on large pools of real-world data scraped from the internet, including websites, books, and even Reddit threads, Apple has played it safe by prioritizing privacy over performance. But the resulting gap is becoming increasingly obvious.
The company has faced criticism over its current AI tools, which have sometimes produced awkward summaries, incomplete responses, or tone-deaf writing suggestions. Insiders attribute these shortcomings to Apple’s stringent restrictions on using real data to train its models.
Bloomberg has reported on internal friction within the company’s AI team. Leadership struggles and a lackluster Siri experience led to a shake-up in March. John Giannandrea, who had a hand in Siri and AI development, was demoted. Siri, meanwhile, was put under the control of Mike Rockwell (who oversees the development of the Vision Pro headset) and software chief Craig Federighi.
Now, Apple is banking on a more balanced strategy: preserving the privacy for which it is famous while embracing smarter artificial intelligence that can hold its own in a rapidly shifting technology landscape.
Big upgrades for Siri will likely be gradual, perhaps not arriving until 2026, but the firm intends to show off a major Apple Intelligence demo at its Worldwide Developers Conference (WWDC) in June 2025.
Ultimately, the company expressed confidence that its privacy-first AI model would succeed. In a blog post, the tech company stated that leveraging years of experience with techniques like differential privacy and newer methods such as synthetic data generation could enhance its Intelligence features while preserving user privacy.