Here’s How Apple Intelligence Protects Your Privacy

One of the biggest reasons to shy away from artificial intelligence systems and chatbots is that they can be data-hungry monsters. OpenAI’s ChatGPT, Google’s Gemini, and others hoover up a ton of information on nearly everything you ask them or submit to them. That’s ostensibly necessary to better train their AI engines, but Apple hopes to set itself apart by taking a very different approach.

By now, you’ve probably already heard about how Apple Intelligence will handle nearly everything on-device. That’s been part of Apple’s machine-learning playbook for years. When it announced iOS 10 in 2016, Apple executives proudly explained how the new image and facial recognition features in the Photos app would be handled entirely on your iPhone, using its leading-edge A10 Fusion chip.

These on-device features have only grown more powerful in the years since, expanding to include Visual Look Up, Personal Voice, and more — all handled entirely by the A-series chips inside your iPhone with no requirement to send anything to external cloud services for processing. Even Siri gave up its cloud roots to come on-device in iOS 15 — at least for newer iPhones that were powerful enough to support it.

That’s now culminated in Apple Intelligence, a powerful new large language model (LLM) capable of running mostly on-device, provided those devices have a sufficiently beefy Neural Engine and enough RAM to handle it. For the iPhone, that’s the iPhone 15 Pro and iPhone 15 Pro Max, as they’re the only models with 8GB of RAM. The more powerful Neural Engine in the A17 Pro chip undoubtedly helps as well.

On-device processing is the ultimate in privacy since none of your data ever leaves your device. However, if you read what we said above closely, you’ll see the words “nearly” and “mostly” qualifying those comments.

As powerful as Apple’s silicon is, today’s LLMs are too big and complex to fit into even the latest M4 iPad Pro. Apple knows this and realizes that it needs to go bigger if it wants to deliver class-leading AI features. So, it’s switching to a new play by embracing some server-side processing, but it’s doing this in an impressively secure way through a new architecture known as Private Cloud Compute, ensuring your data remains safe.

What is Private Cloud Compute?

Private Cloud Compute is Apple’s term for its in-house servers that will process Apple Intelligence requests that your iPhone, iPad, or Mac can’t handle on its own.

However, there’s more to this than a fancy buzzword. Apple is building powerful new servers using M2 Ultra chips, but it’s also designing them in such a way that it’s virtually impossible for the company to access any data that forms part of your Siri or other AI requests, even if it wanted to.

Apple outlines Private Cloud Compute (PCC) in great detail in a post on its Security Research blog, but the key takeaways are that it uses end-to-end encryption and a design that makes it impossible for data to be retained in any way.

In this case, “end-to-end encryption” doesn’t refer to the entire journey of your requests, but it protects them most of the way while ensuring that your data can only be accessed by the PCC server it’s specifically addressed to.

When Apple Intelligence on your device decides that it needs some help from PCC to process a request, it finds a valid and cryptographically certified server. Then it encrypts your request specifically for that server, and that server only, before sending it out. At that point, only the destination PCC server can decrypt the request. Nothing else it passes through in transit can access it, and it will be useless even if it inadvertently lands at another PCC node.
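To make that idea concrete, here’s a minimal Swift sketch of the general pattern: the device derives an encryption key that only the chosen node’s private key can reproduce, so the sealed request is useless to anything else it touches along the way. Apple hasn’t published its exact wire format, so the function name and the specific key-agreement scheme below are illustrative assumptions, not the real PCC protocol.

```swift
import CryptoKit
import Foundation

// Illustrative sketch only: seal a request so that a single PCC node,
// holding the matching private key, can decrypt it. The function name and
// the specific key-agreement scheme are assumptions, not Apple's wire format.
func sealRequest(
    _ plaintext: Data,
    forNodePublicKey nodeKey: Curve25519.KeyAgreement.PublicKey
) throws -> (ciphertext: Data, ephemeralPublicKey: Data) {
    // A fresh ephemeral key pair, so every request uses a new shared secret.
    let ephemeral = Curve25519.KeyAgreement.PrivateKey()
    let sharedSecret = try ephemeral.sharedSecretFromKeyAgreement(with: nodeKey)

    // Derive a symmetric key bound to this specific node's public key.
    let symmetricKey = sharedSecret.hkdfDerivedSymmetricKey(
        using: SHA256.self,
        salt: Data(),
        sharedInfo: nodeKey.rawRepresentation,
        outputByteCount: 32
    )

    // Encrypt the request; only the holder of the node's private key can
    // rebuild the same symmetric key and open this box.
    // (.combined is non-nil when the default 12-byte nonce is used.)
    let sealedBox = try AES.GCM.seal(plaintext, using: symmetricKey)
    return (sealedBox.combined!, ephemeral.publicKey.rawRepresentation)
}
```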

The PCC nodes themselves are hardened using Secure Boot and Code Signing technologies, and each one includes a Secure Enclave, similar to the one that protects Apple Pay cards and biometrics on your iPhone, to store all the keys used to decrypt requests. The Secure Enclave makes it impossible for those decryption keys to be duplicated or extracted, so once a PCC node is put into production, it’s the only device that can ever decrypt an AI request addressed to it.
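Developers can see the same non-extractable-key idea on Apple devices through CryptoKit’s Secure Enclave support. The sketch below is only an analogy for how a PCC node might hold its keys, under the assumption that a hardware-bound key is what prevents duplication; it is not Apple’s server-side code.

```swift
import CryptoKit

// Illustrative analogy: a private key generated inside the Secure Enclave
// never leaves the hardware. Code only ever holds an opaque handle, so the
// key can't be duplicated, exported, or backed up to another machine.
func makeHardwareBoundKey() throws -> SecureEnclave.P256.KeyAgreement.PrivateKey {
    // Requires hardware with a Secure Enclave; throws otherwise.
    try SecureEnclave.P256.KeyAgreement.PrivateKey()
}
```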

The Secure Enclave is also used to create randomized keys to encrypt the data volume, which are wiped and regenerated on every reboot. This means the data volume is cryptographically erased every time the PCC node’s Secure Enclave Processor restarts after processing a request. Memory address spaces are also recycled to ensure no data is retained in memory.
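The effect is what’s often called cryptographic erasure, and it’s easy to sketch: if encrypted data’s only key lives in volatile state and is thrown away, the data becomes unrecoverable without ever touching the storage it sits on. The snippet below is a generic illustration of that principle, not Apple’s actual volume-encryption code.

```swift
import CryptoKit
import Foundation

// Generic illustration of cryptographic erasure: discard the only copy of
// the key and the ciphertext becomes permanently unreadable.
func cryptographicErasureDemo(volumeContents: Data) throws {
    // A fresh random key, analogous to the per-boot data-volume key.
    var volumeKey: SymmetricKey? = SymmetricKey(size: .bits256)

    // Everything written to the volume is stored only in encrypted form.
    let encryptedVolume = try AES.GCM.seal(volumeContents, using: volumeKey!)

    // "Reboot": the key was never persisted, so dropping it erases the
    // volume in a single step.
    volumeKey = nil

    // Without the key, the remaining ciphertext is just random-looking bytes.
    _ = encryptedVolume.ciphertext
}
```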

The PCC nodes are also designed such that nobody, not even Apple’s most senior hardware engineers, can get privileged access to the servers while they’re running in production mode. There are no remote shells, interactive debugging mechanisms, or Developer Mode settings, and all code is signed so that no new code can be injected. The limited logging available for monitoring purposes is carefully designed and audited to ensure no personal data leaves the node.

Apple also uses a “hardened supply chain” for its PCC hardware, which means that all components are carefully inspected and vetted to ensure nothing has been tampered with. That includes high-resolution imaging of components and tamper switches that are activated after they’re sealed for shipment, followed by extensive revalidation once they reach Apple’s data centers.

Multiple Apple teams and an independent third-party observer monitor and verify this. Only after everything has been certified by all parties involved is the PCC node issued a digital certificate to go into operation. Apple devices will not send data to any PCC nodes that don’t have a valid digital certificate.

Finally, Apple is being remarkably transparent about the entire infrastructure: it publishes the measurements of all code running on its PCC nodes in a tamper-proof transparency log, makes the logs and software images available for inspection and validation by third-party privacy and security experts, and includes PCC in its Apple Security Bounty program to encourage security researchers and “white hat” hackers to search for flaws.
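The practical upshot for a client is simple to sketch: hash the software you’re about to trust and check that the measurement appears in the published log. The function below is a generic illustration of that check; the log format, how it is fetched, and the function name are assumptions, not Apple’s actual verification flow.

```swift
import CryptoKit
import Foundation

// Generic illustration: a client computes a measurement (hash) of a software
// image and refuses to proceed unless that measurement appears in the
// transparency log it has already fetched.
func measurementIsPublished(softwareImage: Data, publishedMeasurements: Set<String>) -> Bool {
    let digest = SHA256.hash(data: softwareImage)
    let measurement = digest.map { String(format: "%02x", $0) }.joined()
    return publishedMeasurements.contains(measurement)
}
```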

Where Does ChatGPT Fit In?

As we discussed last week, Apple Intelligence and ChatGPT are two entirely separate things. Apple is not using OpenAI to power anything done on-device or in its Private Cloud Compute infrastructure. Instead, ChatGPT is there to handle things that Apple Intelligence can’t.

Apple will never involve ChatGPT without your express consent. If you ask Siri for something that it can’t answer using Apple Intelligence, or if it thinks that ChatGPT could give you a better answer, it will tell you so and then ask if you want to use ChatGPT. If you decline, that’s it, and no contact is made with OpenAI’s servers.

If you allow ChatGPT to get involved, your request will go to ChatGPT, but that’s still done with as many privacy protections as Apple can manage. Firstly, Apple anonymizes the request before it goes to OpenAI, so there’s no personal information about you. ChatGPT won’t know who made the request or where the response is going. Secondly, OpenAI has agreed that none of this anonymized data will be used to train its AI models.

To be clear, all this only applies if you’re using ChatGPT without an account. Sign in to your ChatGPT account, whether you’re a free or pro user, and all bets are off. At that point, you’ve agreed to share your identity with ChatGPT by using your account.

However, it’s worth noting that while ChatGPT won’t know who you are unless you’re signed in, OpenAI could still glean information from the content of your requests. As with location data and Siri requests, the source of data can be anonymized, but it’s never truly anonymous if it contains decipherable personal information or patterns. If you live alone and anonymous location data shows a person at your home more often than anybody else, chances are that person is you. If you tell Siri your name and address during a request, an Apple employee reviewing that “anonymous” request will hear that the same way they’d hear a request to play a song.

That’s likely one of the big reasons Apple wants to ensure there’s a clear demarcation line between what Apple Intelligence processes and what ChatGPT handles. Whether on your device or in Apple’s Private Cloud Compute, no human will ever see or hear your requests to Apple Intelligence. The same can’t be said for OpenAI.
