
Can You Trust Apple Intelligence With Your Data?

Last week at WWDC, Apple announced updates to all of its operating systems and unveiled Apple Intelligence. Because, of course, they had to brand the technology. Anyway, Apple Intelligence will bring numerous features to macOS, iOS, and iPadOS this fall, in partnership with ChatGPT. But the real question is: how safe is your data?

For years, Apple has bragged about not collecting its users’ data – and about how its competition, mainly Google, collects almost all of it. But with AI, Apple is going to need to collect some data and take it off the device. While Apple intends to do most of the AI work on-device, some of the heavy lifting will need to happen in the cloud.

Sticking with that privacy stance, Apple has done something interesting for cloud-powered AI features. It’s called “Private Cloud Compute,” and according to Apple’s Senior Vice President of Software Engineering, Craig Federighi, it will allow Apple Intelligence to process complex user requests with groundbreaking privacy.

Apple says that they have “extended iPhone’s industry-leading security to the cloud, with what we believe is the most advanced security architecture ever deployed for cloud AI at scale. Private Cloud Compute uses your data to fulfill your request and never stores it, making sure it’s never accessible to anyone, including Apple.”


With this, Apple isn’t just talking out of its butt; it’s inviting independent experts to verify the code behind Private Cloud Compute.

That sounds cool and all; however, what about ChatGPT? Apple is not only unveiling its own suite of AI features but is also including ChatGPT integration. So how protected will your privacy be when Apple Intelligence rolls out with iOS 18 later this year?


Will ChatGPT protect my data as much as Apple?

The parent company of ChatGPT, OpenAI, is not particularly well-known for protecting users’ data. In fact, it has been in the news a lot for scraping data from other websites – particularly news websites. So, can you feel safe with your data in OpenAI’s hands on your iPhone?

Keep in mind what ChatGPT is actually being used for within Apple Intelligence. It won’t necessarily or automatically have access to your personal details; ChatGPT is mostly being used to make Siri smarter. In a demo during WWDC earlier this month, Apple showed that before Siri hands a request to ChatGPT, it will ask you for permission.

Additionally, as part of its agreement with Apple, OpenAI had to make a significant concession: OpenAI will not store any prompts from Apple users, nor will it collect their IP addresses. Obviously, that only applies to using ChatGPT through Siri, not to the ChatGPT app that is already available on the iPhone.


How did Apple train its AI?

While it seemed like Apple was pretty late to the party with artificial intelligence, they’ve actually been using it in their software for many years already. Earlier this year, they open-sourced their own LLM (Large Language Model) called OpenELM. But the real question is, with all of these privacy concerns, what data did Apple use to train their AI?

According to a technical document that Apple released this week, its models are trained “on licensed data, including data selected to enhance specific features.” This means that Apple is not scraping the entire internet, as OpenAI has been accused of doing – and as Google, operator of the largest search engine, certainly does.


Apple says that “we never use our users’ private personal data or user interactions when training our foundation models.” The company continued by stating that “we apply filters to remove personally identifiable information like social security and credit card numbers that are publicly available on the internet.”
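The kind of filtering Apple describes can be sketched with a couple of regular expressions that strip Social Security and credit card numbers from text before it enters a training corpus. To be clear, this is purely illustrative – the patterns and the `redact()` helper are assumptions, not Apple’s actual pipeline:

```python
import re

# Illustrative PII patterns: a US Social Security number (123-45-6789)
# and a 13-16 digit card number, optionally separated by spaces/hyphens.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def redact(text: str) -> str:
    """Replace recognizable SSNs and card numbers with placeholders."""
    text = SSN_RE.sub("[SSN]", text)
    text = CARD_RE.sub("[CARD]", text)
    return text

print(redact("my SSN is 123-45-6789."))
print(redact("pay with 4111-1111-1111-1111 today"))
```

A real pipeline would be far more sophisticated (checksum validation, named-entity recognition, and so on), but the principle – scrub identifiers before training – is the same.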

On the flip side, Apple has admitted to scraping parts of the internet for data that went into training its proprietary models, though it has not said what web-based information it ingested. Apple has also confirmed that publishers can add code to their websites to prevent Apple’s web crawler from collecting their data.
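The “code” in question is just the standard robots.txt file. Apple’s crawler is Applebot, and per Apple’s documentation, publishers can block a separate “Applebot-Extended” user agent to opt out of AI training while still allowing normal Applebot indexing (for things like Siri and Spotlight search). A site’s robots.txt would look something like this:

```
# Opt this site's content out of Apple's AI training,
# while leaving regular Applebot crawling unaffected.
User-agent: Applebot-Extended
Disallow: /
```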

Can you trust Apple Intelligence with your data?

Now, for the million-dollar question: can you actually trust Apple Intelligence with your data? As we’ve gone over, ChatGPT won’t be able to store your prompts or your IP addresses – almost like Snapchat, but for AI. And for Apple’s own features, Private Cloud Compute will be securing your data. So, can you trust Apple Intelligence?

At this point, I’d say you can. Apple has already gone the extra mile to make your data more secure, especially when compared to other AI companies out there.

Of course, at this point, this is all based on what Apple has said, not what it has done. We’ll have to wait until later this year, when Apple Intelligence actually rolls out, for independent experts to analyze the code and see if it is as secure as Apple is promising. But so far, it’s looking pretty good.




John Smith

John Smith is a seasoned technology writer with a passion for unraveling the complexities of the digital world. With a background in computer science and a keen interest in emerging trends, John has become a sought-after voice in translating intricate technological concepts into accessible and engaging articles.

