    Apple Faces a Tough Task in Keeping AI Data Secure and Private

    Just about every company in tech has jumped on the artificial intelligence bandwagon by now, and Apple is no exception. What makes it a little different is how it plans to handle the data security and privacy issues that come with AI.

    At its recent annual WWDC event, the company unveiled Apple Intelligence, its own flavor of AI, which Apple pledges will set a new standard for AI privacy and security. That’s despite its plans to “seamlessly integrate” OpenAI’s ChatGPT into its products and software. 

    But some security experts say that while they don’t doubt Apple’s intentions, which they note remain uniquely altruistic for the industry, the company has its work cut out for it, and, potentially, a new target on its back.

    Apple’s AI security and privacy promises, as well as its intention to be transparent about how the company plans to use AI technology, are a “step in the right direction,” said Ran Senderovitz, chief operating officer for Wing Security, which specializes in helping companies secure the third-party software in their systems.

    Those promises track with Apple’s longtime focus on minimizing data collection and not using the data it does collect for profit, Senderovitz said. That makes the company stand out in a “jungle” of an industry that not only remains unregulated, but also has so far failed to put in place its own set of codes and standards.

    In contrast to Apple, companies like Meta, Google and others have business models well predating the popularization of AI that are built on the collection, sharing and selling of user data to brokers, advertisers and others.

    But the introduction of AI tools like large language models and machine learning, which have the potential to drive huge progress and innovation, comes with significant privacy and confidentiality issues, Senderovitz said.    

    Putting data into an LLM like ChatGPT “is like telling a friend a secret that you hope they forget, but they don’t,” Senderovitz said. It’s tough to know, or control, where that data goes after that. And even if the entered data is immediately destroyed, what the LLM learned from it lives on. 

    And OpenAI’s widely popular LLM is going to be a big part of Apple Intelligence. Starting later this year, it’ll show up in features like Siri and writing tools, but Apple promises that its users will have control over when ChatGPT is used and will be asked for permission before any of their information is shared. 

    Traditionally, Apple has kept consumer data secure and private by limiting what it collects to the minimum needed for the software or device in question to operate. In addition, the company built its phones, computers and other devices with enough horsepower to keep the processing of sensitive data on the device, instead of sending it to a cloud server somewhere.

    After all, data that’s never collected can’t be lost, stolen or sold. But AI, by its very design, changes that. LLMs need data in order to train and become more powerful, and some AI operations just can’t be done on standard phones and laptops.

    Craig Federighi, Apple’s senior vice president of software engineering, said during the company’s WWDC keynote event that an understanding of personal context like a user’s daily routine and relationships is essential for AI to be truly helpful, but that has to be done the right way.

    “You should not have to hand over all the details of your life to be warehoused and analyzed in someone’s AI cloud,” Federighi said.

    To ensure that, Apple says, it will still keep as much AI processing as possible on devices. And what can’t be done on a phone or computer will be sent to its Private Cloud Compute system, which allows for greater processing power, as well as access to larger AI models.

    The data sent is never stored or made accessible to Apple, Federighi said, adding that just like with Apple devices, independent experts can inspect the code that runs on the servers to ensure that Apple is making good on that promise.

    Keeping the private cloud private

    Josiah Hagen, a top security researcher for Trend Micro, with more than 20 years of AI system experience, doesn’t doubt that Apple will try its best to make good on those promises. And he said the cloud does offer some security advantages — specifically, that its larger size makes it easier to spot anomalies across it and stop potential security threats before they become problems.

    What will be key, he said, is whether Apple can build in controls that keep attackers from using the AI, and the apps it’s connected to, to do more than it was intended to do.

    “I think we’ll start to see the hijacking of AI model usage for nefarious purposes,” Hagen said, adding that though cybercriminals could use ChatGPT to dig through piles of stolen data, an army of free, AI-powered bots could do that work faster and cheaper. 

    Hagen also worries about the fact that the tech giant doesn’t use outside companies to help secure its cloud. It can be hard to see the chinks in your security armor when you’ve built it yourself, and an outside perspective can be crucial to finding them before online attackers do, he said. 

    “Apple is not a security company,” Hagen said. “Securing your own ecosystem is hard. You’re going to have blinders whoever you are.” 

    On top of that, after many years of focusing on traditional PC and Windows systems, cybercriminals are now increasingly attacking iOS systems with malware, and there’s no guarantee that Apple’s closed system will keep them out. It’s that closed system model that worries Hagen more than Apple’s connection to ChatGPT.

    Freelance security professionals who hunt for flaws in computer systems, then submit them to companies in exchange for payouts known as bug bounties, will become an even more important part of Apple’s defense, he said.

    In regard to privacy, Hagen said it’s possible that legal or cost concerns might eventually prompt Apple to start tweaking privacy practices, sending the company down a slippery slope that ultimately ends with it changing its terms of service to allow for consumer data to be used to train the next version of the AI.

    That’s also a concern for Senderovitz, who said he and his researchers are keeping a close eye on any changes to Apple’s terms and conditions, especially regarding its data-sharing practices with third-party collaborators like OpenAI. Though Apple has been big on promises related to this, he said it’s so far been short on specifics.

    “We’re going to need to see the fine print,” he said. 

    Editors’ note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you’re reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI policy.


