If CIA can hack iPhones, is user data safe after Apple-ChatGPT integration?

The big question is whether the data of Apple customers using ChatGPT will be safe. Image: REUTERS

In 2019, around three years before ChatGPT stormed the tech world, a Silicon Valley firm called Rhombus Power used its generative AI platform Guardian to collect and analyse non-classified data on illicit Chinese fentanyl trafficking for the United States (US) Defence Intelligence Agency (DIA).

The operation, called Sable Spear and headed by then-DIA director Brian Drake, identified twice as many companies engaged in illegal or suspicious fentanyl business as human analysts could have found, given the volume of data involved.

Almost two years later, in 2021, Rhombus Power used generative AI to predict the Russian invasion of Ukraine with 80 per cent accuracy, four months before the war began.

Similarly, Conflict Forecast, which also uses AI to predict violence anywhere in the world, apparently signalled the June 2023 Wagner ‘coup’ and the 2014 annexation of Crimea.

Realising the importance of AI, which can analyse vast volumes of data and produce results and analyses in seconds, the US Intelligence Community (IC) increasingly relies on machine intelligence, especially in national security matters. AI can also be used to predict terrorist attacks by analysing historical data and suspects’ social media activity.

Much before AI’s use in detailed profiling and tracking of a person’s movements, activities and behaviour, the NSA launched a massive warrantless domestic surveillance programme called Stellar Wind during the George W Bush administration after 9/11.

Section 702, added to the Foreign Intelligence Surveillance Act (FISA) of 1978 by amendments passed in 2008, allows the NSA to collect phone calls, emails, SMS and other communications of any non-American located outside the US without a warrant. Under Section 702, the IC can target only non-US persons located abroad who are expected to possess, receive or communicate foreign intelligence information.

The flipside of Section 702, however, is that an American communicating with a non-American outside the US can also be monitored without a warrant. In April, President Joe Biden signed a Bill extending Section 702 for two years despite documented instances of the FBI and the NSA misusing the law to search Americans’ data.

Apple-OpenAI deal and data privacy

Since Section 702 also allows warrantless surveillance of Americans, their emails, phone calls and messages are at the risk of being exposed to US intel agencies. This aspect of surveillance raises the question of data privacy.

The Apple-OpenAI partnership, under which the iPhone maker has integrated ChatGPT into iOS 18, iPadOS 18 and macOS Sequoia as part of a new personalised AI system called Apple Intelligence, has triggered questions regarding data safety.

At the Worldwide Developers Conference 2024, Apple announced that ChatGPT would be integrated into its devices, including Siri. The user is asked for permission before a question, document or picture is sent to ChatGPT, and Siri then presents the answer directly, without the need to hop between tools.

“Our unique approach combines generative AI with a user’s personal context to deliver truly helpful intelligence. And it can access that information in a completely private and secure way to help users do the things that matter most to them,” said Apple CEO Tim Cook.

Despite the buzz about the deal and Cook’s assurance, markets reacted cautiously with Apple stock closing down 1.91 per cent at $193.

Could the fear of data breach be the reason for the stock fall?

Apple claimed that the user’s information wouldn’t be sent over the Internet while ChatGPT generates images and predicts text. When a device connects to an Apple AI server, the connection is encrypted and the server deletes any user data after the task is finished; not even company employees can see the data.

Apple claims its new Private Cloud Compute will first try to complete an AI task on the device before sending it to cloud services. However, even Apple’s most advanced chips cannot handle the full range of AI tasks, and Siri might pass them, along with the user’s data, to the company’s servers.
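Apple has not published an API for this routing decision, but the on-device-first policy it describes can be sketched in a few lines. Everything below — the names, the capability threshold, the cost estimates — is hypothetical and purely illustrative of the point that tasks exceeding local hardware must carry user data off the phone:

```python
# Hypothetical sketch of an on-device-first AI routing policy,
# loosely modelled on Apple's public description of Private Cloud
# Compute. Names and thresholds are invented, not Apple's actual API.

from dataclasses import dataclass


@dataclass
class AIRequest:
    prompt: str
    estimated_flops: float  # rough compute-cost estimate for the task


# Illustrative ceiling on what the phone's neural hardware can handle.
ON_DEVICE_FLOPS_BUDGET = 1e12


def route_request(req: AIRequest) -> str:
    """Return where the task would run under an on-device-first policy."""
    if req.estimated_flops <= ON_DEVICE_FLOPS_BUDGET:
        return "on-device"  # data never leaves the phone
    # Task exceeds local hardware: the prompt (and any attached user
    # data) must travel to a server -- the privacy exposure at issue.
    return "private-cloud"


print(route_request(AIRequest("summarise note", 1e9)))   # on-device
print(route_request(AIRequest("generate image", 1e15)))  # private-cloud
```

The sketch makes the trade-off concrete: the routing decision is driven by hardware capability, not by user consent, which is why critics focus on whether users are told when the second branch is taken.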

According to Johns Hopkins University’s computer science professor Matthew Green, while modern phone “neural” hardware is improving, “it’s not improving fast enough to take advantage of all the crazy features Silicon Valley wants from modern AI, including generative AI and its ilk. This fundamentally requires servers”.

“But if you send your tasks out to servers in “the cloud” (god using quotes makes me feel 80), this means sending incredibly private data off your phone and out over the Internet. That exposes you to spying, hacking, and the data hungry business model of Silicon Valley,” Green tweeted.

Green raised the most crucial point in data privacy: “As best I can tell, Apple does not have explicit plans to announce when your data is going off-device to Private Compute. You won’t opt into this. You won’t necessarily even be told it’s happening. It will just happen. Magically.”

OpenAI, heavily backed by Microsoft, said, “Privacy protections are built in for users who access ChatGPT — their IP addresses are obscured, and OpenAI won’t store requests. ChatGPT’s data-use policies apply for users who choose to connect their account.”

However, OpenAI CEO Sam Altman had described his dream AI tool to MIT Technology Review as something “that knows absolutely everything about my whole life, every email, every conversation I’ve ever had”.

ChatGPT does save the user’s data, such as email, device details, IP address, location and conversation history.

According to OpenAI’s privacy policy, it collects personal information the user provides and information it receives automatically.

OpenAI collects personal information related to the user’s account, such as name, contact details, account credentials, payment card information and transaction history; file uploads and feedback; the name, contact information and contents of any messages sent in a communication with OpenAI; and social media information if the user interacts with OpenAI’s social media platforms.

OpenAI automatically receives the user’s log data (IP address, browser type and settings, among other things); time zone, country, the dates and times of access; user agent and version, type of computer or mobile device; computer connection; the type of content viewed and name of the device, operating system and browser.
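To make the scope of that automatic collection concrete, here is a hypothetical record with one illustrative field per category the policy lists. The field names and values are invented for illustration, not OpenAI’s actual schema:

```python
# Illustrative shape of the automatically received log data that
# OpenAI's privacy policy describes. All field names and values are
# hypothetical examples, not OpenAI's real logging format.

log_record = {
    "ip_address": "203.0.113.7",  # example address (RFC 5737 doc range)
    "browser": {"type": "Safari", "settings": {"language": "en-GB"}},
    "time_zone": "Europe/London",
    "country": "GB",
    "accessed_at": "2024-06-12T09:30:00Z",  # date and time of access
    "user_agent": "Mozilla/5.0 (iPhone; ...)",  # agent and version
    "device": {"type": "mobile", "name": "iPhone", "os": "iOS"},
    "content_viewed": "chat",
}

# Even without any account details, these fields in combination can
# narrow a user down to a very small group -- which is why "mere"
# log data matters for privacy.
print(sorted(log_record))
```

The point of the sketch is that none of these fields requires the user to type anything: they travel with every request automatically.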

OpenAI can use the user’s personal information to provide, administer, maintain and/or analyse its services; to improve those services and conduct research; to communicate with the user by sending information about its services and events; and to develop new programmes and services.

Apple itself barred employees from using ChatGPT and other AI-powered services in May 2023, The Wall Street Journal reported, over concerns that the data-handling practices of such AI platforms, owned or financially backed by rival Microsoft, could compromise its proprietary code or other sensitive data.

Intel agencies can hack iPhone data

Besides targeting terrorists and criminals, US and British intelligence agencies have focused on big companies, like Apple, that manufacture popular cell phones. In 2010, the NSA and Britain’s GCHQ formed the Mobile Handset Exploitation Team to implant malware on iPhones and secretly access private communications on cell phones, according to NSA whistle-blower Edward Snowden’s documents.

Snowden’s documents provided to The Intercept revealed in March 2015 that the CIA conducted a massive effort to breach iPhones and iPads.

The CIA has been sponsoring the Trusted Computing Base Jamboree since 2006, a year before the first iPhone was launched. In the 2015 Jamboree, held at a Lockheed Martin facility in northern Virginia, Sandia National Laboratories researchers showed how they targeted essential security keys used to encrypt data stored on Apple’s devices.

Sandia researchers also claimed to have created a modified version of Apple’s Xcode, the proprietary software development tool distributed to hundreds of thousands of developers to develop apps sold through Apple’s App Store.

The CIA can also remotely control iPhones and Android handsets, more than 8,000 documents released by WikiLeaks in March 2017 showed. One document cited by The Intercept showed how the CIA “develops software exploits and implants for high priority target cell phones for intelligence collection”. Moreover, the FBI’s hacking division, the Remote Operations Unit, is also tasked with discovering iPhone vulnerabilities.

According to the documents, the CIA can “bypass” the encryption by hacking the phones and accessing data stored on any app, including secure messaging apps like WhatsApp and Telegram.

Apple and Google declined to comment to The Intercept.

OpenAI’s privacy policy also raises the data safety angle.

The company may provide the user’s personal information to third parties in “certain circumstances” and “without further notice to you unless required by the law”.

Personal information may also be provided to “vendors and service providers, including providers of hosting services, customer service vendors, cloud services, email communication software, web analytics services and other information technology providers” to help OpenAI “in meeting business operations needs and to perform certain services and functions”.

The user’s personal data could be disclosed if OpenAI is involved in “strategic transactions, reorganisation, bankruptcy, receivership or transition of service to another provider”.

The most important part of OpenAI’s privacy policy is that the personal information, including the user’s interaction with government authorities, industry peers or other third parties, could be shared “if required to do so by law or in the good faith belief that such action is necessary to comply with a legal obligation”.

Besides, OpenAI itself can be breached or hacked, which would expose the user’s personal information to hackers.

In response to the question of whether OpenAI can be hacked, ChatGPT responds: “Like any organisation with digital infrastructure, OpenAI is potentially susceptible to hacking attempts. They likely employ a variety of security measures to protect their systems and data …. However, no system is entirely immune to hacking attempts…”

Does Apple provide data to US intel agencies?

In June 2023, Russia’s Federal Security Service (FSB) alleged that the US had hacked thousands of iPhones of domestic Russian subscribers and foreign diplomats based in Russia and the former Soviet Union. American hackers had compromised diplomats from Israel, Syria, China and NATO members, it said.

“The FSB has uncovered an intelligence action of the American special services using Apple mobile devices,” the FSB said in a statement.

Eugene Kaspersky, CEO of Moscow-based Kaspersky Lab, alleged that dozens of his employees’ phones were hacked in “an extremely complex, professionally targeted cyberattack”. In a blog post, Kaspersky said the attack dated back to 2019: “As of the time of writing in June 2023, the attack is ongoing.”

The FSB alleged “close cooperation” between Apple and the NSA without providing evidence. “The US intelligence services have been using IT corporations for decades in order to collect large-scale data of Internet users without their knowledge,” Russia’s foreign ministry said.

While Apple denied working “with any government to insert a backdoor into any Apple product and never will”, the NSA declined to comment to Reuters.

The Kommersant newspaper reported in March 2023 that the Kremlin directed officials involved in preparing for Russia’s 2024 presidential election to stop using Apple iPhones because it was concerned that they could be hacked by Western intelligence agencies.

In January 2020, then-US attorney general William Barr blasted Apple for not providing “substantive assistance” in helping the FBI access two iPhones owned by shooter Mohammad Alshamrani, who killed three people at Naval Air Station Pensacola, Florida, on December 6, 2019.

On the one hand, Apple has resisted government requests to create a “backdoor” into the iPhone that would make encrypted information stored on the devices accessible. On the other, Apple countered Barr, claiming it had provided “gigabytes” of data as part of the probe. Even the FBI, according to the WSJ, believed that Apple had provided “ample assistance”.

According to the latest biannual ‘Apple Transparency Report: Government and Private Party Requests, July 1-December 31, 2022’, Apple received 6,464 device requests from the US government, specifying 12,016 devices. Apple provided data in response to 5,296 of those requests.

If an Apple account is suspected of being used unlawfully, law enforcement may seek details of the customer associated with the account, account connections or transaction details or account content, the report states.

“An emergency request must relate to circumstances involving imminent danger of death or serious physical injury to any person. If Apple believes in good faith that it is a valid emergency, we may voluntarily provide information to law enforcement on an emergency basis.”

Usually, Apple notifies the customer if his/her data is sought. However, the customer will not be notified “if explicitly prohibited by the legal process, by a court order Apple receives or by applicable law”.

CIA’s AI interest and OpenAI Board link

The CIA has a secretive venture capital firm that funds US companies involved in cutting-edge technology, including AI, biotechnology, communication, data analytics, electronics, Internet of things, information technology, autonomous systems, sensors, robotics, space technology, virtual and augmented reality, physical security and quantum computing.

Initially called Peleus when founded in 1999, the Delaware-incorporated In-Q-Tel (IQT) describes itself as a not-for-profit company that provides early-stage equity funding to start-ups in cutting-edge technology.

Since its inception, IQT has invested in almost 700 companies that have “delivered impactful, mission-critical technologies to its government partners”. IQT’s partners include the CIA, CBP, DHS, the FBI, US Cyber Command and the trilateral partnership between the US, UK and Australia.

Last year, the CIA developed its own generative AI tool, Osiris, which can be used by all 18 IC agencies. Osiris runs on open-source data (unclassified and publicly or commercially available), and its chatbot works like ChatGPT. The CIA hasn’t disclosed whether Osiris will be used on classified networks.

In May, the IC roped in Microsoft to develop a fully secure chatbot for handling sensitive intelligence data, unlike ChatGPT, which depends on cloud services to deduce patterns from data and absorb information. The GPT-4-based model, which operates without Internet access, will allow the intelligence agencies to analyse top-secret data and develop code. It runs on a special network that is accessible only to the US government and is designed so it can’t be breached or hacked.

Microsoft’s engagement with the intel agencies is more than a decade old, according to the Snowden documents, with the tech giant allowing the NSA to circumvent its encryption.

In 2013, Microsoft gave the NSA easier access to its cloud storage service SkyDrive under PRISM, a secret project in which the NSA collects Internet communications from American Internet companies, like Google and Apple, under Section 702. The NSA, which already had access to Outlook.com, could also intercept web chats on the portal, and it collected Skype video calls as well.

Microsoft denied the existence of PRISM and stressed the importance of user privacy though Skype had joined the secretive programme in 2011.

Another concern is the presence of national security experts, including a former CIA clandestine officer, on the OpenAI board of directors, and the company’s links to the government. The intel agencies don’t want OpenAI’s breakneck AI revolution to spin out of control.

Will Hurd, a former CIA officer and a Representative from Texas, joined the OpenAI Board in May 2021 and quit in July 2023 to focus on his 2024 presidential campaign.

In April, Altman joined the Department of Homeland Security’s new federal Artificial Intelligence Safety and Security Board, along with the CEOs of other AI giants, including Microsoft, Google and IBM, to ensure that AI works in the national interest.

The big question is whether the data of Apple customers using ChatGPT will be safe, considering the hacking of iPhones by US intel agencies, the company’s cooperation with spy agencies, the increasing use of AI by those agencies and the American government’s bid to control AI.
