The rocky road and pitfalls in GenAI adoption
By Amit Roy Choudhury
Singapore’s AI policymaking acumen may face a test with Apple’s plan to integrate ChatGPT into its operating system, especially in the context of government-issued iPhones.
Successful AI implementation needs good and flexible policy frameworks. Image: Canva
Elon Musk’s recent comment that he would ban iPhones from his company premises over security concerns is likely to have raised a few eyebrows among those who look after cybersecurity for the Singapore government.
This is because Apple is one of the preferred vendors for the phones and tablets the government issues to its employees.
The issue, while minor in the bigger scheme of things, could have wider ramifications for Singapore’s regulatory framework and its objective of becoming a centre of AI innovation.
Musk was reacting to an announcement by Apple that it would integrate OpenAI’s artificial intelligence (AI) engine into the Apple operating system (OS) to create a service called Apple Intelligence.
The OS-level integration would enable, among other features, phone-based access to OpenAI’s generative AI assistant, ChatGPT, through Apple’s default assistant Siri.
These upgrades to the iPhone OS will start rolling out in September, when Apple’s latest iOS 18 becomes available in beta form, and many of the planned OpenAI-driven AI features are likely to take more than a year to reach devices, according to Apple.
Musk thinks Apple’s approach of integrating ChatGPT at the OS level is a bad idea, as it gives the assistant access to the device’s functions, such as the camera, voice recorder, and data, making it “an unacceptable security risk.”
Apple Intelligence and its capabilities
According to reports, Apple Intelligence will be able to proofread and suggest edits to what users write in emails, notes, or texts; summarise audio recordings; and check whether a rescheduled meeting would overlap with a user’s family commitments, among other functions.
Anyone familiar with ChatGPT will recognise that many of these features are similar to what is already available as a cloud service (both free and paid) on the OpenAI website. The difference is that Apple wants all of this done at the device level as far as possible, which is why it is looking at an OS-level integration of the AI assistant.
Apple’s arrangement includes safeguards such as processing user requests on the device rather than in a data centre wherever possible. When cloud processing is required, the service would be offered only from an Apple-controlled cloud whose data centres use Apple-made semiconductors.
The plan also envisages diverting requests Apple cannot handle to ChatGPT. There is no reason to doubt Apple’s commitment to security. But what happens if there is a security breach, and what contingencies are in place if one occurs?
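Based on these public descriptions, the decision flow can be pictured roughly like the sketch below. Everything in it, the type names, the capability checks, and the consent flag, is an illustrative assumption rather than Apple’s actual implementation.

```swift
// Hypothetical sketch only: these types and checks are assumptions,
// not Apple's or OpenAI's actual APIs.
struct AssistantRequest {
    let prompt: String
    let needsLargeModel: Bool     // too complex for the on-device model
    let needsWorldKnowledge: Bool // beyond what Apple's own models cover
}

enum Destination {
    case onDevice    // handled locally; data never leaves the phone
    case appleCloud  // Apple-controlled data centre on Apple silicon
    case chatGPT     // diverted to OpenAI, and only with user consent
    case declined    // user withheld consent; the request is not sent out
}

func route(_ request: AssistantRequest, userConsents: Bool) -> Destination {
    if !request.needsLargeModel {
        return .onDevice      // preferred path: on-device processing
    }
    if !request.needsWorldKnowledge {
        return .appleCloud    // heavier requests go to Apple's own cloud
    }
    // Only requests Apple cannot handle itself are diverted to ChatGPT.
    return userConsents ? .chatGPT : .declined
}

// Example: a complex, open-ended request with user consent goes to ChatGPT.
let request = AssistantRequest(prompt: "Plan a week-long trip to Kyoto",
                               needsLargeModel: true,
                               needsWorldKnowledge: true)
print(route(request, userConsents: true)) // chatGPT
```

Even in this simplified picture, the security question sits at the last step: what guarantees apply once a request crosses from Apple’s controlled environment into a third party’s.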
Another potential problem is that anything shared with GenAI tools may be retained for further training. You cannot easily delete something once it has been shared with ChatGPT, as Samsung found out the hard way.
What the government would consider
While processing data on an Apple-controlled cloud service should offer adequate security guarantees for an ordinary Apple device user, from the government’s perspective there are bound to be questions about the location of these data centres. Would the data leave Singapore, and what risks would be associated with that?
Would or should the government mandate that any backend processing of data in government-issued devices be done only in a Singapore-based data centre?
Another important consideration would be accidental information leakage while using on-device AI tools. Last year, Samsung reported three unintentional leaks of sensitive information by employees using ChatGPT, and subsequently banned the tool.
It is a given that the government will find an acceptable resolution to this potential security conundrum for government-issued Apple devices. However, the episode highlights the rocky road ahead in the use of AI, both for individuals and for government agencies.
The government is committed to using AI for the betterment of residents, and the National AI Strategy calls for Singapore to connect to global networks, work with the best, and pool resources to overcome the complex challenges that plague AI.
The strategy also states that the Singapore government will support experimentation and innovation while ensuring the responsible use of AI.
As the government rightly points out, AI’s full potential will only be unlocked through partnerships not only with like-minded government agencies from different countries but also with private industry.
Misgivings expressed by AI professionals
The problem is that AI is a beast that is hard to tame. While the Apple-OpenAI deal and Musk’s misgivings may be a storm in a teacup, security concerns about AI models developed by private industry remain.
Many professionals have publicly expressed misgivings about how the AI industry is developing.
In a recent open letter titled “A Right to Warn about Advanced Artificial Intelligence”, a group of former and current employees of frontier AI companies, such as OpenAI, warns of “serious risks posed by these (AI) technologies.”
According to them, these risks range from “the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems.”
The letter states that AI companies “have a strong financial incentive to avoid effective oversight… AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures and the risk levels of different kinds of harm.”
This point takes us back to Apple’s deal with OpenAI. The limitations of the system, especially concerning security, are something that cannot be ignored.
Crucial role of policy and regulations
This is where policy and regulations will play a crucial role in ensuring there is effective government oversight. The government needs policymakers who understand both the potential and risks posed by AI.
The rules must be flexible and adaptable, as this is a fast-evolving industry and regulation will always be playing catch-up with the speed at which AI technology evolves.
While some of the things AI can do may seem almost magical, as a society we are only taking our first baby steps on the AI journey. The disruption that AI will bring needs not just good regulatory policy but also an effort to inculcate a wider understanding of both the risks and the opportunities that AI presents.
For example, the next possible iteration of cognitive AI, which is being called artificial general intelligence (AGI), could prove to be a tremendously powerful tool for solving humanity’s major problems. But AGI would also come with serious risks of misuse, drastic accidents, and societal disruption.
The government’s role will have to transcend policymaking and include educating Singapore residents about both the potential as well as risks of AI.
As with previous waves of technology, Singapore can be a good testbed for how society and AI can co-evolve. Good policy can ensure that.