White House advisor: Don't repeat the mistakes of social media with AI

By Yong Shu Chiang

Citing social media regulation as a collective failure and the advent of AI as an opportunity, White House national cyber and tech security advisor Anne Neuberger highlighted the challenge governments face in embracing innovation without overlooking the risks.

Governments have to ensure that the benefits from technological innovation are available broadly, but also need to protect citizens, especially the vulnerable, said White House advisor Anne Neuberger (centre). Image: © 2024 GSMA / MWC

As countries continue to grapple with regulating social media, a White House official has shared that artificial intelligence (AI) represents both the “promise and the peril” of innovation – and an opportunity to do better in protecting citizens.


“We have to recognise that the business of business is doing business, and that the business of government is ensuring that, first, those benefits [from innovation] are available broadly, but also that we protect our citizens from the downside,” said Anne Neuberger, Deputy National Security Advisor, Cyber & Emerging Tech at the National Security Council.


Neuberger, who is also Deputy Assistant to the President in the White House, added that “in many cases [with social media], we have failed as governments to adequately protect children, to adequately protect the vulnerable, and we now have an opportunity as we look at AI-generated deepfakes, for example, to try to correct for that.”


She was speaking at a conference session titled “Building Digital: How Governments are Innovating for Citizens” at last week’s Mobile World Congress (MWC) 2024.

Assess risks before deploying AI


AI can spur tremendous innovation in areas such as healthcare, weather prediction and mathematics.


However, Neuberger said that a “deploy first, check later” approach is risky when it comes to critical services, highlighting this as the reason behind a recent executive order issued by US President Joe Biden to govern the development and use of AI in the United States.


“[The executive order] puts a real focus on assessing risk in those sectors before AI models are rolled out, so we know what can be done and we can benefit from the innovation but in a safe way.”


For instance, sophisticated AI deepfakes jeopardise the integrity of the democratic process, said Neuberger. With elections taking place in the US later this year, she highlighted a prior example of a deepfake “voice that sounds very much like a head of state … encouraging people not to vote.”


She added that the Department of Commerce has established a US Artificial Intelligence Safety Institute to work with countries and companies around the world on building standards for the safety, security and testing of AI models.


This came shortly before the United Kingdom’s AI Safety Summit last November, which similarly saw the launch of the UK’s own AI Safety Institute, as GovInsider previously reported.

However, she said, it is “difficult to say what is adequate testing before an AI model is ready to be rolled out. Then how do you keep testing it once it’s in the world?”


Despite these difficulties, Neuberger insisted that it is vital to brainstorm AI-driven defences to meet AI threats, and said that her team had convened telecom companies, tech companies and academic researchers to think about how to keep up with bad actors.


Singapore has also announced plans to accelerate AI testing methodologies through its AI Verify Foundation.



5G connectivity seen as an enabler


Government officials from the US, national cyber and tech security advisor Anne Neuberger (centre), and India, telecoms chief Dr Neeraj Mittal (right), agree that emerging technologies such as 5G promise "the greater good" but require guardrails and AI-powered security measures. Image: © 2024 GSMA / MWC

Another country using AI defences to ward off AI threats is India.


Speaking at the same session, Dr Neeraj Mittal, the Secretary of India’s Department of Telecommunications, said that the promise of advanced mobile technology led the Indian Government to embark on an 18-month rollout of 5G networks across the country.


“We're trying to bring in the use of this technology for the greater good of the country,” he said, while admitting that “the impact on the economy is to be seen because the 5G use cases are not yet visible anywhere.


“But we are focusing on this because we’re trying to see [benefits] by democratising innovation,” said Dr Mittal, who added that the country was approaching 100 million 5G users, with some estimating this could reach 300 million by 2025.


According to Dr Mittal, the rapid rollout was made possible by a quick auction of spectrum – 72GHz in 45 days – and expedited approvals for permits to install infrastructure – from 52 weeks to about two weeks. This is expected to uplift society and help bridge the digital divide in India.

Weeding out fake identities and fraudulent calls


With many spoofed calls originating from various parts of the world while displaying Indian phone numbers, the authorities have actively used AI to detect such calls and the people behind them.


“We have taken to AI to figure out a lot of people using fake proof of identity. And we detected half a million people who have taken SIM cards, up to 100 at a time, using the same identity,” he said.


“So I think one role of AI is in improving security where humans cannot do that work. The second of course is in improving efficiency of operations,” Dr Mittal said, adding that AI could help optimise energy usage and reduce the carbon footprint in the telecom industry.


“Can we deploy AI on the edge to make the user spectrum more efficient? Can it be shared? I think the role of the AI really will unfold as we go along. And I think these critical [sectors] will find AI very useful from the government perspective, from the regulatory perspective and from [the] side of citizens.”