Responsible AI can empower democracies, says Anthropic CEO

By Yogesh Hirdaramani

At the recent AWS Public Sector Summit 2024 held in Washington DC, Anthropic’s CEO, Dario Amodei, spoke about how the AI startup has carved a niche for itself as a leading player in AI safety.

Dario Amodei, CEO of AI startup Anthropic, recently shared how the GenAI startup is driving AI safety innovation. Image: Amazon Web Services

In early September, the United States AI Safety Institute signed landmark agreements with Anthropic and OpenAI that stipulate that the two AI players will provide the institute with access to major new models prior to their public release, with the goal of supporting AI safety research, testing and evaluation.

 

While OpenAI’s ChatGPT has become synonymous with large language models (LLMs) and generative AI (GenAI), Anthropic has been steadily making strides as one of the key GenAI players when it comes to AI safety – with Time Magazine referring to it as the company betting on AI safety as a winning strategy.

 

At Amazon Web Services (AWS) Public Sector Summit 2024, CEO and Co-Founder of Anthropic, Dario Amodei, spoke about the company’s commitment to AI safety, emphasising that it is not “in conflict with having the best model”.

 

Citing the company’s Claude 3.5 model, which frequently outperforms top models on public benchmarks, he said: “we managed to produce the most capable model but we don’t believe we sacrificed safety and security”.

 

Though Anthropic is a startup, its family of Claude LLMs has quietly become one of the biggest competitors to ChatGPT and Google’s Gemini, supported by major investments from AWS.

 


GenAI-powered services to empower democracy

 

During his fireside chat with AWS’ Vice President for Public Sector, Dave Levy, Amodei made the case that AI-powered services will be key to supporting public services and making democratic systems more effective.

 

“AI needs to empower democracies and allow them to function better and remain competitive on the world stage,” he said. Amodei explained that well-designed AI-powered services can make democracy more effective, while poor services could undermine it.

 

For instance, he said, GenAI can play a key role in “reinventing the provision of services”, such as developing conversational chatbots that make health and voting information more accessible to citizens.

 

More crucially, GenAI could automate “large scale” tasks for civil servants, such as responding to Freedom of Information Act requests and analysing large volumes of data, Amodei noted.

 

He added that public sector clients can choose from a range of models depending on their needs: a public-facing application might leverage Claude 3 Haiku, which is cheaper and can be deployed at scale, while more specialised applications might tap on Claude 3.5 Sonnet instead.
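
As a rough illustration, the trade-off Amodei describes comes down to passing a different model identifier to the same API call. The sketch below uses the Anthropic Python SDK with the model names current at the time of the summit; the two-tier routing rule is a hypothetical example, not an Anthropic recommendation.

```python
# Illustrative sketch: routing requests to a cheaper or a more capable
# Claude model depending on the workload. The tiering is hypothetical.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

MODELS = {
    "high_volume": "claude-3-haiku-20240307",     # cheap and fast, for public-facing apps
    "specialised": "claude-3-5-sonnet-20240620",  # more capable, for complex tasks
}

def ask(prompt: str, tier: str = "high_volume") -> str:
    """Send a single-turn prompt to the model chosen for this tier."""
    response = client.messages.create(
        model=MODELS[tier],
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# A high-volume citizen query goes to the scalable model...
print(ask("Where can I find official information on voter registration?"))
# ...while a specialised analysis task taps on the more capable one.
print(ask("Summarise the requests in this FOIA correspondence.", tier="specialised"))
```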

 

“It’s an open question which direction AI will take us. I want it to take us in the direction of reinventing and revitalising democracy and not the negative alternative.”

Securing GenAI innovation

 

Amodei credited AWS infrastructure with securing the cloud layer of Anthropic’s models, which enables the startup to work with public sector clients and assure them of the safety, security, and privacy they need.

 

He pointed to AWS’ dedicated clouds tailored for government use as one example of how AWS has met the needs of the highly regulated sector. These include AWS GovCloud, an isolated sovereign cloud that supports the United States public sector, as well as AWS Dedicated Local Zones, one of which is currently deployed in Singapore.

 

On securing the application layer, Amodei highlighted Anthropic’s responsible scaling policy, which, he explained, “aims to deal with AI models as they scale and become more powerful”.

 

Anthropic’s research has broadly focused on the performance and dangers of “frontier models”, which could cause catastrophic harm as their capabilities increase.

 

“One of the key concepts in our responsible scaling policy is an increased level in security. We’ve worked harder and harder to make sure these models are secure.”

“As far as we’re concerned, we think these models are national security assets,” Amodei said.

 


Testing frameworks

 

As part of Anthropic’s strategic focus on safety and security, the startup has carved a niche for itself in the science of AI testing.

 

The company works closely with the National Institute of Standards and Technology (NIST) on its AI Risk Management Framework and has introduced trust and safety filters that monitor model behaviour and watch for coordinated attacks.
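
Anthropic has not published how these filters work internally, but conceptually they wrap model calls with checks on both the incoming prompt and the outgoing completion. The sketch below is a hypothetical illustration of that pattern; `flags_policy_violation` is a stand-in for a real trained moderation classifier.

```python
# Hypothetical sketch of a trust and safety filter wrapping model calls.
# A production system would use a trained moderation classifier and
# abuse-pattern detection, not a keyword list.
import logging

def flags_policy_violation(text: str) -> bool:
    """Stand-in for a real moderation classifier."""
    blocklist = ("build a weapon", "bypass security controls")
    return any(phrase in text.lower() for phrase in blocklist)

def safe_generate(generate, prompt: str) -> str:
    """Screen both the prompt and the completion before returning."""
    if flags_policy_violation(prompt):
        logging.warning("Prompt blocked by safety filter")
        return "This request cannot be processed."
    completion = generate(prompt)
    if flags_policy_violation(completion):
        logging.warning("Completion withheld by safety filter")
        return "The response was withheld by the safety filter."
    return completion
```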

 

“I was very pleased to see NIST had actually taken the Singapore AI Safety Framework and mapped that against the risk management framework that NIST has,” he said. Singapore’s AI Safety Toolkit is an open-source AI testing resource developed by the AI Verify Foundation, GovInsider reported previously.

 

Anthropic has also introduced service cards, which describe the various aspects an AI model has been tested for, such as toxicity and bias.

 

“We’d like to make sure we provide that transparent information to organisations because every organisation has a different problem…. And that means being transparent about how the model has done based on these benchmarks.”

 

In a separate conversation with GovInsider, AWS’ Vice President of Field Technology and Engineering, Dominic Delmolino, said that a shared responsibility model is emerging in the AI space, and that AWS works closely with customers to understand how they can run tests and oversee model outputs.

Constitutional AI approach

 

Steered by a “constitutional AI” approach, Anthropic’s AI models are given “an explicit set of principles” to abide by – which could include a company’s Terms of Service or a broader document, such as the UN’s Universal Declaration of Human Rights.
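
In Anthropic’s published research, this works as a critique-and-revision loop: the model drafts a response, critiques the draft against a principle drawn from the constitution, and then revises it, with the revised outputs used to train the final model. The sketch below makes that loop concrete using the Anthropic Python SDK; the single principle and the prompt wording are illustrative, not Anthropic’s actual training prompts.

```python
# Minimal sketch of the critique-and-revision loop behind constitutional AI.
# The principle and prompts are illustrative examples only.
from anthropic import Anthropic

client = Anthropic()

def ask(prompt: str) -> str:
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# One example principle; a real constitution is a longer document.
PRINCIPLE = "Choose the response that best respects human rights and dignity."

def constitutional_answer(question: str) -> str:
    draft = ask(question)
    critique = ask(
        f"Critique this response against the principle '{PRINCIPLE}':\n\n{draft}"
    )
    return ask(
        "Revise the response to address the critique while remaining helpful.\n\n"
        f"Response: {draft}\n\nCritique: {critique}"
    )
```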

 

“We’re starting to think about if we can provide custom constitutions for a particular constituency,” Amodei said.

 

This approach makes their AI platforms more explainable and interpretable, he noted, as leaders can refer to these explicit principles to understand the models’ responses.

 

The company is also researching how to further interpret the behaviour and responses of GenAI models, he explained.

 

Recently, Anthropic launched an API that displays model features in an understandable way and allows users to adjust these features to “more precisely steer the model”.
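
The details of that API have not been made fully public, but the underlying idea from Anthropic’s interpretability research is that a learned “feature” corresponds to a direction in the model’s activation space, and scaling that direction up or down nudges the model’s behaviour. The toy sketch below illustrates only that arithmetic; it is not Anthropic’s API.

```python
# Toy illustration of feature steering: nudge a model's hidden
# activations along a learned feature direction. Conceptual only.
import numpy as np

def steer(hidden: np.ndarray, feature: np.ndarray, strength: float) -> np.ndarray:
    """Add a scaled, unit-length feature direction to the activations."""
    direction = feature / np.linalg.norm(feature)
    return hidden + strength * direction

rng = np.random.default_rng(0)
h = rng.standard_normal(8)   # stand-in for one token's activations
f = rng.standard_normal(8)   # stand-in for a learned feature vector

amplified = steer(h, f, strength=3.0)    # positive strength amplifies the feature
suppressed = steer(h, f, strength=-3.0)  # negative strength suppresses it
```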

 

While much of this research is still in its early stages, it informs their current approach to training and monitoring models, Amodei added.