Responsible AI starts with explainable AI – SoftServe
By SoftServe
AI system developers need to incorporate responsible AI principles in the early phases of development to explain, validate and improve AI outcomes, says SoftServe's Andrew Tan.
Ethical principles need to be incorporated across the AI product life cycle, says SoftServe's Andrew Tan. Image: Canva.
As the public sector expands the use of artificial intelligence (AI), it is becoming ever more important for governments to tackle the black box problem of AI.
The lack of transparency in AI's decision-making processes can erode citizens' trust in the use of AI in government services.
Earlier this year, Al Jazeera reported that an Indian state government's AI-driven welfare disbursement system had wrongfully excluded thousands of legitimate beneficiaries from food subsidies.
There is an increasing need today to ensure that AI-driven outcomes can be explained, says software company SoftServe’s Enterprise Solution Lead, Andrew Tan.
Tan says that organisations should take a lifecycle approach to developing responsible, explainable AI systems.
This means incorporating ethical principles across the AI product life cycle – from product development and engineering to implementation.
Applying a human-in-the-loop principle from the early phases of product development helps to validate and improve outcomes as the AI system moves into later stages.
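As an illustrative sketch of what such a human-in-the-loop gate might look like in practice – the function names, review queue and 0.9 confidence threshold below are assumptions for illustration, not SoftServe's implementation – a model's low-confidence decisions can be routed to a human reviewer before any action is taken:

```python
# Minimal human-in-the-loop sketch: route low-confidence AI decisions to a
# human reviewer instead of acting on them automatically.
# All names and the 0.9 threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Decision:
    applicant_id: str
    approved: bool
    confidence: float        # model's confidence in [0, 1]
    reviewed_by_human: bool = False

CONFIDENCE_THRESHOLD = 0.9   # below this, a person must confirm the outcome

def human_review(decision: Decision) -> Decision:
    """Placeholder for a real review queue (e.g. a case-worker dashboard)."""
    print(f"Escalating applicant {decision.applicant_id} for manual review")
    decision.reviewed_by_human = True
    return decision

def decide(decision: Decision) -> Decision:
    """Act on the model's decision only when confidence is high enough;
    otherwise keep a human in the loop."""
    if decision.confidence < CONFIDENCE_THRESHOLD:
        return human_review(decision)
    return decision

# Example: a borderline eligibility call gets escalated rather than auto-rejected.
result = decide(Decision(applicant_id="A-1023", approved=False, confidence=0.62))
print(result)
```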
Developing tools that promote responsible AI
Aside from adhering to these principles, SoftServe's R&D team is working with partners to develop and test tools that can explain AI, or train users to respond correctly when AI systems fail.
The firm is currently building extended reality tools to simulate various scenarios that organisations can use to train AI users.
Extended reality is an umbrella term that covers augmented reality, virtual reality and mixed reality.
Responsible AI is explainable AI. Hence, SoftServe had to ensure that the simulations themselves were transparent and could be explained to users, he says.
The simulations correspond to real-world scenarios in which responsible AI principles have been violated, and users must respond accordingly.
Tan points to the importance of building tools and capabilities around explaining AI outputs: “It’s important for organisations to ensure that there are enough resources set aside during any project to incorporate and implement responsible AI.”
For example, this includes building the necessary monitoring, audit and reporting mechanisms.
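A minimal sketch of what such an audit mechanism might record is shown below; the field names and JSON-lines format are assumptions for illustration rather than a prescribed design:

```python
# Illustrative audit logging: record every AI decision with enough context to
# explain and review it later. The schema below is an assumption, not a
# standard; a real deployment would also cover access control and retention.

import json
import datetime

AUDIT_LOG_PATH = "ai_decision_audit.jsonl"  # hypothetical log location

def log_decision(model_version: str, inputs: dict, output: dict, explanation: str) -> None:
    """Append one decision record as a JSON line for later audit and reporting."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "explanation": explanation,  # e.g. the main factors behind the decision
    }
    with open(AUDIT_LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage with hypothetical values
log_decision(
    model_version="eligibility-model-1.4.2",
    inputs={"household_income": 812, "dependents": 3},
    output={"eligible": True, "confidence": 0.94},
    explanation="household_income and dependents were the main drivers",
)
```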
“The proliferation of AI means that responsible AI will continue to be an important topic and get more attention.”
Defining a responsible AI approach
Responsible AI principles are applied and implemented differently across the organisations SoftServe works with, says Tan.
There is a need to actively engage stakeholders to define a responsible AI approach specific to each organisation.
SoftServe typically defines the approach and principles to adopt by referencing widely accepted frameworks, such as those developed by Singapore's tech regulator, the Infocomm Media Development Authority (IMDA).
These frameworks include the Model AI Governance Framework, Singapore's voluntary approach to AI regulation, as well as AI Verify, a software testing toolkit that validates AI systems against internationally recognised principles.
“We then track these principles across the project lifecycle, with the implementation approach evolving as needed when new AI models or algorithms are introduced,” Tan adds.
The Singapore government recently updated the governance framework to cover GenAI developments and has embarked on an international partnership to develop an AI playbook for small states, GovInsider reported.