Unlock your organisation’s AI value: From proof of concept to real-world impact

By Scott Shaw

The buzz surrounding AI means that it can become a ‘hammer looking for a nail,’ with companies rushing to integrate it into processes when it may not actually be necessary, or the right tool for the job.

AI’s success hinges on companies rewiring their operations and adopting an iterative AI strategy guided by constant experimentation, robust engineering practices and clear guardrails. Image credit: Thoughtworks.

As artificial intelligence (AI) proliferates across industries, the shift from proofs of concept (POCs) to scalable solutions becomes paramount. A recent study claims that as many as 90 per cent of AI and generative AI (GenAI) projects are stuck at the POC stage and never reach production.

At Thoughtworks, we are seeing a new kind of urgency taking shape in 2024. Leadership teams are demanding real results from their initial forays into AI. Delivering them, however, requires organisations to recognise that AI is not a standalone, plug-and-play tool.

AI’s success hinges on companies adopting an iterative AI strategy guided by constant experimentation, robust engineering practices and clear guardrails. This approach could require rewiring the way a company operates.

Laying the groundwork before AI implementation

Organisations need to put a few fundamental building blocks in place before they can take advantage of the AI breakthroughs that seem to be emerging every day. One is a solid data strategy that ensures a base level of relevant, credible and traceable data is readily available to feed into AI models. Without this foundation, an AI solution may simply enable the business to make misguided decisions faster.

It's also critical to employ tools like GenAI with a basic idea of what “good” looks like for the outcome leaders are trying to achieve. While these tools can be directed, they can’t be trusted to work without supervision or to vet the quality of their own results. Having tools and processes in place to continuously monitor and evaluate the output of AI systems is part of a responsible technology practice, and essential to avoiding unintended consequences.
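As one hedged illustration of such monitoring, the sketch below runs a few automated checks over a model response before it is accepted. The check names, the banned-terms list and the length threshold are assumptions for illustration, not a prescribed rubric.

```python
def evaluate_output(text, banned_terms=("guaranteed", "risk-free")):
    """Run simple automated checks on a model response.

    Returns a dict of named checks plus an overall 'passed' flag;
    in a production pipeline these results would be logged and
    tracked over time rather than inspected by hand.
    """
    checks = {
        "non_empty": bool(text.strip()),        # response is not blank
        "within_length": len(text) <= 500,      # crude length guardrail
        "no_banned_terms": not any(             # naive content filter
            term in text.lower() for term in banned_terms
        ),
    }
    checks["passed"] = all(checks.values())
    return checks
```

In practice the checks would be tailored to the use case, for instance semantic similarity to a reference answer or a second model acting as a grader, but the principle is the same: no output is trusted until it has been independently evaluated.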

Once these parameters are in place, Thoughtworks encourages organisations to start testing AI with possible use cases emerging in their operations. Like all innovations, it can be difficult to understand the full potential or range of applications until the technology is firmly in play.

High-quality labelled data and data access

Another common challenge that prevents companies from deploying their models in production is the lack of transparency in complex AI models. This “murkiness” makes it difficult to assess accuracy and suitability for specific needs. Thoughtworks helps companies address this by providing tools and expertise to confidently evaluate Large Language Models (LLMs).

Scott Shaw, Thoughtworks' APAC Chief Technology Officer.

By offering accelerators for tasks like text classification and data labelling, Thoughtworks’ pre-built solutions streamline the development process, encouraging companies to move beyond proof-of-concept (POC) stages and achieve faster results with their AI projects.

With increased confidence in deploying AI models, companies can address the opaque nature of LLMs and make more informed decisions. Leaders will be able to confidently answer questions like, “How do I know if the LLM outputs are accurate?” or “Which model or approach is best for my use case?”

Aside from data labelling, it is critical that AI POCs reflect and preserve your organisation's privacy and security policies. Incorporating existing access controls into the LLM's behaviour not only strengthens security but can also reduce training costs.

For instance, when integrating an LLM with a data platform, the context and model output should take the user’s role and access privileges into account. This ensures users only access data they have permission to see, enhancing overall system security.
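A minimal sketch of that idea follows, assuming a hypothetical document store in which each record carries the set of roles allowed to see it; only permitted documents ever reach the model’s context.

```python
from dataclasses import dataclass


@dataclass
class Document:
    text: str
    allowed_roles: frozenset  # roles permitted to view this record


@dataclass
class User:
    name: str
    roles: frozenset


def build_context(user, documents):
    """Assemble the LLM context from only the documents the user may see."""
    visible = [d.text for d in documents if d.allowed_roles & user.roles]
    return "\n".join(visible)
```

Because the filtering happens before prompt assembly, the model never holds data the user is not entitled to, so the access policy cannot be bypassed through clever prompting.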

Effective prompting

Currently, crafting effective prompts relies heavily on trial and error, making it difficult to scale and maintain GenAI solutions. These prompts, which guide the AI’s responses, can become ineffective as models evolve. Thoughtworks tackles this by developing tools that optimise prompts for specific models. This not only simplifies production maintenance of GenAI applications but also allows greater portability between models, ensuring businesses can use the most suitable model for their needs without redesigning prompts from scratch.
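One way to picture such portability (a hedged sketch, not Thoughtworks’ actual tooling) is a small registry that keeps each model’s prompt format behind a common interface, so swapping models does not ripple through the application. The model names and formats below are illustrative, not real product identifiers.

```python
# Hypothetical per-model prompt templates; call sites stay model-agnostic.
PROMPT_TEMPLATES = {
    "model-a": "### Instruction\n{task}\n### Response\n",
    "model-b": "[INST] {task} [/INST]",
}


def render_prompt(model, task):
    """Format a task for a specific model without callers knowing its syntax."""
    if model not in PROMPT_TEMPLATES:
        raise ValueError(f"No prompt template registered for {model!r}")
    return PROMPT_TEMPLATES[model].format(task=task)
```

Switching providers then means registering one new template rather than hunting down every hard-coded prompt in the codebase.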

yuu Rewards Club: a case study in rapid AI scaling

yuu Rewards Club, Singapore’s leading coalition loyalty platform, exemplifies how AI can be used to scale up quickly. It integrates top brands across retail, dining, entertainment, banking, and more, offering a hyper-personalised mobile experience and a single currency for maximised rewards.

Equipped with advanced AI and ML capabilities and a robust partner ecosystem, the platform revolutionises traditional loyalty programmes, offering consumers new shopping experiences, such as convenient offer redemptions across multiple brands through a single app, and personalised offers and rewards.

Developed by minden.ai, a Temasek-founded technology venture, in collaboration with Thoughtworks, the platform skyrocketed to become the number one app on both major app stores within a month and amassed over a million members in just 100 days.

This is a compelling example of how user-centric design, agile development and a focus on scalability can be instrumental in achieving rapid growth with AI-powered platforms.

South Asian Bank’s GenAI chatbot revolutionises customer service

Thoughtworks partnered with a leading South Asian bank to tackle a common challenge: scattered data hindering customer experience. Data was siloed across various sources, making it difficult for product managers to efficiently access customer information.

Leveraging GenAI, the team analysed the datasets, identified key pain points and built a production-ready GenAI-powered chatbot. Additionally, they created a reusable framework that could be adapted to any fine-tuned language model, ensuring scalability.

The GenAI agent proved to be a game-changer. Customer service capabilities were significantly improved, and users enjoyed a more streamlined dialogue experience.

Responsible AI

In both instances described above, swift progression from POC to full-scale production was facilitated by strong leadership endorsement. Such organisation-wide buy-in is augmented by a dynamic GenAI strategy that can keep pace with the rapidly evolving marketplace and user needs.

Organisations should also establish a responsible AI framework that addresses critical aspects such as privacy, security, and compliance with laws and regulations. As AI and its capabilities evolve, safeguards are essential to ensure its ethical and responsible deployment. For instance, we've developed a comprehensive Responsible Tech Playbook, in collaboration with the United Nations, covering AI alongside sustainability, data privacy and accessibility considerations.

For organisations aiming to become adept in leveraging AI, the true measure of success lies not merely in automating routine tasks, but in enhancing human capabilities and magnifying the impact of individual contributions within the organisation.

The author is the APAC Chief Technology Officer for Thoughtworks.
