For one of our customers, one of the world’s leading snack producers, AI supports elements of recipe creation, a historically complicated task given the dozens of possible ingredients and ways to combine them. By partnering product specialists with AI, the organization can generate higher-quality recipes faster. Its system has reduced the number of steps required to develop new product recipes from 150 (on average) to just 15. Now, it can delight customers more quickly with new products and new experiences that keep them connected to the brand.
Importantly, AI does not work in isolation; it augments skilled teams, providing guidance and feedback that further improve results. This is a hallmark of successful AI solutions: they are ultimately designed for people. A multidisciplinary team that combines technical and domain expertise with a human-centered approach enables organizations to extract maximum value.
As you think about how to get the most out of AI, your strategy should also include the right guardrails.
As AI solutions become more sophisticated, and more frequently and deeply embedded in software, products, and daily operations, so does their potential to amplify human mistakes. A common anti-pattern we see is people unintentionally over-relying on a fairly capable AI: think of the developer who doesn’t verify AI-generated code, or the Tesla driver lulled into a false sense of security by the car’s autopilot functions.
Careful governance parameters around the use of AI are needed to avoid this kind of over-reliance and exposure to risk.
While many of your AI experiments can produce interesting ideas to explore, you also need to consider the tools that support them. Some AI solutions aren’t built following the kind of sound engineering practices you’d demand of other business software. Think carefully about which ones you would be confident deploying to production.
It helps to test AI models as you would any other application, and not to let the rush to market cloud your judgment. AI solutions should be supported by the same principles of continuous delivery that underpin good product development, with progress made through incremental changes that can easily be reversed if they don’t have the desired impact.
You’ll find it helps to be honest about what you consider a “desired” outcome; it may not just be financial metrics that define your success. Depending on the context of your organization, productivity and customer experience may also be important considerations. You can look at other leading indicators, such as your team’s awareness of the potential of AI and their level of comfort in exploring, adopting or deploying AI solutions. These factors can give you confidence that your team is on track to improve lagging indicators of customer experience, productivity and revenue. However you approach it, you’re more likely to succeed if you’ve identified these metrics early on.
Finally, for all the threat AI poses to people’s jobs, or even to humanity in general, you’ll do well to remember that it is your people who will be using the technology. Think about the human side of change, striking a balance between encouraging people to adopt and innovate with AI and remaining sensitive to the problems it can present. For example, you may want to introduce guidelines to protect intellectual property when models rely on external sources, or to safeguard privacy when sensitive customer data is involved. We often find it best to give our people a say in where AI augments their work. They know, better than anyone else, where it can have the most impact.
This content was produced by Thoughtworks. It was not written by the MIT Technology Review editorial team.