The following are the main conclusions of the report:
Enterprises buy AI/ML, but struggle to scale it across the organization. The vast majority (93%) of respondents have multiple AI/ML projects in experimentation or in use, with larger companies likely to have the largest deployments. A majority (82%) say ML investment will increase over the next 18 months, and they will closely tie AI and ML to revenue goals. However, scaling remains a significant challenge, as do hiring skilled workers, finding the right use cases, and demonstrating value.

Successful deployment requires a talent and skills strategy. The challenge goes beyond attracting top data scientists. Enterprises need hybrid and translational talent to guide AI/ML design, testing, and governance, and a workforce strategy to ensure all users have a role in technology development. Companies can set themselves apart by offering workers clear opportunities for advancement and impact. For the general workforce, upskilling and engagement are key to supporting AI/ML innovation.
Centers of excellence (CoEs) provide a foundation for broad deployment, balancing technology sharing with tailored solutions. Companies with mature capabilities, typically larger companies, tend to develop systems in-house. A CoE offers a hub model, with a core ML team consulting across divisions to develop broadly deployable solutions alongside bespoke tools. ML teams should be incentivized to stay abreast of rapidly evolving AI/ML data science developments.
AI/ML governance requires robust model operations, including data transparency and provenance, regulatory foresight, and responsible AI. The intersection of multiple automated systems can amplify the risks linked to advanced data science tools, such as cybersecurity issues, illegal discrimination, and macro-volatility. Regulators and civil society groups are scrutinizing AI as it affects citizens and governments, with a particular focus on sectors of systemic importance. Businesses need a responsible AI strategy based on comprehensive data provenance, risk assessment, and checks and controls. This requires technical interventions, such as automated signaling of failures or risks in AI/ML models, as well as social, cultural, and other business reforms.
This content was produced by Insights, the custom content group of MIT Technology Review. It was not written by the MIT Technology Review editorial team.