From Insight to Impact

How to use artificial intelligence to create superior business value

Executive Summary

Artificial intelligence (AI) has been hyped for many years as a technology that will radically transform business. AI enables companies to analyze much larger and more variable unstructured data sets and information – and to encode tacit knowledge from many sources alongside structured data – to enable better insights and intelligence. However, it is fair to say that AI applications have not yet become as commonly used as many had predicted, with a recent Arthur D. Little survey showing that only 16% of AI users believe they are gaining full potential from their use of it. In many organizations, AI applications remain stuck at the pilot stage, or else are limited to specific applications such as customer interaction and intelligence. More widespread adoption of AI for key management decision making is often hindered by the lack of an adequate strategy, as well as concerns regarding loss of control, lack of transparency, and the perceived threat of job losses.

Yet amidst the hype, solid use cases are already emerging that apply AI and machine learning (ML) to provide new data-driven insights that deliver significant additional business impact. Based on in-depth, first-hand experience and primary research across multiple industries, this Report provides practical, realistic insight into how this impact can be achieved.

A crucial starting point is that AI should be used primarily to augment, rather than replace, human decision making. This allows managers to make smarter, more informed decisions faster and with greater confidence, leading to better impacts. Employing AI in this way also overcomes many of the concerns regarding its adoption.

Using this philosophy, business impact can be achieved through three types of application: augmented decision making, augmented innovation, and augmented productivity. This Report provides a series of compelling use cases for each, such as predicting rail network disruptions and clinical trial participation, price prediction of petrochemical products, telecoms network optimization, pharmaceutical regulation management, improving product development in the food industry, and chemical plant energy efficiency improvement.

Successfully building capabilities to augment business impacts through AI requires a holistic strategy and process that goes far beyond simply employing data scientists and beginning a few pilot projects. In the Report, we set out an approach that organizations can adopt, including governance, setting up multidisciplinary teams, developing data frameworks, and implementing the right technology platforms.

1. The potential of AI to augment business decision making

Using data for informed business decisions is nothing new. Supply chain managers, plant operators, marketeers, and strategists all rely on data to do their jobs properly. In fact, companies have been making decisions in broadly the same way for the last 500 years, since double-entry bookkeeping was introduced.

Essentially, companies have taken a deterministic approach to making decisions, basing them overwhelmingly on structured numeric data from within the company and measuring progress against highly deterministic outcomes. While this was successful in the past, this approach alone is no longer enough in a more interconnected, complex world. New ways of decision making are urgently required that avoid oversimplifying the problem, ignoring inherent uncertainties, and therefore arriving at the wrong answer. These must include probabilistic, external, textual, and qualitative unstructured data alongside traditional deterministic, internal, numeric, and quantitative structured sources.

AI and its opportunities

Enter AI. The rise of AI has enabled companies to analyze much larger and more variable unstructured data sets and information and to encode tacit knowledge from many sources alongside structured data. This is driving the necessary shift in the way companies operate. They can now use modeled probabilistic outcomes to help make better decisions to gain competitive advantage in a faster-moving, ecosystem-driven world.

Since the advent of AI, people have both wondered and prophesied about how it would change business intelligence and decision making. A wide range of applications has been touted – from highlighting potential avenues for drug research to predicting when machines will require servicing, all the way to self-driving cars, trucks, and planes. Analyst firm International Data Corporation (IDC) expects the overall AI market to break the US $500 billion mark by 2024.

Today, it is fair to say that AI applications have not yet become as commonly used as some had predicted. As Arthur D. Little research shows, only 16% of AI users believe they are gaining full potential from their use of it. Gartner argues that advanced AI methods such as knowledge graphs and natural language processing (NLP) have led to “inflated expectations.” We’ve all had frustrating experiences trying to communicate with AI-powered chatbots or virtual assistants.

Focusing on AI plus the human

So, can AI move beyond the hype and deliver promised benefits? It certainly cannot yet do everything that some of its promoters claim, but it can still deliver enormous business value. What is crucial is how AI is used – rather than replacing human decision making it should augment it. The human remains at the center of the process, retaining control while benefiting from superior insight based on faster analysis of a wider range of larger, more varied data sources. This approach overcomes many business concerns about deploying AI, such as fears of a loss of control and lack of transparency or understanding over how decisions are made. We further explore these worries in Chapter 4.

Enhancing human capabilities by using AI and ML amplifies and accelerates the insight process. It enables people to make smarter, more informed decisions faster and with greater confidence, leading to better impacts. Clear AI business cases are emerging, across both industries and functions, where AI insight enables strategic and bottom-line advantages such as forecasting raw material prices, predicting potential risks more accurately, or automatically rerouting traffic within cities to avoid congestion. We highlight a selection of these in Chapter 3.

Augmented decision making also provides a first step on the road to a fuller use of AI in the future. A good analogy is autonomous driving, currently positioned in the “Trough of Disillusionment” within Gartner’s AI hype cycle. We are currently a long way from fully self-driving cars, but the intermediate steps on the journey, such as the ability for vehicles to automatically park themselves, provide immediate useful benefits that support the driver. Augmented decision making also provides AI applications with feedback and validation so they can learn and evolve – the more we use AI, the faster the feedback loop will be, accelerating the pace of change.

Moving beyond the headlines

Many organizations have already begun AI programs. However, to build long-term value, they need to take a holistic view. Simply hiring data scientists across different departments in an uncoordinated manner and hoping for significant advances is unlikely to deliver lasting advantages. Even if pilot projects show promise, they may not be replicable or scalable. If the wrong technology foundations or working practices are put in place, they can be difficult to change as AI use expands. There may be internal concerns about AI that hold back adoption, particularly worries about being replaced by a machine rather than being helped to do a better job. People need to understand the benefits within their roles – how AI can help them get to informed answers more quickly or direct them to look in new places to solve problems.

To drive success, companies must take a step back and consider a combination of the following:

  • Skills (data scientists) – including capturing knowledge and best practice effectively. There are many subareas within AI, so organizations need to focus on the skill sets required for their specific challenges and objectives.
  • Data/information sources – both internal and external, including structured and unstructured data and tacit knowledge. Businesses often focus overly on internal, structured data. Instead, they can achieve breakthrough results by bringing external, unstructured, and tacit sources together for the first time.
  • Platform – a technical platform that underpins all projects to enable consistency, scalability, experimentation, and reuse of models. This hugely reduces the effort involved in creating new models, with an expanding library of components that can be used to continually accelerate development.

Aimed at both business and technical leaders, this Report explains how superior insights, powered by AI, drive business results through augmented decision making, innovation, and productivity. Using real-world examples, primary industry research, and Arthur D. Little’s client experience, the Report covers the benefits and risks/challenges of AI and outlines approaches that can quickly deliver exponential business value. Augmenting human capabilities may be an early step on the road to an AI world, but it is both extremely powerful and a necessary building block for the future. For your business to succeed in the next five years and beyond, you need to think about how to make augmented decision making a central part of your strategy.

Key terms

Here we list three of the most important AI terms, along with their benefits and uses. A full AI glossary is included at the end of the Report.

Natural language processing

NLP is a field of AI that gives machines the ability to process and derive meaning from linguistic inputs (i.e., natural language text and/or speech). Importantly, NLP models are a key element of Web search engines. In addition, among other functionalities, they enhance communication between humans and machines (e.g., by powering chatbots and virtual assistants, such as Siri and Alexa), as well as between humans (e.g., by enabling automatic translation between languages). In the past few years, NLP models have improved dramatically, allowing them to match or even surpass human levels of performance in a variety of natural language understanding tasks. For example, applied to regulatory documents, NLP can highlight the significance of specific potentially contentious passages, providing humans with guidance on where to focus.

NLP allows for exceptionally large collections of text and speech to be processed automatically in real time. Current NLP applications include:

  • Information retrieval – NLP powers search engines, enabling users to find relevant information from within trillions of Web pages.
  • Sentiment analysis – extracting subjective opinion, such as how people feel, from text sources.
  • Machine translation – automatically translating between languages in real time.
  • Text classification – classifying and sorting texts according to predefined categories, such as spam/not spam or the news topic (e.g., sport, politics, business) of a media article.
  • Speech recognition – automatically transcribing audio/spoken content into text, such as for captioning of video clips.
  • Chatbots/virtual assistants – applications that communicate with users in a Q&A format via natural language in real time.
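
The sketch below illustrates two of these tasks, sentiment analysis and text classification, using the open source Hugging Face transformers library as one possible toolkit; the default models it downloads and the example sentences are illustrative placeholders rather than recommendations.

```python
# A minimal sketch of two common NLP tasks from the list above, using the
# Hugging Face "transformers" library (assumed to be installed).
from transformers import pipeline

# Sentiment analysis: extract subjective opinion from free text.
sentiment = pipeline("sentiment-analysis")
print(sentiment("The new timetable has made my commute far less stressful."))
# -> e.g., [{'label': 'POSITIVE', 'score': 0.99}]

# Zero-shot text classification: sort texts into predefined categories
# without task-specific training data.
classifier = pipeline("zero-shot-classification")
print(classifier(
    "Central bank raises interest rates amid inflation concerns.",
    candidate_labels=["sport", "politics", "business"],
))
```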

Knowledge graphs

Knowledge graphs natively capture and store vast amounts of data relationships. They can map large volumes of complex, interconnected data while maintaining high performance and represent this knowledge in ways that are understandable by both humans and machines. For example, by running a knowledge graph against the UK’s Companies House data, it is possible to map all relationships between different organizations, whether officially declared or not.

By taking a relationship-centric approach to data, knowledge graphs enable companies to better manage, read, visualize, and analyze data in space and time. Applications include:

  • Viewing product lifecycles – extracting data across product lifecycles to highlight potential issues, such as imminent failures.
  • Network optimization – particularly of complex, interconnected network ecosystems, such as transport or telecommunications networks.
  • Research and development – empowering users to navigate intuitively across different data sources, learning from resources that might otherwise have been overlooked.
  • Pattern recognition – comparing huge volumes of disparate data to spot and flag patterns of interest, such as in pharmaceutical research.
  • Forecasting – predicting future events and enabling what-if forecasting based on analysis of multiple, interconnected data sources.
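
As a minimal illustration of this relationship-centric idea, the sketch below builds a tiny graph of invented company and director relationships (networkx is used here as a lightweight stand-in for a dedicated graph database) and surfaces indirect links between organizations that are never directly connected in any single record.

```python
# Toy illustration of the relationship-centric idea behind knowledge graphs.
# Entities and relations are invented for the example.
import networkx as nx

g = nx.MultiDiGraph()
g.add_edge("HoldCo Ltd", "OpCo Ltd", relation="owns", declared=True)
g.add_edge("Jane Doe", "HoldCo Ltd", relation="director_of", declared=True)
g.add_edge("Jane Doe", "Supplier GmbH", relation="director_of", declared=True)
g.add_edge("OpCo Ltd", "Supplier GmbH", relation="buys_from", declared=True)

# Surface indirect links between two organizations that are never
# directly connected in any single filing.
for path in nx.all_simple_paths(g.to_undirected(), "HoldCo Ltd", "Supplier GmbH", cutoff=3):
    print(" -> ".join(path))
```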

Probabilistic machine learning

A framework in ML where probability distributions are used to represent all uncertain, unobserved quantities in a model (including structural, parametric, and noise-related) and how they relate to the data. The basic rules of probability theory are then used to infer the unobserved quantities, given the observed data. Probabilistic ML approaches are a good choice when the real-world aspects that must be modeled and understood are intrinsically uncertain (e.g., pharmaceutical clinical trials to evaluate drug candidates) or difficult to directly observe (e.g., in the supply chain domain, where complete coverage of good and reliable data across all aspects of the logistics chain is frequently lacking), which often results in incomplete and/or partially described data sets. Probabilistic ML (unlike other ML techniques) can be successfully applied to such problems. These techniques can furthermore be used to incorporate preexisting knowledge or outcomes by adjusting the probability distributions used for the modeling.
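
The minimal sketch below shows the core idea with an invented example: an unknown success rate is represented by a probability distribution that encodes prior knowledge and is then updated with incomplete observed data, yielding an uncertainty range rather than a single point estimate.

```python
# Minimal sketch of probabilistic ML: the unknown quantity (a success rate) is
# represented as a distribution, not a single number, and is updated by the
# rules of probability as data arrives. Prior and observations are invented.
from scipy import stats

# Prior belief about the rate, encoding preexisting knowledge.
alpha, beta = 2.0, 8.0            # weakly informative prior, mean = 0.2

# Observed data: 7 successes in 30 trials (an incomplete, noisy sample).
successes, trials = 7, 30

# Conjugate Bayesian update gives the posterior in closed form.
posterior = stats.beta(alpha + successes, beta + (trials - successes))

print("posterior mean:", posterior.mean())
print("95% credible interval:", posterior.interval(0.95))
```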

2. The business perspective: Turning insight into impact

Whether or not it involves AI, optimal, scalable business intelligence needs to follow a structured process at a strategic level, as shown in Figure 1.

Data/information sources

Clearly, any decision-making process relies on access to the right data sources, whether internal or external. It is vital to get data in order first, before using technology to derive insight from it. There are two major types of knowledge – explicit and tacit:

  1. Explicit knowledge (which further falls into two categories):
    • Structured data – quantitative information organized in a specific format, such as within a spreadsheet or a database. This is the most easily accessible form of data for decision making, and currently the sole focus of most AI use cases.
    • Unstructured data – informal, qualitative information, such as the contents of reports or emails. Unlike databases, this information does not follow a specific format, as it is not designed to be analyzed in the same way, which can mean it is not included in analysis. Using unstructured data gives a deeper context to decision making. For example, analyzing the reports of thousands of experiments will give more detailed insight than simply reviewing whether they met their objectives or not. Unstructured data requires the right tools (e.g., methods from the NLP field) to unlock meaning and value.
IN OUR EXPERIENCE

There is significant untapped potential in analyzing unstructured data and capturing tacit knowledge

Every company has structured data it collects and uses during its daily operations. Getting value from that data is relatively straightforward – as shown by the ADL survey result that 81% of companies consider this type of data as part of their AI programs. Businesses normally have much greater volumes of unstructured data, which is far harder to analyze and gain value from. However, the last three years have seen huge advances in specialist models that can find meaning and sentiment within large volumes of textual data, materially improving predictive models. Capturing tacit knowledge and turning it into accessible data is even harder to achieve. Nearly a quarter of survey respondents do not yet use tacit knowledge.

  2. Tacit knowledge – implicit information not written down, such as the ways that people work and the knowledge/experience they have within their brains. Being able to access tacit knowledge therefore delivers enormous benefits, especially in use cases such as M&A, capturing information in case employees leave, and in industries with aging workforces who are about to retire (e.g., nuclear). It is impossible to measure directly but can be captured and encoded indirectly by recording or analyzing people’s behavior, such as how and with whom they collaborate or their ways of working. It then becomes explicit knowledge that can be analyzed.

Insight

Insight is derived from three interlocking aspects:

  1. Reasoning – the ability to combine and analyze multiple data sources (internal and external, structured and unstructured) concurrently with the goal of achieving a more complete understanding of real-world processes. AI can be used to combine and analyze digital twins (i.e., digital representations of real physical assets, such as the topology of a telecommunications network) in conjunction with other diverse data sets that, for example, describe human behavior and demographics to find novel business opportunities.
  2. Anticipating – being able to predict and anticipate future events more accurately, more quickly, and earlier in the process, especially when they are intrinsically uncertain or difficult to measure. AI empowers more effective foresight through ML-based probabilistic forecasting to anticipate, for example, weather-related disruptions for mass transport providers or the impact of COVID-19 on pharmaceutical clinical trial operations in the rare disease space.
  3. Interacting – a multifaceted aspect that describes the potential of AI to understand and derive insights from human interactions (e.g., through collaborative filtering or sentiment analysis). It includes the ability of AI technologies to accelerate human-to-human interactions through near-real-time machine translation coupled with text-to-speech synthesis, as well as the ability for humans to interact with AI-enabled technologies directly through conversational AI systems and Q&A systems. An example of this is chatbots connected to AI-generated and expert-curated knowledge graphs to inform customers about any possible topic.
IN OUR EXPERIENCE

Many organizations still rely on traditional insight models, which hinder effective decision making

Organizations understand the importance of the three interlocking aspects within insight, but many still believe they can gain reasoning capabilities through traditional, deterministic models. Just 18% of respondents in the Arthur D. Little survey use AI for reasoning, compared to 38% of companies that use AI for anticipating, and 35% that use it for interacting. Given the complexity and scale of today’s data, not deploying AI leads to an incomplete picture, less informed decision making, and missed opportunities.

Impact

The potential of effective, AI-supported business intelligence to achieve business impact falls into three broad categories:

  1. Decision making – using AI to help make faster, more informed, and effective decisions by providing deeper and up-to-date insight, improving the bottom line, giving competitive advantage, and saving time. This is perhaps the largest category of potential business impact.
  2. Innovation – unlocking new opportunities by using AI to identify and provide solutions to problems faster and producing superior new concepts and innovations through better-informed ideation. This enables newer employees to learn faster, while broadening the horizons for more experienced staff by suggesting new possibilities outside the norm.
  3. Productivity – optimizing efficiency and processes through AI via better, more agile planning and control along with identifying opportunities to increase productivity and reduce cost.
IN OUR EXPERIENCE

The best decision is more important than the perfect decision

AI in business is not about always being able to make the perfect decision. Instead, it is about making the best decision based on the data available in order to beat the competition. Autonomous cars are not on our roads yet because the public will demand perfection from the algorithm – any injury caused by a bad decision by a fully autonomous vehicle will be disastrous for that brand. Consequently, keeping a human in the loop helps lower risk and provide reassurance. When using a data-driven machine to help make business decisions, there will always be a need for human judgment – based on experience and wisdom – that can never be codified.

3. How AI can best augment human capabilities

In this Chapter, we illustrate some practical examples of how AI is already being used to create additional business impact through augmenting human capabilities in the above-mentioned three impact categories: decision making, innovation, and productivity. While this Report focuses heavily on decision making, future publications will provide more detail on augmented innovation and augmented productivity.

Augmented decision making

Augmented decision making refers to using an ecosystem of capabilities, powered by AI/ML technology, to enable companies to leverage good data (internal and external) and make better data-driven decisions through risk-based probabilistic modeling and forecasting – leading to improved outcomes. Augmented decision making is relevant in sector-agnostic business contexts where human decisions involve multiple data dimensions (e.g., space, time, money, resources) and/or are very time-critical, carry intrinsic uncertainty, or are very costly. As such, it has very wide application. Essentially, bringing together smart humans and smart technology means better decisions and better outcomes (see Figure 2).

IN OUR EXPERIENCE

AI-based solutions need some investment of time and effort in expert validation before and after full rollout

Many organizations focus purely on putting AI models in place, rolling out and publicizing the solution widely within the company, and then expecting optimal answers from the beginning. When this doesn’t happen, AI is sidelined as not being fit for purpose.

This is why it is key to invest the time and effort required to provide expert validation in a safe environment, through expert feedback at scale. Getting models working effectively with fewer erroneous results means that they can then be rolled out more widely, where there is likely to be a higher level of trust in the solution. Thus, expert validation is a key ongoing activity and not a one-off at the start of the process. Employees must also understand that the more they provide feedback, the better the results. Implementing an AI solution is never the end. Without investing in the evolution of the model and making use of new data sources, new models, and continuing training, even the best model will start to fail.

AI models develop their effectiveness by being trained with the right data. They evolve from experience and feedback, much like humans. Models therefore need to be tested and validated to ensure that they are optimized. That means development should include a focus on validation (by internal and external experts and by expert systems), using human experience to enhance AI models before they are fully deployed.

The following examples demonstrate what is possible with augmented decision making across industries and horizontal applications.

Probabilistic forecasting

Probabilistic forecasting is concerned with modeling forward-looking risk to support management decisions on resource allocation. AI is especially valuable for modeling phenomena that are intrinsically difficult/uncertain (e.g., weather) and that are not well described through data, as the following examples illustrate:

Forward-looking risk for a mass transit provider. The provider was suffering from serious disruption caused by trees falling onto its tracks, particularly during increasingly common extreme weather conditions. More than 5 million trips were made on average every weekday across its network, which stretches to more than 250 km in total. Over 12,000 trees were situated alongside one of the network’s most critical overground lines, 40% of which posed a potential risk of interfering with the tracks or the overhead line systems. This was a business-critical problem that could not be tackled through human-only processes.

The solution was a model built on multiple data sources: external real-time, micro-localized weather forecasts, and internal data such as precise tree locations, geometries, and botanical species information, alongside terrain models with underlying soil types and geology. Through AI analytics, the model could create an interactive, dynamic dashboard that provided the ability to identify, with an accuracy of 87%, the next tree to interfere with operations. The dashboard enabled the existing team to focus on the right areas to reduce expensive, unpopular disruption through proactive tree management.
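
A sketch of this kind of risk-ranking model is shown below: tree and weather features go in, a probability of interfering with operations comes out, and the inventory is ranked so the maintenance team knows where to look first. The features, synthetic data, and choice of a gradient-boosting classifier are illustrative assumptions, not the provider's actual solution.

```python
# Hedged sketch of a tree-risk ranking model: features in, probability of
# interference out, ranked for proactive maintenance. Data are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.normal(15, 5, n),     # tree height (m)
    rng.normal(3, 1, n),      # distance to track (m)
    rng.uniform(0, 30, n),    # forecast gust speed (m/s)
    rng.integers(0, 3, n),    # soil type category
])
y = rng.integers(0, 2, n)     # historical interference label (synthetic)

model = GradientBoostingClassifier().fit(X, y)

# Score the current tree inventory under tomorrow's weather forecast and
# rank by predicted probability of interfering with operations.
risk = model.predict_proba(X)[:, 1]
highest_risk = np.argsort(risk)[::-1][:10]
print("tree ids to inspect first:", highest_risk)
```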

Probabilistic forecasting for a pharmaceutical company. Pharmaceutical companies must be able to forecast patient enrollment for clinical trials, particularly when developing drugs for rare diseases where the estimated prevalence of cases can be as low as 20 people in 100,000. Trials are costly and high-risk, with failure to hit enrollment targets delaying progress, adding enormously to expense, and lengthening time to market.

Current deterministic methods of forecasting enrollments often assume static enrollment gradients, which result in simple and crude linear forecasts. For rare diseases, these manual forecasts are unreliable due to the low prevalence of potential patients and are not easy to adapt to changing circumstances as seen during the evolution of the COVID-19 pandemic.

Through a probabilistic forecasting model, ingesting both internal and external data sources, a pharmaceutical company now has access to continuous, probabilistic forecasts of patient enrollment trajectories. These capabilities can be refined by taking in competitor information, geospatial data on expert networks, as well as COVID-19 disease progression forecasts. This enables better tactical and strategic decisions to be made about next steps in the clinical trial.
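
The sketch below illustrates the probabilistic idea in this context: rather than a single linear projection, many plausible enrollment trajectories are simulated and summarized as a probability of hitting the target plus quantile bands. Site counts, rates, and the simple Poisson assumption are invented for illustration and are not the company's actual model.

```python
# Minimal sketch of a probabilistic enrollment forecast via Monte Carlo
# simulation. All figures are invented placeholders.
import numpy as np

rng = np.random.default_rng(42)
weeks, n_sims = 52, 5000
n_sites = 12
# Uncertain per-site weekly enrollment rate for a rare disease (patients/week).
site_rates = rng.gamma(shape=0.5, scale=0.1, size=(n_sims, n_sites))

# Poisson enrollment across all sites per week, accumulated over the horizon.
weekly = rng.poisson(site_rates.sum(axis=1)[:, None], size=(n_sims, weeks))
cumulative = weekly.cumsum(axis=1)

# Probability of hitting the enrollment target by week 52, plus forecast bands.
target = 30
print("P(target met):", (cumulative[:, -1] >= target).mean())
print("week-52 enrollment, 10th/50th/90th pct:",
      np.percentile(cumulative[:, -1], [10, 50, 90]))
```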

Context-aware forecasting

Context-aware forecasting refers to cases where there is plentiful historical data but a changing, unpredictable market context, such as forecasting and predicting material prices, as the following example illustrates:

Price prediction for a petrochemical company. Volatility and uncertainty in commodity prices are increasing. However, companies often use simple linear models for forecasting, which means that inflections cannot be anticipated.

Model inputs are often kept static, so changing circumstances cannot be easily considered. These models consume no contextual data and operate solely on historical data. This often leads to failure when the market is affected by new or sudden political or economic changes, such as an unforeseen emergence of trade wars or pandemic outbreaks.

By building a continuously learning forecasting approach that consumes both structured and unstructured data (e.g., trade press articles, commodity discussion forums, and social media), a context-aware model was created for a petrochemical company that could more accurately forecast commodity prices, even in very volatile market conditions. This allowed the company to make better decisions on hedging and adjusting production, leading to an estimated upside of over $20 million per annum.
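
As a minimal sketch of the context-aware pattern, the example below feeds a text-derived market-sentiment signal into a forecasting model alongside lagged prices, so that shifts in context can move the forecast even when recent prices look stable. The data, features, and random-forest model are invented placeholders rather than the approach actually deployed.

```python
# Hedged sketch of context-aware forecasting: lagged prices plus a sentiment
# signal derived from text (e.g., trade press). Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 500
lag_price_1 = rng.normal(1000, 50, n)         # last week's price (USD/ton)
lag_price_4 = rng.normal(1000, 60, n)         # price four weeks ago
news_sentiment = rng.uniform(-1, 1, n)        # e.g., from NLP on trade press
next_price = lag_price_1 + 80 * news_sentiment + rng.normal(0, 20, n)

X = np.column_stack([lag_price_1, lag_price_4, news_sentiment])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, next_price)

# Forecast under a hypothetical scenario: stable prices but sharply negative
# sentiment (e.g., emerging trade-war coverage).
print(model.predict([[1000.0, 995.0, -0.8]]))
```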

Graph-based ML

Graph-based learning is used to analyze and optimize real-world assets that have a graph-like shape, such as utility grids, supply chains, or a rail/telecoms network. This enables more efficient and effective network optimization and disruption recovery, as illustrated here:

Network optimization for a telecoms provider. The provider wanted to identify opportunities for OPEX reduction and upselling. By developing an expandable data model and visualization capabilities, it was possible to create targeted algorithms to uncover areas for optimization. Through having a digital representation of the real world (i.e., expressing the network topology as a graph model), it was possible not only to very quickly answer the question of where to find OPEX-reduction opportunities, but also to solve a variety of other use case problems by superimposing other layers (up to 20) of data on the original data model.

Results were available in just a few weeks, leading to early realization of cost savings and new revenue opportunities. The model also allowed scenarios to be simulated easily and efficiently, such as different network build-outs and different competitor scenarios.
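
The toy sketch below shows the underlying pattern: the network topology is expressed as a graph, an additional data layer (here, link utilization) is superimposed on it, and a simple query surfaces candidates for consolidation. The topology, attributes, and threshold are invented for illustration.

```python
# Toy sketch: network topology as a graph, with a utilization layer
# superimposed, queried for OPEX-reduction candidates. Figures are invented.
import networkx as nx

net = nx.Graph()
net.add_edge("core-1", "agg-1", capacity_gbps=100, utilization=0.12)
net.add_edge("core-1", "agg-2", capacity_gbps=100, utilization=0.71)
net.add_edge("agg-1", "access-7", capacity_gbps=10, utilization=0.05)
net.add_edge("agg-2", "access-9", capacity_gbps=10, utilization=0.64)

# One layer, one question: which links are chronically underutilized and
# therefore candidates for consolidation (OPEX reduction)?
underused = [
    (u, v, d) for u, v, d in net.edges(data=True) if d["utilization"] < 0.15
]
for u, v, d in underused:
    print(f"review link {u}-{v}: {d['utilization']:.0%} of {d['capacity_gbps']} Gbps")
```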

Regulatory compliance

In regulatory compliance applications, NLP techniques are used to augment human assimilation and understanding of spoken and written text. This is especially valuable in tightly regulated industries such as pharmaceuticals and nuclear, where ensuring compliance can be extremely time-consuming and costly. Examples include:

Regulatory document analysis in highly regulated, safety-critical industries. For major safety-critical infrastructure projects run by multinational consortia, failure to submit the right documents can halt building work, leading to lengthy delays and millions of dollars of cost overruns. Documents need to be submitted in the regulator’s local language, which is often different from the project’s working language.

By using NLP, Arthur D. Little created a model to read, extract, and analyze regulatory documents in a safety-critical project’s local language, translate the documents into English, map relationships, and highlight any potentially contentious areas. The model was tested and proved in a project containing over 5,000 different tangible and actionable requirements, all of which needed to be met. Not only did this model streamline the regulatory approval process, but it also created a methodology that was fully reusable across other legal contexts and use cases.

Event detection for pharmaceutical regulatory compliance. Pharmaceutical companies have a regulatory duty to investigate any evidence of adverse reactions to drugs, even if this data is contained in unstructured conversations such as spoken customer feedback.

One company had an archive of around 1 million telephone calls that required investigation. By automatically transcribing speech to text and analyzing for specific keywords, it was possible to identify conversations of interest that could then be manually investigated in more depth by human experts, saving months and even years of time.
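
The sketch below illustrates the flagging step in such a pipeline: transcribed call text is scanned for adverse-event terms and matching calls are queued for human review. The keyword list and transcripts are invented, and a production system would rely on trained NLP models rather than a plain keyword match.

```python
# Minimal sketch of keyword-based event detection over (already transcribed)
# call text. Keywords and transcripts are invented for illustration.
import re

ADVERSE_EVENT_TERMS = ["rash", "dizziness", "nausea", "hospitalized", "allergic"]
pattern = re.compile("|".join(ADVERSE_EVENT_TERMS), flags=re.IGNORECASE)

transcripts = {
    "call_0001": "Caller asked about repeat prescription timing.",
    "call_0002": "Caller reported dizziness and a rash after the second dose.",
}

# Flag calls containing adverse-event terms for manual expert review.
flagged = {
    call_id: sorted(set(m.lower() for m in pattern.findall(text)))
    for call_id, text in transcripts.items()
    if pattern.search(text)
}
print(flagged)   # -> {'call_0002': ['dizziness', 'rash']}
```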

Augmented innovation

In the context of augmented innovation, a wide set of AI techniques and approaches can be utilized to enhance and improve innovation processes with the goal of producing new-to-the-world concepts and approaches. In engineering contexts, this can be achieved, for example, by combining existing digital twins with AI-based simulation and optimization techniques that allow experimentation in the virtual world that can then be applied to physical processes/machines. In the pharmaceutical domain, generative AI models (i.e., models that are capable of creating new data instances) are increasingly used to create new-to-the-world molecular structures in silico and evaluate their potential use as future drug compounds.

Use case: McCormick

McCormick is a global leader in flavoring products, operating across 160 countries and territories. As part of its offering, the company creates new flavorings for customers to incorporate into their products. The aim is to provide a range of options that spans both low-innovation, staple products with a high potential for market success and more innovative novelty products that may have a limited sales time frame (e.g., around a specific calendar event) to attract consumers. All need to be created within tight, two-week timescales.

Previously, creating new flavors was a manual process, with product developers combining elements and choosing potential options. Given the multiple elements involved in each flavoring, the range of possible combinations is enormous, making the selection process extremely complex. Essentially, it is impossible to manually explore and prioritize all the potential options to provide a range of options within the available time frame. This holds back innovation and hampers successfully meeting customer needs.

McCormick now uses AI to explore all options/possible combinations available to provide customers with a range of flavors that cover “high” market success and “high” novelty opportunities (see Figure 3). These flavors are automatically analyzed to check that they are in line with practical constraints, such as supply chain challenges, with AI predicting their potential market success and novelty value. The aim is to create a range of real options along the yellow curve in Figure 3, offering flavors with both success and novelty.

“McCormick’s use of artificial intelligence highlights our commitment to insight-driven innovation and the application of the most forward-looking technologies to continually enhance our products and bring new flavors to market,” said McCormick Chairman, President and CEO Lawrence Kurzius in a press release announcing the use of the technology.

In this use case, AI augments, rather than replaces, human product developers, providing them with a narrower range of viable options. This assists product developers at all levels:

  • It helps inexperienced staff members learn faster, moving them up the learning curve and enabling them to produce lower novelty products with a higher chance of market success.
  • It nudges experienced staff members to move beyond their comfort zone and consider less traditional options, enabling them to focus on higher novelty products to maximize the chance of market success.

Pairing McCormick’s global expertise, particularly that of its research and product development teams, with leading AI research helped McCormick accelerate the speed of flavor innovation by up to three times and deliver highly effective, consumer-preferred formulas.
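
The toy sketch below captures the exploration-and-scoring idea behind this use case: candidate ingredient combinations are enumerated, each is scored by placeholder predictors of market success and novelty, and only the Pareto-efficient set, the analogue of the yellow curve in Figure 3, is passed to product developers for review. Ingredients and scoring functions are invented; McCormick's actual models are not public.

```python
# Toy sketch of combinatorial exploration with success/novelty scoring.
# Ingredients and the two scoring functions are invented placeholders.
from itertools import combinations
import random

random.seed(3)
ingredients = ["smoked paprika", "lime", "honey", "chipotle", "ginger", "basil"]
candidates = list(combinations(ingredients, 3))

def predicted_success(combo):    # placeholder for a trained success model
    return random.random()

def predicted_novelty(combo):    # placeholder for a trained novelty model
    return random.random()

scored = [(c, predicted_success(c), predicted_novelty(c)) for c in candidates]

# Keep combinations not dominated on both success and novelty.
pareto = [
    (c, s, n) for c, s, n in scored
    if not any(s2 >= s and n2 >= n and (s2, n2) != (s, n) for _, s2, n2 in scored)
]
for combo, s, n in sorted(pareto, key=lambda x: -x[1]):
    print(f"{'/'.join(combo)}: success={s:.2f}, novelty={n:.2f}")
```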

IN OUR EXPERIENCE

AI allows smart, experienced people to make even smarter decisions

When it comes to augmenting humans, a key issue is that experienced people can feel threatened. They worry that inexperienced colleagues will match their performance by relying on the tool without the underlying skills, while they don’t see the possibilities AI offers for taking them into new areas. In fact, these tools can boost their knowledge and creativity, strengthening their internal credibility and overall performance. AI essentially allows smart people to make smarter decisions. Taking a phased approach that demonstrates value early will help overcome this challenge, building trust and convincing people to invest their time and efforts in such programs.

Augmented productivity

Augmented productivity refers to using AI to optimize the efficiency of assets and people. This includes using AI techniques such as reinforcement learning to suggest improvements to industrial processes that will meet predefined objectives, such as reducing energy consumption, lowering material waste, improving quality, or cutting machinery downtime.

Use case: Ammonia production

Ammonia is a key raw material for a range of chemical products, from fertilizers to plastics. Created industrially by combining hydrogen and nitrogen under very high pressure, the production process and systems are complex, with multiple variables involved.

The largest cost in the process is the electricity used to power the compressors that cool and compress the gases involved. Given the complexity of the process, optimizing energy use manually is difficult without impacting yield, resulting in cost variations of +/-15% per ton during production.

Consistently reducing energy costs is therefore a key aim for producers. Neural network-based AI modeling for a chemical company allowed analysis of large volumes of process data in order to identify optimal energy consumption through entropy analysis. This model was then validated with fresh, unknown data before being deployed as an optimizer to support decision making within ammonia production.
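
The sketch below shows one common form of this optimizer pattern: a neural-network surrogate of energy consumption is fitted to historical process data and then searched over candidate setpoints for the lowest predicted consumption. The process variables, synthetic data, and crude grid search are illustrative assumptions; the plant's actual model, constraints, and entropy analysis are not reproduced here.

```python
# Hedged sketch of a surrogate-model optimizer: fit a neural network to process
# data, then search feasible setpoints for low predicted energy use.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n = 2000
pressure = rng.uniform(150, 250, n)      # bar
temperature = rng.uniform(350, 500, n)   # deg C
recycle_ratio = rng.uniform(2.0, 4.0, n)
energy_kwh_per_ton = (
    2.5 * pressure + 1.2 * temperature - 80 * recycle_ratio + rng.normal(0, 30, n)
)

X = np.column_stack([pressure, temperature, recycle_ratio])
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                         random_state=0).fit(X, energy_kwh_per_ton)

# Crude setpoint search over a grid of feasible operating points; a real
# deployment would add yield and safety constraints before recommending.
grid = np.array([[p, t, r]
                 for p in np.linspace(150, 250, 11)
                 for t in np.linspace(350, 500, 16)
                 for r in np.linspace(2.0, 4.0, 9)])
best = grid[np.argmin(surrogate.predict(grid))]
print("suggested setpoints (pressure, temperature, recycle):", best)
```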

Using AI for augmented productivity delivered clear benefits in three key areas:

  1. It provided the ability to monitor energy consumption in real time.
  2. Production costs were reduced by 7% thanks to lower electricity spend.
  3. Yield per kWh was maximized through optimized production.

Mapping AI-based insights across industries

Given the wide applicability of AI, one of the biggest obstacles is choosing where to start and which use cases to prioritize. Companies may ask: What will work best for the business? Where are our competitors investing? What are the likely “must-win battles” based on the experiences of more advanced sectors?

To provide answers to these questions, we used results from our global, cross-industry survey to develop a heat map, highlighting the types of data used most often and the insights companies are benefiting from. The data also reveals major differences in AI proficiency between industries and shows sector-specific focus areas.

Figure 4 shows which types of data are being analyzed most frequently by respondents across industry sectors, and the types of insight being obtained. All respondents are active users of AI – lower scores mean that respondents only apply the data occasionally; higher scores mean they do this more generally or even pervasively. The research revealed three key conclusions:

  1. Healthcare and retail are the most prolific users of AI. This is unsurprising given that both sectors have pioneered cutting-edge applications, such as image interpretation and knowledge graphs, that are now found in many other places. And (online) retail has always been at the forefront of advanced analytics and big data, working to convert enormous volumes of consumer intelligence into additional sales, higher margins, and greater innovation.
  2. Industrial sectors like manufacturing, chemicals, and energy critically depend on technological know-how spread over a large resource base. This explains why these sectors focus much of their AI efforts on interacting (collaboration and learning). Optimizing processes and driving successful innovation in large industrial companies means bringing together complex information and dispersed expertise. AI helps deliver this successfully.
  3. Very few respondents (16%) indicated that they believe they already use AI/ML at its full potential. Even in more “AI-mature” sectors like healthcare, this number is just 25%. This reflects both the rapid pace of development in AI and the early position of many companies in their AI journey.

The relative frequency of application of AI to different data/application combinations across all respondents is shown in Figure 5.

For example, the most common application across all industries is unsupervised exploration of structured data with the aim of improving interaction (indicated in darker blue).

IN OUR EXPERIENCE

There are already proven AI methodologies and applications that today remain largely untapped

The most important learning from this analysis is that there are already proven methodologies to generate many different insights from a wide variety of data sources, most of which remain largely untapped. Indeed, many techniques have been around for decades; it is the power of modern computing and access to vast quantities of data that are the key drivers. This, in turn, means that successful adoption of the right AI/ML applications can be a source of lasting differentiation for companies in any sector. The application of AI/ML for reasoning and anticipating is currently especially underused by companies.

A typical example would be companies analyzing enormous volumes of customer data to identify and offer better products and services.

It is also striking to see that few respondents (18%) mentioned reasoning as a priority application for AI/ML. We have found that many companies still use more traditional/deterministic analytics, such as spreadsheets, to process large data sets. We believe this will change as the need increases to process higher amounts of complex and dynamic data sets.

4. Concerns and risks

AI can be a controversial topic, with many public concerns raised about the data sources used and the potential bias of algorithms when making decisions. Within businesses, employees can be afraid of what it means for their jobs, while executives may struggle to believe that the benefits outweigh the cost and reputational risks. Organizations must therefore understand concerns at legal, ethical, and operational levels and put in place a strategy that overcomes them.

Legal and ethical concerns

Consumers have high expectations when it comes to their data. They expect it to be kept private and secure, and only processed for specific uses that benefit them. Regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act safeguard these rights, with heavy fines if they are broken. Failure to treat customer data satisfactorily also leads to reputational damage and can impact brand, share price, and future sales.

Moreover, there is an ethical dimension to how decisions are made and how data is used to train algorithms, with multiple examples of AI inadvertently discriminating against particular groups because of both incomplete training data and a lack of human oversight. Executives are rightly concerned about their responsibility, particularly with “black box” AI that automatically makes decisions without human involvement. In these cases, it is vital to know who is responsible for decisions made by the machine and how the decisions have been reached.

Operational concerns

AI programs bring two main operational challenges. First, employees may be suspicious of the potential impact on their jobs and worry that they will be replaced or downgraded by an AI algorithm. Given that they often possess much of the data (particularly tacit knowledge) required to train AI, this concern feeds into the second challenge – access to the right data. Often, data is scattered across organizations, is held in silos, and may not be of high quality or in the right format to be used effectively. In turn, this can impact the results received as algorithms are not trained on sufficient, high-quality data to drive results.

IN OUR EXPERIENCE

A strong company-wide data framework and governance strategy is essential to implement AI programs

For AI to succeed, organizations require a strong, company-wide data framework and governance strategy that clearly outlines how data is collected, analyzed, and used. This framework ensures that legal requirements are met and that different departments across the business are bought into the process. A common platform ensures that data privacy, customer information, and access control are much easier to manage and enables one common view of decisions, as opposed to multiple opposing views from different data science teams or departments.

With augmented decision-making use cases, there is always a human in the loop – there is no AI black box making decisions autonomously. Employees retain responsibility for decisions made and, if required, can ignore insight provided by AI if it is irrelevant, unethical, or immature. This is important, as AI itself may not initially deliver perfect results. It will improve over time as it ingests more data and becomes better trained. AI will learn from the humans it works with, with their input reinforcing and accelerating its training, increasing the usefulness of its results as it learns.

On an employee level, businesses need to focus on the innovators within the company who have the vision as to how the solution will make them better at their jobs. These are the people who will invest their experience into further validating the model, seeing longer-term benefits. Once you build trust with this community, it will reassure more skeptical employees and, over time, show the benefits of their involvement in AI programs.

5. Approaches that can quickly deliver superior business value

Successfully building capabilities to augment business impacts through AI, and creating a foundation for future AI deployments, requires a holistic strategy and process. It goes far beyond simply employing data scientists and beginning a few pilot projects.

To achieve scale, companies need an approach that encompasses four key components:

  1. All-in-one, multidisciplinary teams.
  2. A platform approach to deliver industrialization of AI.
  3. Understanding and integration of the right data sources.
  4. Investment in the right technologies and platform.

All-in-one, multidisciplinary teams

The resources required to develop augmented insight capabilities span sector and technical expertise:

  • Business consultants – can identify opportunities where AI will deliver tangible value.
  • Solution architects – can determine how to integrate AI models into the existing technology landscape, enabling internal data to be provided systematically in the best format and in a timely manner to enable near-real-time decision making.
  • Data scientists – have the ability to develop and refine the AI models and algorithms required for specific applications.
  • Software engineers – can create and integrate the right technologies to build robust, business-focused applications.

These skills can sometimes be spread across an organization, hampering collaboration and slowing progress. Bridging these silos, for example by creating centers of excellence (CoEs) or working with relevant third-party providers, accelerates AI’s impact. Such approaches also build a central repository of experience and capabilities that can then be reused on subsequent projects, shortening time to value. That enables companies to leverage the knowledge and experience of current and previous data scientists and, for example, preserve investments and maximize skills transfer. Possible approaches include:

  • Third-party incubation. For example, using a trusted third party to:
    • Carry out initial pilot projects to demonstrate value to the business without requiring up-front investment in the internal team and platform.
    • Set up the core platform and help recruit high-quality data scientists to create the operating community. Given the rise of AI, many people may claim to have AI skills but lack experience. Using a knowledgeable third party gives insight and support to recruit the right fit – with the specific AI skills you require for your team.
  • Third-party consolidation. If you have already embarked on initial AI projects, employing a third party to consolidate existing disparate investments and pilots enables you to create a common team, operating model, and platform. The advantage of this approach is that external providers can review the current setup dispassionately, focusing on the optimal outcomes and common needs rather than becoming involved in departmental politics.
  • A hub-and-spoke model around a CoE. This approach balances the advantages of a central platform with the diverse needs of local/departmental teams. The CoE utilizes the company-wide platform, models, and data sources, but allows departments to create and deploy applications based on these models that meet their specific local use cases.

A platform approach to deliver industrialization of AI

One-off AI projects may display promising results, but often learnings are not shared, slowing momentum and meaning new projects are forced to begin from nothing. While pilot projects are a useful method of proving the business value of AI, they need to be structured so that all original IP is captured, with project components able to be catalogued and reused in future scenarios.

We recommend businesses adopt a project-product-platform approach:

  • Project. Projects are often an individual use case with disparate information that is not structured and not systematically available. The result is that a project may show value, but it is a one-time run scenario. This success helps gain business buy-in to the solution/technology and often results in the approval of additional investment to enable productization.
  • Product. Once value is seen in the pilot project, then investment should be made to productionize the approach. Often, this does not impact the AI/ML model but strengthens the underlying technology, such as ensuring that all data sources are automatically available in near real time, enabling faster, more informed decision making. These robust applications require minimal support and can be placed in the hands of business users to drive value.
  • Platform. Ensure that subsequent pilots/products are developed on the same platform to enable consistency and reuse of a library of existing models and data sources, and that this is embedded into the organization, underpinning all activities. This phase is more about adoption engineering, organizational change, and digital transformation than AI technology. Essentially it is about making AI systematic and part of core operational procedures, ensuring that end users are making the most of the outputs and investing in validating and enhancing the model for the future.
IN OUR EXPERIENCE

The project-product-platform approach is important to industrialize AI

Following the project-product-platform approach helps deliver buy-in from across the business. By starting with an individual use case and scaling from there, trust and belief in the possibilities of AI increases, releasing further resources to move forward on the AI journey.

Understanding and integration of the right data sources

As detailed in Chapter 2, ensuring access to the most comprehensive range of relevant internal and external data sources delivers better outcomes. Focusing solely on a subset of data means that decision making will not be based on an understanding of the wider context, leading to less focused and potentially inaccurate results.

Augmented approaches therefore need to encompass unstructured as well as structured data, along with tacit knowledge. They should bring in relevant external data feeds, such as weather forecasts, independent economic insights, news reports, or social media, as required. Ensure that you have the right tools in place to both identify these sources and integrate them with your AI models quickly and seamlessly.

Investment in the right technologies and platform

Clearly, AI success requires a strong technology framework that uses the right components to support and deliver repeatable results. This cloud-based core technology platform should provide the capabilities that underpin all activities, from processing to data sources and models (see Figure 6). This framework creates one common source of the truth that all models operate from, ensuring both consistency and that investment only needs to be made once across the organization.

Product-specific elements, such as algorithms, data feeds, and dashboards, can then be built on top of these underlying capabilities, while still following set guidelines and formats, such as for access control and user experience. This speeds development and enables reuse of data science models by the entire team, avoiding any overlaps and duplication of effort. It also removes potential usability and support issues, as a common experience aids user adoption with minimal training.

6. Getting started

This Report has aimed to provide business and technical leaders with a broad understanding of how AI can augment business impacts to deliver tangible value across their organizations, illustrated with relevant, real-world use cases.

Putting this into practice requires a strategic approach that covers the following broad steps:

  1. Identify priority opportunities. Identify and prioritize potential opportunities where AI-enhanced intelligence could augment existing human capabilities to deliver new business value. Consider use cases across the three impact categories of decision making, innovation, and productivity, and applications across reasoning, anticipating, and interacting. Choose use cases that solve a bounded problem and can be fed large quantities of readily available, relevant, quality data.
  2. Build capabilities, through a mix of internal/external resources. Build capabilities in a holistic way, considering not just data science and AI expertise, but also solution architecture, software engineering, and, importantly, business consulting to ensure a focus on business value creation. Consider both external and internal resources, including developing CoEs to ensure common frameworks, platforms, and approaches.
  3. Launch pilot project(s) to then turn into products and platforms, creating repeatable, scalable AI capabilities. Plan actively for moving beyond the pilot project phase – this is where many organizations get stuck. The project-product-platform approach is a useful model to ensure repeatable success. Always design proofs of concept with an eye to how they would work in the “real” business were they to prove successful.
  4. Drive for digital transformation and wider adoption of AI across the business. AI is clearly just one of many digital technologies. To reap the full benefits, AI needs to be integrated as part of the broader process of digital transformation. Quality, secure, and easily accessible data in high volumes will always be an essential component of a digital ecosystem that will support game-changing AI solutions. A robust, industrialized data management strategy must form the bedrock of any digital transformation that hopes to leverage AI.

Thirty years ago, making use of the Internet felt like a business choice, whereas now it is just as much a part of the business as the people, the product, and the general ledger. Similarly, AI will soon be a fundamental part of every successful organization, and businesses hoping to thrive need to be ready.

Glossary

  • Chatbots – software applications that can be used to have a “conversation” with a user via text or speech recognition. They are designed to convincingly simulate the way a human would behave as a conversational partner. Together with systems like virtual agents, they belong to the family of conversational AI methods.
  • Collaborative filtering – a technique used by recommender systems, based on the idea that people often get the best recommendations from someone with tastes similar to themselves (e.g., when Amazon recommends a product based on “people also viewed/bought”).
  • Deep learning – a subfield of machine learning that utilizes deep, multilayered neural network architectures for problem solving. Deep learning models are capable of representation learning and can be used for supervised, semi-supervised, and unsupervised tasks.
  • Graph-based machine learning – branch of machine learning where graphs are the data. Graphs are mathematical structures made from nodes (vertices) and edges, used to model pairwise relations between objects. Common examples of graph ML are node classification and link prediction. A classic example is social network analysis, where people are the nodes and the social connections between them are the edges.
  • Knowledge graphs – a means with which to natively capture and store vast amounts of data relationships in ways that are understandable by both humans and machines.
  • Natural language processing (NLP) – field of AI that gives machines the ability to process and derive meaning from linguistic inputs (i.e., natural language text and/or speech).
  • Observational learning (through reinforcement learning) – a type of learning that combines observing, retaining, and possibly replicating or imitating the behavior of an agent. Reinforcement learning coupled with deep learning technologies has been shown to deliver promising results when it comes to learning by observation.
  • Probabilistic machine learning – a framework in machine learning where probability distributions are used to represent all the uncertain unobserved quantities in a model (including structural, parametric, and noise-related) and how they relate to the data.
  • Recurrent neural network (RNN) – a class of neural networks that allows previous outputs to be used as inputs while maintaining hidden states. This makes the networks particularly useful for modeling sequence data such as time series or natural language. Long short-term memory (LSTM) and gated recurrent unit (GRU) networks are widely used variants of RNNs.
  • Reinforcement learning – considered one of the basic paradigms of machine learning (alongside supervised, semi-supervised, and unsupervised learning). Concerned with how intelligent agents take actions in an environment to maximize a reward, where the programmer defines the reward/penalty structure but does not explicitly define how the agent completes the task.
  • Supervised learning – another basic paradigm of machine learning where labeled data is used either for classification or regression purposes.
  • Self-supervised learning – can be regarded as an intermediate between supervised and unsupervised learning. Parts of each sample are used as labels for a task that requires a good degree of comprehension to solve: the labels are extracted from the sample itself, and the model learns to predict them, generating a useful representation of the data in the process.
  • Temporal convolutional networks (TCN) – a class of neural networks that applies convolution layers, more commonly associated with computer vision, to time series tasks.
  • Unsupervised machine learning – another basic paradigm of machine learning that aims at identifying patterns in data sets that do not have any labels. Often used for data exploration prior to model building.
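
To make the graph-based machine learning entry above more concrete, the short Python sketch below ranks candidate links in a toy social graph by counting shared connections. This is a minimal, illustrative sketch only: the graph, the names, and the common-neighbor heuristic are hypothetical and are not taken from the Report; production graph ML would typically rely on learned models and dedicated graph libraries.

from itertools import combinations

# Toy social graph as an adjacency list: nodes are people, edges are connections.
graph = {
    "ana":   {"ben", "carla"},
    "ben":   {"ana", "carla", "dev"},
    "carla": {"ana", "ben"},
    "dev":   {"ben", "elif"},
    "elif":  {"dev"},
}

def common_neighbor_score(a, b):
    # Score a candidate link by the number of neighbors the two nodes share.
    return len(graph[a] & graph[b])

# Rank all currently unconnected pairs; higher scores suggest likelier future links.
candidates = [
    (a, b, common_neighbor_score(a, b))
    for a, b in combinations(graph, 2)
    if b not in graph[a]
]
for a, b, score in sorted(candidates, key=lambda t: -t[2]):
    print(f"{a} - {b}: {score} shared connection(s)")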
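
Similarly, the probabilistic machine learning entry above can be illustrated in a few lines of Python. The sketch below treats an unobserved rate (for example, the share of screened patients who enroll in a trial) as a probability distribution that is updated by observed counts; all figures are hypothetical and serve only to show how remaining uncertainty, rather than a single point estimate, is carried through to the output.

import random
random.seed(7)

# Hypothetical observations: 9 enrollments out of 60 screenings.
successes, trials = 9, 60

# A flat Beta(1, 1) prior updated with the observed counts gives a
# Beta posterior over the true, unobserved enrollment rate.
alpha, beta = 1 + successes, 1 + (trials - successes)

# Sample from the posterior and summarize the remaining uncertainty.
samples = sorted(random.betavariate(alpha, beta) for _ in range(10_000))
mean = sum(samples) / len(samples)
lower, upper = samples[250], samples[9_750]  # approximate central 95% interval

print(f"posterior mean rate: {mean:.3f}")
print(f"approx. 95% credible interval: [{lower:.3f}, {upper:.3f}]")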
 

DOWNLOAD THE FULL REPORT
