A Practical Framework for AI Adoption: A Five-Step Process

Is Artificial Intelligence (AI) the new elixir of all modern problems? Or is it a double-edged sword, sometimes destructive and at other times life-saving?

The fact is this: with the right framework, the potential of AI can be harnessed.

According to Gartner, around 37% of organizations are implementing some form of AI. Yet, according to a survey conducted by EY, only about 20% of firms have strategic AI capabilities. Very few organizations have successfully harnessed the real power of AI to create meaningful impact.

How can AI be harnessed? What should the framework be? A paper published by the McKinsey Global Institute (MGI) recommends five areas that organizations need to focus on.

These areas are not silos. They are interrelated and enmeshed. Each of these areas needs to work together for the impact to be visible.

Being a data strategist has its advantages. In this article, I elaborate on the practical approaches for implementing this framework.

1. Identify the right use-cases

The organization has decided to embark on the AI journey. The first task is to identify the right use-cases. The tried and tested method of divergence-convergence works well: brainstorm to explore as many AI use-cases as possible, then converge to shortlist the top three.

How can the use-cases be converged? What are the dimensions to explore?

I suggest the following dimensions:

  1. Business impact: Does this use-case have a tangible business impact? Quantify it.
  2. Technical feasibility: Does the current technology landscape support the implementation of this use-case? Create a technology map.
  3. Data availability: Are there relevant data points available to deliver the use-case? Explore those.

Mapping the use-cases on these three dimensions provides a use-case map of what is feasible and what is not. An example of this exercise is as follows:

[Use-case map: candidate use-cases plotted against business impact, technical feasibility, and data availability]

In the above use-case map, use-cases #6 and #7 score well on all three dimensions. Use-case #3 is the next candidate, although not all the data it requires is available.
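For readers who want a concrete starting point, here is a minimal sketch in Python of how such a use-case map could be scored. The use-case names, scores, and the simple sum-based ranking are illustrative assumptions, not prescriptions:

    # Score candidate AI use-cases on the three dimensions (1 = low, 5 = high).
    # The names and scores below are illustrative placeholders.
    use_cases = {
        "churn-prediction":   {"business_impact": 5, "technical_feasibility": 4, "data_availability": 4},
        "invoice-extraction": {"business_impact": 4, "technical_feasibility": 5, "data_availability": 5},
        "demand-forecasting": {"business_impact": 5, "technical_feasibility": 4, "data_availability": 2},
    }

    # Rank by total score; a use-case weak on any dimension drops down the list.
    ranked = sorted(use_cases.items(), key=lambda kv: sum(kv[1].values()), reverse=True)

    for name, scores in ranked:
        print(f"{name}: total={sum(scores.values())}, detail={scores}")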

A question that lingers is: how much data is enough?

There is no clear-cut answer to this question. A useful rule of thumb is to ask:

Are the available data points enough to build a minimum viable model?

If the answer to the above question is “yes,” then the recommendation is to go ahead and consider the use-case for potential development.
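One pragmatic way to answer it is to train a quick baseline and check whether it beats a naive guess. The sketch below uses scikit-learn with synthetic stand-in data; in a real engagement, you would substitute the use-case's own feature table:

    from sklearn.datasets import make_classification
    from sklearn.dummy import DummyClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for whatever data the use-case actually has.
    X, y = make_classification(n_samples=500, n_features=10, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # Naive baseline: always predict the majority class.
    naive = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

    # Minimum viable model: a simple, cheap-to-train classifier.
    mvm = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    print(f"naive accuracy: {naive.score(X_test, y_test):.2f}")
    print(f"model accuracy: {mvm.score(X_test, y_test):.2f}")
    # If the model barely beats the naive baseline, the available data (or
    # features) are probably not yet enough to justify the use-case.

If the quick model shows a clear lift over the naive baseline, the data is likely enough to start; if not, invest in data first.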

2. Create an effective data platform

Data is the new oil. This new oil is spilled all around the organization. There is a need to extract value from it. There is a need to refine it. AI and data have a symbiotic relationship: they need each other to flourish and thrive.

Organizations have tried to create data platforms for analytics since time immemorial. From Enterprise Data Warehouses and Data Marts to trendier Data Lakes, all have tried to tame the beast. New data architecture patterns emerge as data technologies develop further.

In 2017, I wrote a blog post (Demystifying Data Lake Architecture) that highlights the critical components of a useful data platform for AI. Data technologies have evolved since then, but the core remains the same. These concepts can still be applied.

However, a question that needs some thinking is the following:

What are the principles of a data platform to harness AI?

Three guidelines that I suggest are as follows:

  1. Store all data in its raw form: The nature of data is tricky. One never knows how it will be used until one actually uses it. The best strategy is to store all of it in its native format. No transformation. No overheads. Just raw storage. With the advent of cloud technologies, data storage is cheap, and plenty of storage tier options are available. For example, in Azure, one can store the first 50 TB of data across its tiers (premium, hot, cool, archive) for an average cost of about $0.044/GB/month, i.e., roughly $44/TB/month. As a guideline, I would recommend storing data for at least the past five years. After that, if found useless, it can always be archived.
  2. Decouple storage and compute: Storage is perennial. Processing is ephemeral. Processing engines can be batch or stream-oriented. Processing can also be an expensive operation. Hence it makes sense to have processing on-demand. Based on the type of processing required, create an appropriate processing engine. Once the task is complete, the processing engine can be paused or destroyed. Decoupling compute and storage saves a lot of costs. It also gives a lot of flexibility. That is, generally, a wise thing.
  3. Catalog and curate data: The single most crucial principle that prevents a data lake from becoming a swamp is careful cataloging and curation of data. As a rule of thumb, anything persisted is cataloged. Active cataloging enables easy search of data elements by business analysts, data scientists, or anyone who wants to find the right data in the correct format. The importance of active cataloging can't be emphasized enough. Cataloging and curation make or break a data analytics platform. A minimal cataloging sketch follows this list.
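Here is a minimal sketch of the first and third guidelines working together: every raw file persisted also gets a small catalog record. The paths, schema, and JSON-lines catalog are hypothetical simplifications; a real platform would typically use a managed catalog service:

    import hashlib
    import json
    from datetime import datetime, timezone
    from pathlib import Path

    def persist_raw(data: bytes, source: str, lake_root: str = "lake/raw") -> Path:
        """Store a file in its raw form and record a minimal catalog entry."""
        path = Path(lake_root) / source / f"{datetime.now(timezone.utc):%Y%m%d%H%M%S}.bin"
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_bytes(data)  # guideline 1: raw, untransformed storage

        entry = {  # guideline 3: anything persisted is cataloged
            "path": str(path),
            "source": source,
            "bytes": len(data),
            "sha256": hashlib.sha256(data).hexdigest(),
            "ingested_at": datetime.now(timezone.utc).isoformat(),
        }
        with (Path(lake_root) / "catalog.jsonl").open("a") as f:
            f.write(json.dumps(entry) + "\n")
        return path

    persist_raw(b'{"order_id": 1}', source="orders")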

3. Adopt the right tools, processes and technologies

The third part of this equation is to choose the right tools and technologies to enable AI. Of course, there is a plethora of tools available to make it happen. Three fundamental principles are critical for AI to flourish.

  1. Leveraging scale: A correlated relationship exists between Data and AI. Generally, more data to train on implies a more usable model. In yesteryears, the ability to train models was constrained: storage and computing capabilities were limited. Over the last two decades, storage and computing technologies have evolved. Cloud computing platforms keep innovating. Storage is cheap. Computation is affordable. Data processing and model training at scale are possible at an acceptable cost. The old limitations are now obliterated.
  2. Focusing on functionality rather than technology: Create a Data Architecture that is flexible, where each component satisfies a specific functionality and is not pinned to the features of a particular technology. Functionality is constant, whereas technology is ever-changing. That is another benefit of a cloud platform: cloud platforms innovate, introducing new technologies that provide the same or better functionality at a lower cost (see the sketch after this list).
  3. Embracing agility in data projects: The famous statistician George Box once quipped, "All models are wrong, but some are useful." Getting to that useful model is an iterative process; every iteration is a step towards it. Don't go for absolutes in an AI project. They don't exist. A perfect model is a utopia. Aim for the model that is good enough for the given context.
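To make the second principle concrete, here is a hedged sketch of "functionality, not technology": the pipeline depends on a small storage interface, and the concrete backend (local disk here, a cloud object store tomorrow) can be swapped without touching the pipeline. All class and function names are hypothetical:

    from abc import ABC, abstractmethod
    from pathlib import Path

    class ObjectStore(ABC):
        """The functionality the pipeline needs: put and get blobs by key."""
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...
        @abstractmethod
        def get(self, key: str) -> bytes: ...

    class LocalStore(ObjectStore):
        """Today's technology: local disk. A cloud-backed store could replace
        it later without any change to code that depends on ObjectStore."""
        def __init__(self, root: str = "store"):
            self.root = Path(root)
            self.root.mkdir(exist_ok=True)
        def put(self, key: str, data: bytes) -> None:
            (self.root / key).write_bytes(data)
        def get(self, key: str) -> bytes:
            return (self.root / key).read_bytes()

    def run_pipeline(store: ObjectStore) -> None:
        # The pipeline only knows the functionality, never the technology.
        store.put("features.csv", b"age,spend\n34,120\n")
        print(store.get("features.csv").decode())

    run_pipeline(LocalStore())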

4. Integrate AI decisions within processes

The end goal of any AI-based initiative is to create a positive impact, be it business or social. Yet, many successful AI projects perish in their cribs. They don't see the light of day. Hence it is imperative that an AI project, from its incubation, is seen end to end.

I can’t emphasize this enough: AI projects are impact-based projects. They need to have an outcome. They are not technology projects.

Envisioning an AI project should not be about models and algorithms. It has to be about the outcome. An outcome that will yield benefit to the end-user.

Every process is an interlocking set of steps. The following question needs to be answered:

How many steps in the process does AI affect?

  • Does it automate a process? or
  • Does it augment a process?

Depending on the answer, chart the right course.
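As an illustration, the sketch below routes a model's decision by confidence: high-confidence predictions are applied automatically (automation), while low-confidence ones are handed to a human along with the model's suggestion (augmentation). The threshold and names are hypothetical:

    AUTO_THRESHOLD = 0.90  # hypothetical confidence cut-off

    def integrate_decision(prediction: str, confidence: float) -> str:
        """Decide whether AI automates the step or merely augments it."""
        if confidence >= AUTO_THRESHOLD:
            # Automation: the model's output replaces the manual step.
            return f"AUTO: applied '{prediction}'"
        # Augmentation: the human stays in the loop, with the model assisting.
        return f"REVIEW: suggest '{prediction}' ({confidence:.0%}) to a human"

    print(integrate_decision("approve", 0.97))
    print(integrate_decision("reject", 0.62))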

5. Create a culture of experimentation

Culture is the cornerstone of any change. Peter Drucker once said, "Culture eats strategy for breakfast." Nothing could be closer to the truth when it comes to adopting AI. For successful AI implementation, inculcating a culture of experimentation is vital. By definition, an experiment is a procedure undertaken to prove or disprove a hypothesis. Not all succeed. Yet, all experiments teach. This culture of experimentation needs to permeate the psyche of the organization. Three principles can help an organization create a culture of experimentation.

  1. Have experimentation metrics: Every department in the organization needs metrics that measure three aspects (a minimal counting sketch follows this list):
    1. The number of experiments tried for the given timeframe.
    2. The number of experiments adopted into the business workflow in the given timeframe.
    3. The number of experiments in the pipeline in the given timeframe.
  2. Embrace Agile: Agile is the way for AI. Given its nature, an iterative approach works best for AI. Its three core tenets of Kaizen, transparency, and deep collaboration should permeate the organization's DNA.
  3. Have an AI-aware workforce: There is much hype around AI. It is spoken about by everyone and everywhere. With this hype comes fear: the fear of replacement, the fear of job loss. This fear is unfounded. Creating a general awareness of AI in the organization is essential. It is imperative that the workforce is aware of what AI can do and what it can't. With this vital awareness in place, the workforce is more amenable to embracing AI and using it to augment their skills.
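As a minimal sketch of the first principle, here is how the three experimentation metrics could be tallied from a simple experiment log. The statuses and records are illustrative:

    from collections import Counter
    from datetime import date

    # Illustrative experiment log; "adopted" means moved into a business workflow.
    experiments = [
        {"name": "churn-model-v2", "status": "adopted",  "started": date(2020, 1, 10)},
        {"name": "ocr-invoices",   "status": "tried",    "started": date(2020, 2, 3)},
        {"name": "price-elastics", "status": "pipeline", "started": date(2020, 3, 1)},
    ]

    def metrics(log, start: date, end: date) -> Counter:
        """Count experiments by status within the given timeframe."""
        return Counter(e["status"] for e in log if start <= e["started"] <= end)

    print(metrics(experiments, date(2020, 1, 1), date(2020, 3, 31)))
    # Counter({'adopted': 1, 'tried': 1, 'pipeline': 1})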

Summary

Adopting responsible Artificial Intelligence (AI) is inevitable. All should embrace it. It is not an elixir. But, with the right framework, it has the potential to be impactful.

References

  1. Technology Review: The growing impact of AI on business
  2. ZDNET: Enterprise adoption of AI has grown 270 percent over the past four years
  3. McKinsey Global Institute: Artificial Intelligence: The Next Digital Frontier?

About Pradeep Menon

Pradeep is a seasoned Data and AI professional with more than 16 years of experience in the field. He has consulted for numerous customers across the globe, helping them create value from their data assets through the prudent application of technology. Pradeep can balance the business and technical aspects of an engagement and cross-pollinate complex concepts across many industries and scenarios. He is a distinguished speaker and blogger and has given numerous keynotes on Cloud technologies, Data, and AI.
