A Blog by Jonathan Low

 

Mar 24, 2025

Why Most Companies Should Not Have An AI Strategy

Here goes corporate reflex again: new technology, new C-suite exec. The problem, as it was for telephones, automobiles, computers and cell phones, is that AI is not an island. It is going to be most effectively implemented in coordination with other technologies, workflows and skill sets. 

To have an "AI strategy" before it is even clear how AI is going to add the most value - and before most organizations know what the optimal uses and outcomes may be, let alone how they are going to collect the appropriate data to generate effective models - is likely to lead to distraction and misdirection. Better, as this article suggests, to experiment, letting innovation and experience bubble up from the staff on the front line to provide a guide for the way forward, rather than relying on an exec who goes to conferences and seminars and tries to jam others' possibly irrelevant ideas down. JL

Joe Peppard reports in the Wall Street Journal:

Since the launch of ChatGPT, there has been a common refrain among organizations of all sizes: We need an AI strategy! Nobody wants to be left behind, so many are rushing ahead, creating AI “centers of excellence” and naming chief AI officers. (But) most are making a mistake (because) most of them aren't ready. Even if an AI strategy were to land in a CEO’s inbox, the organization won’t be able to implement it because most companies haven’t been collecting the data they need to train AI models. Rather than a separate AI strategy, organizations need to consider AI in combination with other technologies, integrated into workflows. The best ideas are most likely to come from the bottom up, from employees in their day-to-day work supporting customers. Isolating AI on its own will distort decision-making.

For the past two years, ever since the launch of ChatGPT, there has been a common refrain among companies and organizations of all sizes: We need an AI strategy!

The frenzy is understandable. Nobody wants to be left behind and miss the Next Big Thing. So many companies are rushing ahead—some even creating AI “centers of excellence” to centralize AI expertise and resources, and naming chief AI officers to the C-suite.

Sorry to say this, but most of them are making a mistake. My takeaway from my work with organizations as they grapple with artificial intelligence is that not only do most companies not need an AI strategy, but they shouldn’t have one at all. Going down that road will be, at best, a distraction.

That might seem radical, even nonsensical. Unless companies have an AI strategy now, don’t they risk falling behind the competition? My answer is no, for a variety of reasons.

Most companies just aren’t ready

Even if a well-crafted AI strategy were to somehow magically land in a CEO’s inbox, the organization likely won’t be able to implement it. Most just haven’t done the required foundational work.

Much of it comes down to data. Poor data quality—incomplete, biased or unstructured—affects AI performance in the same way it can have an impact on any other technology. If you don’t have good data, you can have great strategic intent, but you won’t be able to execute it. The strategy will simply divert attention from what the company really needs to do.

Suppose a manufacturer wants to use AI to undertake predictive maintenance to reduce downtime on a production line. To do this, it will need to predict when faults are likely to occur with machines on the line based on monitoring real-time conditions, and then determine appropriate actions.

To do this, it will need historical data to identify the variety of different faults that can be encountered, and the “signatures”—specific patterns, trends or anomalies in data—that identify early warning signs of failure. AI technology, probably in the form of machine learning, will be used to build an algorithm for this purpose.

That all sounds great, except most companies haven’t been collecting the data they need to train AI models. It can take considerable time, sometimes even years, to gather the quality data needed to be confident that the algorithms they create will be accurate.
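To make the idea of a failure “signature” concrete, here is a minimal sketch of the simplest version of such a check: a rolling statistical baseline that flags readings deviating sharply from recent history. This is a hypothetical stand-in for a trained machine-learning model, and the sensor values and thresholds are invented for illustration.

```python
from statistics import mean, stdev

def failure_signature(readings, window=5, z_threshold=3.0):
    """Flag indices where a reading deviates sharply from the rolling
    baseline of the previous `window` readings -- a crude stand-in for
    learned early-warning signatures of machine failure."""
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > z_threshold:
            alerts.append(i)
    return alerts

# Hypothetical vibration readings from one machine: stable, then a spike
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 1.1, 0.95, 5.0]
print(failure_signature(vibration))  # → [7]
```

A real deployment would replace this threshold rule with a model trained on years of labeled fault history, which is exactly the data most companies have not yet collected.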

AI isn’t an island

AI is a technology, just like blockchain, the Internet of Things, the metaverse and so on. Why would an organization have a separate AI strategy?

In fact, AI isn’t one particular technology, but rather an overarching umbrella for technologies that exhibit what might be considered humanlike intelligence. These include machine learning, computer vision, robotics, image processing and, of course, generative AI in the form of large language models.

If you look at how organizations are deploying AI now for significant business value, it is usually in combination with other technologies and integrated into workflows.

Consider the earlier example of the manufacturing company looking to use AI to predict machinery-maintenance needs. Assume that the company actually has the data it needs to create the algorithm. Once this has been done, the solution will include sensors to gather data in real time from machines, Bluetooth connectivity to transmit this data, and cloud computing to store this data and host the data platform—where the algorithm will monitor machine performance and issue alerts if abnormalities are detected.
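The monitoring layer in that architecture can be sketched as a small loop: readings arrive from connected sensors, a model judges each one, and an alert channel is invoked on abnormalities. Everything here is a hypothetical skeleton; the machine IDs, the fixed limit standing in for a trained model, and the alert mechanism are all invented.

```python
def monitor(stream, is_abnormal, alert):
    """Consume (machine_id, reading) pairs from connected sensors and
    invoke the alert channel whenever a reading is flagged."""
    for machine_id, reading in stream:
        if is_abnormal(reading):
            alert(machine_id, reading)

# Hypothetical wiring: a fixed vibration limit stands in for the trained model
alerts = []
monitor(
    stream=[("press-1", 1.0), ("press-1", 4.2), ("lathe-3", 0.9)],
    is_abnormal=lambda r: r > 3.0,
    alert=lambda m, r: alerts.append((m, r)),
)
print(alerts)  # → [('press-1', 4.2)]
```

The point of the sketch is how little of it is “AI”: the sensors, the transport, and the alerting around the model are conventional engineering, which is why AI works only in combination with other technologies.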

What does all this mean? Rather than a separate AI strategy, organizations need a strategy that considers all technologies. Having a strategy focused on AI alone would, again, be a distraction at best.

So, I ask again: What’s so different about AI that it requires a separate strategy? Nothing!

An immature AI workforce

Studies show that most organizations are immature when it comes to AI. By that, I mean that throughout the ranks—from the top executives to the rank and file—there is little knowledge of, and experience with, AI and its capabilities, and a reluctance to embrace data-assisted decision-making. All of this means that any AI strategy will be misguided and impossible to execute.

If you are the leadership team and you aren’t familiar with AI, how are you going to build a strategy for AI? You can’t.

Given where most companies are starting from, the priority shouldn’t be building a top-down, overarching AI strategy. That can come later. It should be encouraging employees to use AI tools, to experiment and try things out, and to pursue ideas organically rather than following “management direction.” It is also important that there are guardrails to ensure that any tool is used properly, responsibly and in a way that doesn’t put the organization at risk. The best ideas are most likely to come from the bottom up, from those engaged in day-to-day work and supporting customers. Technology doesn’t drive change; people do.

Achieving a certain level of digital maturity across an organization can take years, particularly if cultural change is also needed. Thinking that a magical digital strategy will force that maturity is like thinking that putting a suit on a 2-year-old will make him an adult. It won’t.

It will distort decision-making

Striving to create an AI strategy will likely force employees to look at everything through an AI lens. Right now, it seems like AI is seen as the solution, whatever the problem is.

But just because AI is getting all the attention today doesn’t mean that will continue. Other technologies are coming downstream, and focusing too much on AI will crowd out other solutions to other problems a company might have.

This should not be a conversation about how we can use AI, whether to improve the bottom line or in support of a new business model. Rather, it should be a continuing conversation in the organization about how we can as a business leverage digital technologies for operational and strategic purposes. AI is just one of those technologies. The real challenge with AI, as with any technology, is identifying and then delivering business value from any investment made.

This is similar to the case I made in an earlier article, arguing that companies shouldn’t even have a separate information-technology department. If you separate technology—and even worse, a particular kind of technology, like AI—from the business units, you will separate the people responsible for improving the company’s productivity and bottom line from the people responsible for implementing the technology. That disconnect will inevitably result in a failure to use technology in a way that improves a company’s results. If the solution to every problem is AI, then you’re not going to find the best solutions to most problems.

History matters

My research reveals how the set of decisions and options possible at any point in time is limited by decisions made in the past, even though past circumstances may no longer be relevant. History matters.

 

That is particularly true with technology: A decision not to modernize some old systems or to invest in a particular application may have made sense 10 years ago, but may now be limiting available options when it comes to executing AI.

What this means is that leaders might read about a great application of AI and want to do something similar in their organizations. But they won’t be able to because of past decisions. Rectifying this situation will take investment, resources and time. Often, a considerable amount of time.

Companies that cleaned up their data; modernized their technology infrastructure; reduced technical debt; simplified, standardized and automated processes; established data governance; increased digital literacy; and put in place guardrails focusing on responsible use are in a position to leverage what AI offers. But most companies haven’t done that.

In the end, if a company is like most companies, and has been slow to embrace the changes that are imperative in a digital-first world, it isn’t going to leapfrog other companies just because it has an AI strategy. The company is going to have to take baby steps. And no AI strategy—however well-intentioned, however pretty it sounds—is going to make a bit of difference.
