AI: Is the Hype Over or Is it Just the Solow Paradox?
Can AI be useful for your business when viewed through the prism of pragmatic profitability and economic efficiency?
AI: is the hype over?
The number of startups boasting that AI is at the core of their solutions is declining rapidly. Startups looking for funding always tend to seize the opportunities that hype can bring them. Does this mean the AI hype is nearing its end?
Even if the startup statistics seem irrelevant, the complaints, heard here and there, that AI didn't bring companies what they were hoping for are clearly growing louder.
Indeed, the fast and effective solutions that many businesses hoped to get with the help of AI don't seem to be widespread. Instead, you can read plenty of reports about "implementation plans", "work in progress" and "we've seen some improvement". Where is the long-awaited breakthrough?
It looks like all the digital technologies — such as artificial intelligence, machine learning, and Big Data — that many hoped for have failed to deliver a significant and measurable positive effect. For example, in a recent McKinsey survey of global corporations, only a small fraction of activities and offerings were described as digitized: less than a third of core operations were automated or digitized, and less than a third of products and services were digitized. This is due to adoption barriers and lag effects, as well as transition costs. In the same survey, for example, companies with digital transformations underway said that 17 percent of their market share from core products or services was cannibalized by their own digital products or services.
From hopes to disappointment
It all started so promisingly, when AI in its current reincarnation as Deep Learning (deep artificial neural networks) showed some impressive results, for example boosting image recognition accuracy from 70% to over 90% in 2011-2012 with Convolutional Neural Networks. Five years later, Recurrent Neural Nets (and their variants such as LSTM and GRU) and, more recently, Transformer networks made significant progress in speech processing and automated translation. Everyone expected the same kind of improvement to continue everywhere, but something went wrong. Now it looks like many feel a bit disappointed by what they have achieved with AI in their businesses.
First, many businesses learned (with some degree of surprise) that probably the most important ingredient of an ML/DL-enabled system is data. And it turned out that data collection, preparation and curation are hard, expensive and slow. Machine learning (and especially Deep Learning) is very data-hungry: it needs hundreds of thousands (at least!) of examples just to have a chance of building a working model.
But having data does not mean success. The quality of the data may undermine your further effort: the data you collected may turn out to be highly skewed (have an imbalanced class distribution) or lack some important features. And even a good dataset does not guarantee a good enough model, for a number of reasons: the wrong learning algorithm, the wrong settings of that algorithm, a model too complex for the available data (overfitting), or a model too simple to capture it (underfitting). Perfection always starts with mistakes.
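To make this concrete, here is a minimal, illustrative sketch in Python (using pandas and scikit-learn) of how such problems are typically spotted early on; the customer_churn.csv file and its label column are hypothetical stand-ins for your own data:

```python
# A quick sanity check on a dataset: class balance and over-/underfitting signals.
# The file name and column names are hypothetical.
from collections import Counter

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("customer_churn.csv")   # hypothetical dataset
print(Counter(df["label"]))              # a heavily skewed count reveals class imbalance

X, y = df.drop(columns=["label"]), df["label"]
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print(f"train accuracy: {model.score(X_train, y_train):.3f}")
print(f"validation accuracy: {model.score(X_val, y_val):.3f}")
# A large gap (high train, low validation) suggests overfitting;
# low scores on both sets suggest underfitting or missing features.
```

None of this replaces proper evaluation, but even such a crude check surfaces the most common data problems before serious budget is committed.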
The problem is worsened by the fact that nowadays Deep Learning is more art than science. New algorithms pop up so often that it is impossible to track them all; yet they don't seem to stem from incremental development within a scientific paradigm, but rather look like the result of random guessing and parameter tuning that happened to bring a better result. One of the core problems of Deep Learning is its lack of transparency: it's a black box that somehow works, and no one knows why.
Another problem is the reproducibility crisis in academic Computer Science research. It is important for any published research to be reproducible, and until recently CS was in a good position in this respect. But the hype and the 'publish or perish' attitude have brought CS to a point where many findings look more like guesses than the results of scientific research.
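As a small example of the kind of discipline that helps, one common (though by no means sufficient) reproducibility practice is to pin every source of randomness before running an experiment; the sketch below assumes NumPy and PyTorch purely for illustration:

```python
# Pinning random seeds so an experiment can be re-run with identical results.
# Seeds alone do not guarantee reproducibility (hardware, library versions and
# non-deterministic GPU kernels also matter), but they are a necessary first step.
import random

import numpy as np
import torch

SEED = 42
random.seed(SEED)
np.random.seed(SEED)
torch.manual_seed(SEED)
torch.use_deterministic_algorithms(True)  # raise an error on non-deterministic ops
```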
Why does it matter for practical business applications? Academic research is often the starting point for developing applied solutions for businesses. However, any attempt to build on the findings of unverified research (and reproducibility is the most important means of verification) carries a high risk of wasting time, resources and effort. And it kills ROI.
The Solow Paradox
So is it the right time to forget about the AI ‘miracle’ and get down to more practical and tested solutions?
No, it is not the first time a promising technology has failed to bring an immediate effect visible to everyone.
Economist Robert Solow said in 1987 that the computer age was everywhere except for the productivity statistics. This phenomenon, which became known as the Solow Paradox, was resolved in the 1990s when a few sectors—technology, retail, and wholesale—led to an acceleration of US productivity growth.
Adding things up for the economy as a whole, the McKinsey Global Institute's latest research identified potential productivity growth of at least 2 percent per year over the next decade, with about 60 percent of it coming through digitization. That's below the roughly 2.5 percent annual rate achieved by the United States in the late 1990s and early 2000s, but well ahead of the 0.5 percent annual average of recent years.
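The difference compounds quickly; a back-of-the-envelope calculation with the figures quoted above shows what a decade of each growth rate would deliver:

```python
# Compounding the productivity growth rates quoted above over a decade.
for label, annual_rate in [("recent ~0.5%/yr", 0.005),
                           ("projected ~2%/yr", 0.02),
                           ("late-1990s ~2.5%/yr", 0.025)]:
    decade_gain = (1 + annual_rate) ** 10 - 1
    print(f"{label}: about +{decade_gain:.0%} over ten years")
# Roughly +5%, +22% and +28% respectively.
```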
If productivity growth rates do quadruple, it will be because business innovation has caught up with the opportunities created by digital technologies, including AI. The size of the prize means productivity rates won’t stay low forever. And as they rise, so will a new generation of leading companies.
So now is the time to look for new, efficient and productive ways of turning digital technology into a productivity driver for a company. The prize will be shared by those who are able to find the most effective applications of AI to their business needs.
How to turn AI into a productivity booster?
There are several steps that every business needs to take if it aspires to benefit from the AI-related technology stack.
First, get rid of the false expectation of a miracle that AI delivers to everyone who pumps money into it. Decision-makers should stop believing that hiring an AI engineer or a data scientist and giving them an expensive toy to play with will turn any business into a starship.
Second, audit your data: what you already have; what you don't have but can obtain; what is impossible to get.
Third, analyze the data: does it have repetitive patterns that a machine can capture? Does capturing these patterns make any sense for your business? (A short sketch after these steps illustrates one way to check.)
Fourth, understand the limits of AI and define its applicability to the various tasks and processes of the business.
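For the third step, one illustrative way to check whether the data contains patterns a machine can capture is to compare a trivial baseline against a simple model; the dataset name, target column and numeric-only features below are hypothetical assumptions:

```python
# Does the data contain learnable patterns? Compare a trivial baseline
# (always predict the most frequent class) with a simple model.
# File name, target column and numeric-only features are hypothetical.
import pandas as pd
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("support_tickets.csv")
X, y = df.drop(columns=["escalated"]), df["escalated"]

baseline = cross_val_score(DummyClassifier(strategy="most_frequent"), X, y, cv=5).mean()
simple = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

print(f"baseline accuracy: {baseline:.3f}")
print(f"simple model accuracy: {simple:.3f}")
# If the simple model barely beats the baseline, either the features carry
# little signal or capturing it may not be worth the investment.
```

If even a simple model shows a clear lift over the baseline, that is a good sign that a more serious AI investment can pay off; if not, more data work is usually needed first.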
At ImmuniWeb we believe that AI is not a universal solution to all our needs and problems, but a unique tool that can augment the skills and aptitudes of our security experts.
Gartner's recent Hype Cycle for AI 2019 defined a new innovation trigger, Augmented Intelligence, which matches what we are doing at ImmuniWeb:
Augmented intelligence is a human-centered partnership model of people and artificial intelligence (AI) working together to enhance cognitive performance, including learning, decision making, and new experiences.
The ImmuniWeb AI platform combines a plethora of ML-enabled solutions with human expertise, from risk scoring in Attack Surface Management to intelligent automation of Application Security Testing.