Artificial Intelligence: Suffering in the Over-hype Tornado

“AI is more important than fire or electricity to humans” – Google CEO Sundar Pichai

Google CEO Sundar Pichai says artificial intelligence (AI) is going to have a bigger impact on the world than the most ubiquitous innovations in history. Pichai says, “AI is one of the most important things humanity is working on. It is more profound than, I dunno, electricity or fire.”

Okay, I may never have heard a more over-hyped statement. Is AI really more important than:

  • Fire, without which we’d still be running around the savannah being dim sum for the saber-toothed tiger
  • Electricity which powers nearly everything that we use today
  • Weapons which allowed us to turn the saber-toothed tiger into our dim sum
  • Printing press which accelerated the capture and dissemination of knowledge
  • Combustion engine which mobilized society, thereby changing the very nature of civilization
  • Telephone which shrank time and distance by allowing communication at any time with anyone anywhere in the world
  • Penicillin which is the reason why most of us aren’t real-life versions of the “Walking Dead”

It is this type of grandiose comment about AI that does AI a huge disservice because it:

  • Sets unrealistic expectations with respect to what the product can do in its current form; it glosses over the critical aspect that AI will improve over time with more data and more experience, resulting in improved analytic results. The history of AI is full of false starts and “AI winters” (see “History of artificial intelligence” courtesy of Wikipedia), because any society-changing innovation takes years, and sometimes generations, to get right. To quote Thomas Edison when describing how he invented the light bulb (another society-changing innovation): “I have not failed. I’ve just found 10,000 ways that won’t work.”
  • Causes politicians to over-react. For example, Hillary Clinton believes that America is ‘totally unprepared’ for the impact of AI. To quote Ms. Clinton: “A lot of really smart people, you know, Bill Gates, Elon Musk, Stephen Hawking, a lot of really smart people are sounding an alarm that we’re not hearing. And their alarm is artificial intelligence is not our friend.”
  • Instigates unnecessary consternation and potential social unrest with the general public (which can lead to irrational policy statements, political choices and lots of Fake News distress).

A recent Gallup Poll “Americans Upbeat on Artificial Intelligence, but Still Wary” found that 73% of Americans say AI adoption will result in net job loss. However, about five in six Americans already use a product or service that uses AI (see Figure 1).

Figure 1: Source: New York Times “Most Americans See Artificial Intelligence as a Threat to Jobs (Just Not Theirs)”


Why Will AI Be Successful This Time?

Again, AI has a history of false starts and “AI winters,” but I think this time is different for the following reasons:

  • Big Data and inexpensive data storage options are making massive volumes of granular structured and unstructured data (e.g., social media, log files, text, video, IoT) available for analysis and mining.
  • Open source, natively parallel data software ecosystems (e.g., Hadoop, Spark, Hive, HBase) are the perfect, inexpensive data management platforms for these massive volumes of structured and unstructured data.
  • Inexpensive yet powerful compute resources, not to mention the sudden explosion of specialized microprocessors such as GPU and TPU that are ideal for machine learning and deep learning workloads.
  • Rentable cloud options that allow organizations of all sizes to scale up to whatever compute and storage that they need.
  • Powerful open source AI (Machine Learning and Deep Learning) algorithms such as Weka, SparkML, TensorFlow and Caffe that are democratizing the availability of AI capabilities.

These factors are greatly improving AI’s chances of sticking this time. This makes it even more important that we don’t over-hype AI, but instead seek to educate everyone as to what AI really does and how AI can help make the world a better and safer place.

3 Things That AI Does Really Well

AI does three things really well. And while AI does these things very well, it must still rely upon humans to determine which problems or opportunities to focus on, and then to validate, assess the risks of, and operationalize the AI results.

#1) AI is great at detecting anomalies and patterns buried in vast amounts of structured and unstructured data

AI is great at detecting anomalies or “unusualness” and repeating patterns buried in massive data sets. Deep Learning, one of the sub-categories of AI, is great at identifying “unusual” items, events or structures which do not conform to an expected “normal” item, event or structure, including:

  • Cancer detection
  • Crop diseases
  • Spoiled goods
  • Fraudulent transactions
  • Failing components
  • Road maintenance
  • Storm damaged roofs
  • Cybersecurity attacks
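As a toy illustration of the underlying idea, the sketch below flags values that deviate sharply from the statistical “normal” of a data set. Real deep-learning detectors operate on images, text and sensor streams at far greater scale; this is just a minimal z-score stand-in using made-up transaction amounts.

```python
from statistics import mean, stdev

def find_anomalies(values, threshold=2.0):
    """Flag values whose z-score exceeds the threshold, i.e. values
    that deviate strongly from the 'normal' learned from the data."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

# Made-up transaction amounts: one is wildly out of line with the rest.
amounts = [25, 30, 22, 28, 31, 27, 24, 29, 26, 950]
print(find_anomalies(amounts))  # only the 950 transaction stands out
```

Note that the anomaly itself inflates the mean and standard deviation; production systems typically learn “normal” from known-good historical data rather than from the batch being scored.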

Machine Learning, another sub-category of AI, is great at identifying unknown relationships between people, transactions, interactions and devices including:

  • Email spam
  • Fake news
  • Customer clusters
  • Products that sell together
  • Predictive maintenance
  • Obsolete inventory and goods
  • Unplanned hospital readmissions
  • Hospital acquired infections
  • Gang activities
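To make the “products that sell together” case concrete, here is a minimal co-occurrence count over hypothetical shopping baskets. Production market-basket analysis would use association-rule algorithms (e.g., Apriori) over far larger data; this sketch only counts pairs.

```python
from collections import Counter
from itertools import combinations

def cooccurring_pairs(baskets):
    """Count how often each pair of products appears in the same basket."""
    counts = Counter()
    for basket in baskets:
        # sorted() gives a canonical order so (a, b) and (b, a) match
        for pair in combinations(sorted(set(basket)), 2):
            counts[pair] += 1
    return counts

# Invented baskets for illustration.
baskets = [
    ["bread", "butter", "milk"],
    ["bread", "butter"],
    ["milk", "eggs"],
    ["bread", "butter", "eggs"],
]
print(cooccurring_pairs(baskets).most_common(1))
# [(('bread', 'butter'), 3)] -- bread and butter sell together most often
```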

And in order for AI to identify “unusualness,” we must build analytic models to understand and codify what is “normal.” One technique for codifying “normal” is the concept of Analytic Profiles and Digital Twins, which are analytic structures that can be used to define “normal” behaviors amongst humans or machines.

  • Analytic Profiles are structures that standardize the collection, application and re-use of analytic insights around the organization’s key business entities. Key Business Entities are the physical entities (e.g., customers, products, employees, students, patients, parolees, trucks, wind turbines, jet engines) around which organizations seek to uncover or quantify analytic insights. See the blog “How to Avoid “Orphaned Analytics” for more details on Analytic Profiles.
  • Digital Twins are digital representations of industrial assets that enable companies to better understand and predict the performance of their machines, find new revenue streams, and change the way their business operates. See the blog “Me, Myself and Digital Twins” for more details on Digital Twins.
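As a rough sketch (not the author’s formal definition), an Analytic Profile can be pictured as per-entity rolling statistics that codify that entity’s “normal,” here using Welford’s online algorithm with invented customer spend values:

```python
import math

class AnalyticProfile:
    """Per-entity rolling statistics that codify 'normal' behavior,
    using Welford's online algorithm for mean and variance."""

    def __init__(self, entity_id):
        self.entity_id = entity_id
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        """Fold one new observation into the running statistics."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def is_unusual(self, x, z=2.0):
        """Is x more than z standard deviations from this entity's normal?"""
        if self.n < 2:
            return False
        stdev = math.sqrt(self.m2 / (self.n - 1))
        return stdev > 0 and abs(x - self.mean) / stdev > z

# Invented weekly spend for one customer.
profile = AnalyticProfile("customer-42")
for spend in [25, 30, 28, 27, 31, 26]:
    profile.update(spend)
print(profile.is_unusual(29))   # False: within this customer's norm
print(profile.is_unusual(400))  # True: far outside this customer's norm
```

The key point is that “normal” is defined per business entity, so a $400 charge can be routine for one customer and an anomaly for another.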

For those of you who have seen the movie “Apollo 13,” remember how the engineers on Earth worked with a “mirrored” space capsule to solve the operational problems and get Apollo 13 back to Earth. It was that innovation of mirrored systems, an early physical precursor to the Digital Twin, that allowed the engineers to save the astronauts.

Check out the blog “Which machine learning algorithm to choose for my problem?” for more insights into the capabilities of Machine Learning.

#2) AI can prescribe or recommend corrective actions or next best options

Once a predictive model has been built, then AI can recommend corrective actions (in preventive situations) or next best actions (in human decision optimization situations) based upon outcomes gathered from thousands or millions of similar use cases. Some techniques for creating recommendations include:

  • Collaborative filtering is a method of making automatic predictions (filtering) about the interests of a user by collecting preferences from many users (collaborating). The underlying assumption is that if a person A has the same opinion as a person B on an issue, A is more likely to have B’s opinion on a different issue than that of a randomly chosen person. Organizations like Amazon and Netflix make heavy use of collaborative filtering to drive their recommendations. See the blog “Design Thinking: How User Experiences Change User Expectations” for more examples of everyday use of collaborative filtering.
  • Content-based filtering uses keywords to describe the items, and a user profile is built to indicate the type of item the user likes. In other words, content-based filtering algorithms try to recommend items that are similar to those that a user liked in the past (or is examining in the present). In particular, various candidate items are compared with items previously rated by the user and the best-matching items are recommended.
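A bare-bones sketch of user-based collaborative filtering, with invented users and ratings: score each item the target user hasn’t rated by the ratings of other users, weighted by how similar those users are (cosine similarity over their rating vectors).

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse rating dicts."""
    common = set(u) & set(v)
    dot = sum(u[i] * v[i] for i in common)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

def recommend(target, others, ratings):
    """Rank items the target hasn't rated by similarity-weighted scores."""
    scores = {}
    for user in others:
        sim = cosine(ratings[target], ratings[user])
        for item, rating in ratings[user].items():
            if item not in ratings[target]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

# Invented movie ratings: ann and bob have similar tastes, eve does not.
ratings = {
    "ann": {"matrix": 5, "alien": 4},
    "bob": {"matrix": 5, "alien": 4, "blade runner": 5},
    "eve": {"notebook": 5, "titanic": 4},
}
print(recommend("ann", ["bob", "eve"], ratings))
# 'blade runner' ranks first, carried by bob's similarity to ann
```

Real recommenders add normalization, implicit feedback and matrix factorization on top of this core idea; the weighting by user similarity is the essential mechanism.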

See Wikipedia for more on “Recommender Systems.”

#3) AI systems measure effectiveness, learn and repeat to create a continuously learning environment

AI is also very good at continuous learning: it measures the effectiveness of its decisions and then adapts its analytic models based upon those new learnings to make better decisions. My favorite continuous learning AI algorithm is Reinforcement Learning.

Reinforcement Learning takes actions within controlled environments to maximize rewards while minimizing costs by using trial-and-error to map situations to actions. The children’s game of “Hotter or Colder” is a good illustration of reinforcement learning; rather than getting a specific “right or wrong” answer with each action, you’ll get only a hint of whether you’re heading in the right direction (see Figure 2).
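The “Hotter or Colder” intuition can be sketched in a few lines: the searcher never sees the right answer, only whether its last move made things better or worse, and it adapts its direction and step size from that feedback alone. (This is a toy illustration of the trial-and-error idea, not a full reinforcement-learning algorithm.)

```python
import random

def hotter_colder(target, low=0, high=100, max_steps=200):
    """Find target using only 'hotter'/'colder' feedback after each move."""
    guess = random.randint(low, high)
    step = (high - low) // 4 or 1
    direction = random.choice([-1, 1])
    prev_dist = abs(target - guess)      # the environment's hidden state
    for _ in range(max_steps):
        if guess == target:
            return guess
        guess += direction * step
        dist = abs(target - guess)
        if dist >= prev_dist:            # "colder": reverse, take smaller steps
            direction = -direction
            step = max(1, step // 2)
        prev_dist = dist                 # "hotter": keep going
    return guess

random.seed(0)
print(hotter_colder(73))  # converges to 73 by trial and error
```

Note the searcher is never told where 73 is; a scalar better/worse signal is enough to steer it, which is exactly the role reward plays in reinforcement learning.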

Figure 2: Source: “The very basics of Reinforcement Learning”

Reinforcement learning is used to address two general problems:

  • Prediction: Determine how much reward, and at what cost, can be expected for every combination of possible future states.
  • Control: Through interacting with the environment, find a combination of actions that maximizes reward and allows for optimal control (e.g., steering an autonomous vehicle, winning a game of chess).

See the blog “Transforming from Autonomous to Smart: Reinforcement Learning Basics” for more details on Reinforcement Learning.

Summary

AI is great at 1) detecting anomalies, 2) recommending actions and 3) learning from its decisions.

And the good news is that more data and more effort to master AI are going to lead to new products and services that will make our lives better and safer.

But AI still requires humans to properly and thoroughly define the test objectives and hypotheses, to validate the results for relevance, actionability and regulatory compliance, and then to operationalize the entire process.

Bill Schmarzo

CTO, IoT and Analytics

at Hitachi Vantara