Why Explainable AI Must Be on the CEO's Agenda

Silvie Spreeuwenberg, Founder / Director, LibRT

IT projects that fail after millions of dollars have been spent, or that never deliver the expected value, have been reported many times. Today, we invest in AI. The industry is forecast to grow 12.3% to $156.5 billion this year, according to IDC.[1] Yet most CEOs and board directors are not prepared to control the technology they are investing in, because the resulting systems are unexplainable.

It is only a matter of time before these investments join the long list of IT failures we have seen in the past. Forbes is issuing the same warning. CEOs and board members should treat unexplainable AI as a risk. Cindy Gordon sees the development of trusted systems (systems that cannot harm humans or society) as the CEO's primary responsibility to control. She asks, "Is the relatively new field of explainable AI the panacea?"[2]

A good start has been made by the US Department of Commerce. In August 2020, it published four principles that define explainable AI.[3] According to these principles, a system must:

  • provide an explanation: the system delivers accompanying evidence or reason(s) for all outputs
  • be meaningful: the system provides explanations that are understandable to individual users
  • have explanation accuracy: the explanation correctly reflects the system's process for generating the output
  • have knowledge limits: the system operates only under conditions for which it was designed, or when it reaches sufficient confidence in its output
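To make the principles concrete, here is a minimal sketch (not a production system) of what they imply for a single decision function: every output carries accompanying evidence, and the system abstains when it is outside its knowledge limits. The loan scenario, field names, ratio thresholds, and confidence floor are all hypothetical illustrations, not part of the DoC publication.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    output: Optional[str]   # the system's answer, or None if it abstains
    explanation: str        # accompanying evidence for the output (principle 1)
    confidence: float       # used to enforce knowledge limits (principle 4)

CONFIDENCE_FLOOR = 0.8      # hypothetical minimum confidence for answering

def score_loan(income: float, debt: float) -> Decision:
    """Toy loan decision that always explains itself, accurately."""
    ratio = debt / income if income > 0 else float("inf")
    # Hypothetical: the system is only confident in the clear-cut ranges.
    confidence = 0.95 if ratio < 0.2 or ratio > 0.6 else 0.5
    if confidence < CONFIDENCE_FLOOR:
        return Decision(
            None,
            f"Abstaining: debt-to-income ratio {ratio:.2f} is in a range "
            "the system was not designed to judge reliably.",
            confidence,
        )
    output = "approve" if ratio < 0.2 else "reject"
    explanation = (
        f"{output}: debt-to-income ratio {ratio:.2f} is "
        f"{'below 0.20' if ratio < 0.2 else 'above 0.60'}."
    )
    return Decision(output, explanation, confidence)
```

Note that the explanation is generated from the same conditions that produced the output, so it correctly reflects the system's process (explanation accuracy), and the abstention branch is what keeps the system inside its knowledge limits.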

But these principles alone will not convince the CEO. Since the audit systems required to interface between AI and humans (providing the meaningful explanations) are not readily available, the CEO will need to invest in creating these extra features. What arguments could justify that investment, other than compliance with the DoC principles?

The answer lies in the fact that boards and CEOs that invest in AI systems without investing in explainability take an unacceptably high risk. I present at least four reasons here: unexplainable AI systems are not trusted by employees, are more difficult to improve over time, do not know their own limitations (for example, when COVID-19 disrupts the patterns they were trained on), and will not meet expected government regulations. In other words, unexplainable AI systems have a lower ROI than explainable AI systems.

Explainability should therefore be the top priority for organizations that want to invest in AI. Better to start with a small system that explains itself and can be readily improved than with a big AI investment that needs additional investments to explain itself. How?

Methods that have been used in AI systems for a long time, such as decision tables and business rules, combined with the exploratory power of machine learning algorithms, make it possible to start small, create understandable systems that explain themselves, and stay in control.
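The decision-table approach can be sketched in a few lines: every answer cites the rule that produced it, so the system is understandable by construction. The rules, customer fields, and outcomes below are hypothetical; in practice the table would be drafted by experts and refined with patterns surfaced by machine learning.

```python
# A hypothetical decision table: evaluated top to bottom, first match wins.
# Each row is (rule name, condition, outcome).
RULES = [
    ("R1: high churn risk", lambda c: c["complaints"] >= 3, "offer retention deal"),
    ("R2: loyal customer",  lambda c: c["years"] >= 5,      "offer loyalty bonus"),
    ("R3: default",         lambda c: True,                 "no action"),
]

def decide(customer: dict) -> tuple[str, str]:
    """Return (outcome, explanation) so every decision is traceable to a rule."""
    for name, condition, outcome in RULES:
        if condition(customer):
            return outcome, f"Matched {name}"
    raise ValueError("decision table is not exhaustive")

outcome, why = decide({"complaints": 4, "years": 2})
# 'why' names the rule that fired, which is the explanation a user,
# an auditor, or a regulator can check against the business policy.
```

Because the table is explicit, improving the system means editing or adding a rule, a change anyone can review, rather than retraining an opaque model.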

The bottom line is that we can't just use AI instead of understanding — we must understand first and then add AI.

Check out my book AIX: Artificial Intelligence needs eXplanation for illustrated examples and more details.

References

[1] "AI market leaders revealed by IDC," infotechlead, Aug. 8, 2020, https://infotechlead.com/artificial-intelligence/ai-market-leaders-revealed-by-idc-62362

[2] Cindy Gordon, "Why Explainable AI Must Be Grounded In Board Director's Risk Management Practices," Forbes, Aug. 31, 2020, https://www.forbes.com/sites/cindygordon/2020/08/31/why-explainable-ai-must-be-grounded-in-board-directors-risk-management-practices/#1eec3b845479

[3] P. Jonathon Phillips, et al., Four Principles of Explainable Artificial Intelligence, Draft NISTIR 8312, National Institute of Standards and Technology, Aug. 2020, https://doi.org/10.6028/NIST.IR.8312-draft

# # #

Standard citation for this article:


Silvie Spreeuwenberg, "Why Explainable AI Must Be on the CEO's Agenda," Business Rules Journal, Vol. 21, No. 10 (Oct. 2020)
URL: http://www.brcommunity.com/a2020/c049.html

About our Contributor:


Silvie Spreeuwenberg, Founder / Director, LibRT

Silvie Spreeuwenberg has a background in artificial intelligence and is the co-founder and director of LibRT. With LibRT, she helps clients draft business rules in the most efficient and effective way possible. Her clients are characterized by a need for agility and excellence in executing their unique business strategy or policy. Silvie's experience has resulted in the development of tools and techniques to increase the quality of business rules. She writes, "We believe that one should focus on quality management of business rules to make full profit of the business rules approach." LibRT is located in the Netherlands; for more information visit www.silviespreeuwenberg.com & www.librt.com

