Advice Without Explanation Is Not Very Intelligent
The weekend edition of my local newspaper writes that the sommelier in restaurants will be replaced by an intelligent algorithm. On the one hand, the article was meant to celebrate the progress we are making in computer science; on the other hand, it was meant to warn the workforce of a jobless future. The renewed interest in AI and related technology is making its way to the general public.
The driving force is the major software firms selling us intelligence — IBM's Watson, Microsoft's Cortana, and Google's DeepMind. Knowing I have a background in AI, friends, colleagues, and customers have asked me about these intelligent modules:
- Are they a rule engine?
- How intelligent are they?
- Will they replace our rules?
Given the countless variety of wines, the infinite number of recipes, and a wine reservoir that grows every year, it is practically impossible to create a set of rules that gives good wine advice in the general case. This answers the first question: the new AI modules are NOT a simple rule engine.
The new AI technologies use a combination of techniques. A large dataset of advice from recognized sommeliers is used to train a (neural) network. Heuristics or rules handle the clear-cut cases or prepare the dataset. And a learning process may fine-tune the results by adjusting the weights in the network.
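To make this combination concrete, here is a minimal sketch of a hybrid advisor: a rule fires for a clear-cut case, and a model trained on (made-up) sommelier-labelled data covers the rest. All feature names, data points, and the `recommend` function are illustrative assumptions, not any vendor's actual method.

```python
# Hypothetical sketch: rules for clear-cut cases, a trained model otherwise.
# Each dish is encoded as (richness, acidity, sweetness); the labels stand
# in for sommelier advice. All data here is invented for illustration.
from sklearn.neighbors import KNeighborsClassifier

DISHES = [
    ((0.9, 0.2, 0.1), "bold red"),     # e.g. braised beef
    ((0.8, 0.3, 0.1), "bold red"),
    ((0.2, 0.8, 0.0), "crisp white"),  # e.g. oysters
    ((0.3, 0.7, 0.1), "crisp white"),
    ((0.6, 0.3, 0.9), "sweet white"),  # e.g. foie gras
    ((0.5, 0.4, 0.8), "sweet white"),
]

def recommend(features):
    """A rule handles the clear-cut case; a trained model covers the rest."""
    richness, acidity, sweetness = features
    if sweetness > 0.7:
        # Heuristic rule: very sweet dishes get a sweet wine
        # (the classic Sauternes-with-foie-gras pairing).
        return "sweet white"
    # Fall back to a model trained on the sommelier-labelled examples:
    X = [f for f, _ in DISHES]
    y = [label for _, label in DISHES]
    model = KNeighborsClassifier(n_neighbors=3).fit(X, y)
    return model.predict([features])[0]

print(recommend((0.6, 0.3, 0.9)))    # rule fires: sweet white
print(recommend((0.85, 0.25, 0.1)))  # model decides: bold red
```

The point of the sketch is the division of labour: the rule is transparent, while the model's vote over nearest neighbours is already harder to explain to a diner.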
The result is probably reasonable wine advice in most cases. You will think: yes, indeed, a good idea to drink a Sauternes with foie gras. But what if the advice does not ring a bell and, in fact, you find it very odd? The bot's advice may be good, but you want to understand why.
A sommelier will enthusiastically try to convince you of his recommendation and may also suggest an alternative based on hints you gave during the selection process.
In the case of the sommelier-bot, we don't know whether the odd advice is a flaw in the algorithm or a bug in the software. Even if the wine turns out to match the dish, you would not feel confident in the advice, because you don't know the reasoning and there is no proof that the advice was the best, or at least reasonable. Bottom line: advice without explanation is not very intelligent.
The intelligent modules we are discussing are a black box for the end user. Opening that black box is an option: methods based on entropy or covariance analysis can generate decision trees from trained statistical models, but you need the supplier's help to open the black box.
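One common way to open such a black box is a surrogate model: fit an interpretable decision tree not to the original data but to the black box's own predictions, and then read the tree's rules. The sketch below assumes a toy neural network as the black box and invented feature names ("richness", "acidity"); it illustrates the technique, not any specific vendor's tooling.

```python
# Hedged sketch: approximate a black-box model with a decision tree
# (a surrogate) so that its behaviour can be inspected as explicit rules.
# Data and feature names are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((300, 2))                 # two made-up dish features
y = (X[:, 0] + X[:, 1] > 1).astype(int)  # ground truth the black box learns

# The "black box": a small neural network.
black_box = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=0).fit(X, y)

# Fit an interpretable tree to the black box's *predictions*, not to y.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Human-readable if/else rules approximating the network's behaviour:
print(export_text(surrogate, feature_names=["richness", "acidity"]))
```

The printed tree is a set of threshold rules a person can audit; how faithfully it mirrors the network (its "fidelity") can be measured by scoring the surrogate against the black box's predictions. This is exactly the kind of extraction you would typically need the supplier's cooperation to run on a commercial module.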
That being said, it is ill-advised to replace rule-based systems with intelligent modules for tasks that involve legal obligations, equality of rights, or decisions that need to be explained.
We answered the questions and concluded that intelligent modules are not a rule engine (although they may use rules), are not very intelligent, and will not replace our rules. I hope these observations help you see the value of intelligent modules and use them wisely in situations where rules fall short.