Predictive Analytics in a Business Rules Context
I am going to be speaking at a new event this month, Predictive Analytics World. This will be the first vendor-neutral event focused on the commercial deployment of predictive analytics; all the existing non-vendor events have a significant academic component. Given this, I thought I would spend this month's column discussing predictive analytics a little more and putting it into a business rules context.
One of the curious and complicating things about predictive analytics is the range of definitions given for it. Some use it to mean any kind of analytic model that can be executed to score or evaluate a particular transaction. Others will say that anything with a predictive element, whether an executable model or a report, is a predictive analytic. Still others use predictive analytics as a blanket term for various analytic techniques, only some of which are actually predictive!
I tend to think of analytic techniques in the context of business rules in one of two ways. First, there is data mining for rules: using analytic techniques to derive a set of rules from historical data, typically for segmentation. For example, you might analyze historical data about customers, use statistical techniques to identify segments of the customer population that behave similarly, and then generate the rules that assign a new customer to one of these segments, generally by comparing values in the customer's information with those generated by the data mining. Second, there is the use of analytic techniques to derive a predictive model that extends the information available to rules. In this case the end result is a model, a function or equation, that calculates a score for a customer or other object, where the score represents the likelihood that something is true about that customer or object. This score can then be used in rules as though it were a data element.
For instance, I might use a particular analytic technique to develop a model that predicts the likelihood that a customer will churn, and then have this churn score become an attribute of the customer that can be used whenever I write rules that should differ for high churn risk and low churn risk customers. I use predictive analytics to describe this second case.
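To make this concrete, here is a minimal sketch of the second pattern in Python. The model logic, attribute names, and the 0.5 threshold are all hypothetical, invented for illustration; a real churn score would come from a statistically derived model, not the hand-written stand-in below. The point is that the rule treats the score like any other data element.

```python
def churn_score(customer):
    """Hypothetical stand-in for a real predictive model: returns a
    0-to-1 likelihood that the customer will churn."""
    score = 0.1
    if customer["months_since_last_order"] > 6:
        score += 0.4
    if customer["support_complaints"] > 2:
        score += 0.3
    return min(score, 1.0)

def retention_rule(customer):
    """Business rule that uses the churn score as an attribute,
    just as it would use any other piece of customer data."""
    if churn_score(customer) >= 0.5:
        return "offer retention discount"
    return "standard treatment"
```

Swapping in a better churn model changes nothing about the rule: the rule still asks for one number and branches on it.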
There is a fair degree of variation in labels, however, because the same techniques and outputs can serve both purposes (regression techniques can be used both in segmentation and in prediction, for instance). Focusing on the purpose typically works better: data mining describes the analysis of data to understand what it is telling you, while predictive analytics describes turning that analysis into something that can predict a likelihood or propensity for a particular customer or object.
It is important to remember, however, that predictive analytic models are not decisions — they are just predictions. It is the combination of predictive analytic models with business rules that represents a decision. The business rules use the prediction as they would use any other data element. Because the models are predictive, however, the rules are now generating actions that reflect what is likely to be true in the future rather than only what is true at the time the rules execute.
Some examples should help to clarify.
In risk-based decisions, such as what credit line to offer someone or what price to quote for a given type of insurance, a key element is the prediction of likely risk. A typical analysis will develop a predictive scorecard in which different attributes of a customer contribute to a risk score; typically, the riskier the customer, the lower the score. Data mining will be used to develop a set of segmentation rules that group people by risk into various buckets or segments, and rules will determine the price or offer for people in each segment. Policies about which segments to target, and regulations about how different people may be treated, result in additional rules. A customer is scored to calculate the risk score, then segmented, and finally assigned a treatment based on the rules.
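That score-then-segment-then-treat flow can be sketched in a few lines of Python. The attributes, point values, segment thresholds, and rates below are all invented for illustration; a real scorecard's weights would be derived analytically from historical data.

```python
# Hypothetical additive scorecard: each attribute contributes points,
# and a lower total score means a riskier applicant.
SCORECARD = {
    "years_at_address": lambda v: 30 if v >= 5 else 10,
    "has_delinquency":  lambda v: 0 if v else 40,
    "income_band":      lambda v: {"low": 10, "mid": 25, "high": 40}[v],
}

def risk_score(applicant):
    """Predictive model: sum the points each attribute contributes."""
    return sum(points(applicant[attr]) for attr, points in SCORECARD.items())

def segment(score):
    """Segmentation rules of the kind data mining might derive."""
    if score >= 90:
        return "prime"
    if score >= 60:
        return "near-prime"
    return "subprime"

# Treatment rules per segment: this part is policy, not prediction.
PRICING = {"prime": 0.05, "near-prime": 0.09, "subprime": 0.15}

def offer_rate(applicant):
    return PRICING[segment(risk_score(applicant))]
```

Note the separation: the scorecard predicts, the segmentation buckets, and the pricing table is where business policy lives.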
A classic fraud scenario is to use a neural network, a kind of predictive analytic model, to calculate a score representing the likelihood that a particular transaction is outside the pattern of usual activity for the account. Rules are then created to decide how to act on a given transaction. These rules will use the fraud score as well as information about the customer and transaction to determine what to do. For instance, with credit card fraud the options might be to accept the card, accept the card but follow up with the customer to check, decline the card, ask the merchant to put the customer on the phone, or make an immediate call to the authorities. The action taken might depend on how new a customer is (newer customers might be declined more aggressively), how good a customer they are (better customers might justify taking more risk), what kind of transaction it is (certain kinds of transactions are much higher risk), and so on.
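The rule layer on top of such a fraud score might look like the following sketch. The thresholds, tenure cutoffs, and action strings are purely illustrative, not from any real deployment; the neural network itself is assumed to exist elsewhere and simply hand the rules a 0-to-1 score.

```python
def fraud_action(fraud_score, customer_tenure_months, amount):
    """Hypothetical action rules layered on a model's fraud score.
    The model predicts; these rules decide."""
    if fraud_score < 0.2:
        return "accept"
    if fraud_score < 0.5:
        # Better-established customers justify taking more risk.
        if customer_tenure_months >= 24 and amount < 500:
            return "accept and follow up with customer"
        return "ask merchant to put customer on the phone"
    if customer_tenure_months < 6:
        # Decline newer customers more aggressively.
        return "decline"
    return "decline and call authorities" if fraud_score > 0.9 else "decline"
```

The same score leads to different actions for different customers, which is exactly the point: the decision is the combination of the prediction and the rules.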
When a customer calls in to cancel a service most companies want to make some effort to retain them. While it is possible to write a set of rules that use base information about the customer and the products/services they have purchased to recommend a retention offer, predictive analytics can improve the effectiveness of these rules. Predictive analytic models can be built to predict how profitable the customer is likely to be in the future and to predict which offers are most likely to be successful. The rules can then select from offers that are more likely to be successful and that have an acceptable cost, given the likely profitability of the customer.
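A sketch of that retention logic, assuming hypothetical offers that each carry a cost and a model-predicted acceptance likelihood, and an invented policy of spending at most 20% of the customer's predicted future profit:

```python
def pick_retention_offer(offers, predicted_future_profit):
    """Rule: only consider offers whose cost the customer's likely
    future value justifies, then pick the one most likely to succeed.
    Each offer is a dict with "name", "cost", and a model-supplied
    "acceptance_likelihood"; the 20% budget is an illustrative policy."""
    budget = predicted_future_profit * 0.2
    affordable = [o for o in offers if o["cost"] <= budget]
    if not affordable:
        return None  # no offer is economically justified
    return max(affordable, key=lambda o: o["acceptance_likelihood"])
```

Two predictive models feed this one rule: one predicting future profitability (setting the budget) and one predicting which offers will be accepted (driving the final selection).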
There are obviously many other examples. Some things to remember:
- Models are not decisions, just predictions, so you will need business rules too.
- While some of your business rules or constraints can and should be built into the models, others work better as rules that act on the result of the model.
- Adaptive control (the constant testing of current approaches against potentially new and better ones) is important in determining how to use models to best effect.
- Models age — they get less effective over time — so tuning and re-tuning matters.
- Time to deployment matters for the same reason.
# # #