BRForum 2007 Vendor Panel: Rule Engines ~ Hype vs. Reality
As Presented at the 2007 International Business Rules Forum
Bill Waid, Vice President, Fair Isaac
Art Tortolero, President, Innovations Software Technology
Russell Keziere, Senior Director, Pegasystems
David Straus, Senior Vice President, Corticon
Rik Chomko, Chief Product Officer, InRule Technology
Bruno Trimouille, Director of Product Development, ILOG
Stephen D. Hendrick, Group Vice President, Application Development & Deployment Research, IDC
- Do you see 'referential rule integrity' as a problem?
- Is there still a boundary between the tool's representation of 'rule' versus the business user's?
- Modularization to manage the number and complexity of rules?
- What are the pros and cons of the 'compiled code' approach versus a more 'interpreted' approach?
- Is there a business advantage to 'rules' not being tied to a particular vendor's tools and approach?
- After consumer lending and insurance, where do you see wider use of business rules?
- Will your rule engines beat my hand-coding?
- Traceability — how do you maintain the linkage of the business rules in your tool to their source documents and other business artifacts?
- Concluding Remarks: How are you going to drive success in the business rules market?
Steve Hendrick: We have quite an esteemed set of panelists here. I'll start off with quick introductions of each one, from left to right:
So you see that we have a very good and diverse cast of characters, representing quite a few of the leading vendors in the whole business rules management system space.
The format for today — I'm going to start off with one or maybe two questions for the panelists, just to get them going and to give all of you a chance to listen to some of their perspectives. Then, we're going to very quickly shift over to questions from the audience. So, during this first ten or fifteen minutes (or so) be thinking up questions that you want to ask these guys.
As far as ground rules for the participants, there are only a couple: no physical contact; no outrageous manipulation of the truth; no filibustering. And I will pull rank, as required, if someone drags on too long.
Steve Hendrick: With that, let's get going with some questions. I'm going to start off with one of my favorite issues, which is what I call 'referential rule integrity'. The reason I'm going to start here is that one of the problems in business rules management systems is that, given the ambiguity that creeps into the rule base over time, it's very easy to have an incomplete rule base. The potential then exists to have a business rule engine deliver incoherent results.
This is a problem that hasn't been measured at this point. Most of the rule vendors that I've talked to seem to suggest that it is a problem — it does exist — so it shouldn't be taken lightly. But, at the same time, the vendor community hasn't necessarily been quick to respond in terms of dealing with the problem.
So the first question to start is: Is this 'referential rule integrity' seen as a problem? And, if so, does it matter? And if it does matter, what are you doing about it?
I'll give about two minutes each to go through our panel. Bill, why don't we start with you.
Very good. It is actually a problem, and we find that — as you move towards the propagation of business rules into the hands of the business owner and then upstream — it becomes an even bigger problem; you begin to lose control of the development process. There are a couple of things that we are doing about this.
One is that we have created an analysis package, as part of our repository, to detect incomplete rule definitions and conflicting rule definitions; it pre-detects where you could run into these potential problems. These become part of your life cycle management and the release cycle of your rules. The second key element is (post-execution) detecting where these things actually occurred in production.
In summary, there's monitoring the rules and the gathering of the data and reporting on it, to find out where you could potentially run into these kinds of problems.
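The pre-execution analysis described above can be pictured with a small sketch. Assuming each rule covers a numeric band (say, credit-score tiers), a gap in coverage means the rule base is incomplete, and an overlap means two rules can fire on the same input. Every name here is invented for illustration; this is not any vendor's actual tooling.

```java
// Hypothetical sketch: detecting gaps (incompleteness) and overlaps (conflicts)
// in a set of rules that each cover a numeric band, e.g. credit-score tiers.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class RuleBandAnalyzer {
    record Band(String ruleName, int low, int high) {} // inclusive low, exclusive high

    /** Returns human-readable findings: gaps mean incomplete coverage,
     *  overlaps mean two rules can fire on the same input. */
    static List<String> analyze(List<Band> bands, int domainLow, int domainHigh) {
        List<String> findings = new ArrayList<>();
        List<Band> sorted = new ArrayList<>(bands);
        sorted.sort(Comparator.comparingInt(b -> b.low));
        int covered = domainLow;
        for (Band b : sorted) {
            if (b.low > covered) {
                findings.add("GAP: no rule covers [" + covered + ", " + b.low + ")");
            } else if (b.low < covered) {
                findings.add("OVERLAP: " + b.ruleName + " overlaps a prior rule on ["
                             + b.low + ", " + Math.min(b.high, covered) + ")");
            }
            covered = Math.max(covered, b.high);
        }
        if (covered < domainHigh) {
            findings.add("GAP: no rule covers [" + covered + ", " + domainHigh + ")");
        }
        return findings;
    }

    public static void main(String[] args) {
        // "superprime" overlaps "prime" on [720, 740), and [850, 900) is uncovered.
        analyze(List.of(new Band("subprime", 300, 620),
                        new Band("prime", 620, 740),
                        new Band("superprime", 720, 850)),
                300, 900)
            .forEach(System.out::println);
    }
}
```

Real products analyze multi-attribute condition sets rather than a single numeric band, but the principle — exhaustively checking the condition space for holes and collisions before release — is the same.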
We address this in two different ways: logically, we have a methodology that captures referential integrity in a crisp, enterprise architecture, object-based way where you can encapsulate the different concerns so that you can, in effect, divide and conquer, letting you focus on the particular areas where you need to establish your consistency, completeness, and coherence.
As far as the rule validation, itself, we provide test facilities including a total quality management system that graphically simulates and granularly captures your rules’ test data, reference data, and the actual result data. You are able to run regression tests outside the tool as a way of verifying the rules. Also, we provide version control utilities that allow you to see different versions of rules and see what their differences are — again, highlighted graphically.
Pega's challenge is even greater because of our ability to extend rules to the presentation layer — to the UI, to the customer interaction layer, and into business process itself, the workflow. And so, our approach to testing and assuring the completeness — of not just the rules themselves but the entire composite application you'll be building on the platform — is quite thorough.
Basically, our approach is to give you a pre-flight analysis of an application — not just on collision and completion of rules, but also on how healthy are your connections with the different legacy backend systems you're integrating to — to be able to display graphically the relationships of rules and to quickly find the rule problems, so you can click on a hyperlink and drive down to the rule that is causing a conflict.
Our rule approach is to introduce the notion of 'dynamic specialization', which is layers rather than file objects or rule sets that are physical file objects. This gives us the opportunity to create unique, context-appropriate specializations. We like to say, at Pega, that one person's rule conflict is another person's specialization. I have a bumper sticker that says, "IF happens!" We try to incorporate that agility and flexibility into allowing our customers to specialize different instances, or versions, of the rules for specific contexts.
From a Corticon perspective we absolutely perceive this as a problem — we see it consistently when people model their rules. As a matter of fact, to derive that huge value from using a business rules product you need to increase the probability of finding these problems. The beauty of a rules product is that you can have so many rules that you could never design them into a control structure — an IF structure or WHILE loop — you've increased the complexity of the decision ... and therefore increased the probability of these problems.
This is a core competency for Corticon. In fact, Corticon's founders founded the company on this problem, as one of their two core problems to address in making rules products usable in the market. We have a patent that's been issued on our capability to deal with these issues, so I think that people know that this is near, dear, and core to Corticon's perspective.
I would add an additional element (and maybe in a later question create a little controversy for the panelists): In reading Steve's paper on the topic (which I think is exceptional) he identified the issue of 'context', which is that the only way to identify conflicts, ambiguities, etc. for a given decision (domain of problem) is to have a context. Otherwise, any two rules that appear to conflict may actually not conflict. In other words, it may be 'appropriate' conflict ... because those two rules have really nothing to do with each other.
Going into a broad rulebase to do an analysis, in our viewpoint, is problematic if you can't very clearly understand the context in which those rules apply. I think that is a relevant point for you, as consumers of our products, to look at.
At InRule we look at the problem and think it's a serious problem, too. We have a couple of ways we address it. First, we provide some capability that allows you to be able to identify those areas where (perhaps) you have some incomplete condition sets, or where you've got duplicate conditions. You can run an analysis across your rules to see where those might exist.
Another side that I think is important (especially, as David mentioned) is the ability to provide context. You can go into our 'testing tool' to do regression testing ... to provide that context for the rules you're running and that you're going to be looking for. You can simulate the ability to check whether or not your rules did actually execute the way you'd hope they would. Being able to save those results off — store them and then pull them up later — is the second way that we've actually gone out and addressed this problem.
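The save-and-replay regression idea just described can be sketched minimally, under the assumption that a decision reduces to a simple predicate over its inputs. The `Applicant` type, the saved suite, and the eligibility rule are all invented stand-ins, not any vendor's API.

```java
// Hypothetical sketch of rule regression testing: store test cases (input plus
// expected outcome) alongside the rules, rerun them after every rule change,
// and flag any decision that drifted from the saved results.
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Predicate;

public class RuleRegression {
    record Applicant(int age, int income) {}

    /** A saved regression suite: applicant -> expected eligibility outcome. */
    static Map<Applicant, Boolean> savedSuite() {
        Map<Applicant, Boolean> suite = new LinkedHashMap<>();
        suite.put(new Applicant(17, 50_000), false); // minors ineligible
        suite.put(new Applicant(30, 50_000), true);
        suite.put(new Applicant(30, 10_000), false); // below income floor
        return suite;
    }

    /** Reruns the suite against the current rule; returns the failing cases. */
    static Map<Applicant, Boolean> regress(Predicate<Applicant> rule,
                                           Map<Applicant, Boolean> suite) {
        Map<Applicant, Boolean> failures = new LinkedHashMap<>();
        suite.forEach((in, expected) -> {
            if (rule.test(in) != expected) failures.put(in, expected);
        });
        return failures;
    }

    public static void main(String[] args) {
        Predicate<Applicant> good  = a -> a.age() >= 18 && a.income() >= 20_000;
        Predicate<Applicant> buggy = a -> a.income() >= 20_000; // age check dropped by mistake
        System.out.println("good rule failures:  " + regress(good, savedSuite()).size());  // 0
        System.out.println("buggy rule failures: " + regress(buggy, savedSuite()).size()); // 1
    }
}
```

The saved suite is exactly the "context" the panelists keep returning to: a careless edit that drops the age check passes any purely syntactic check but is caught immediately on replay.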
From our standpoint, it's a very similar story. It's definitely a problem; it's addressed with both out-of-the-box capability and methodology with best practices. From our standpoint, on the out-of-the-box front, there are a number of things that you can do: syntax checking, consistency checking, etc. We also have the simulation piece because (and I concur with the folks from InRule and Corticon here) to thoroughly test the rule you need to have context, so you need to simulate in a pre-production environment.
On the methodology side, there has been a lot of progress made. For instance, in the last implementation I was amazed to see some of our lenders in the financial services institution actually writing ten-fold, or even twenty-fold, more test cases than business rules.
Were this not a problem, lenders and financial services institutions would not spend so much time writing so many test cases when they are implementing the rules.
Okay, great. What I heard here was interesting. It suggests that a variety of approaches are being used to address this problem. The approaches, sometimes, are rooted in methodology; sometimes they are rooted in testing; sometimes they are rooted in static analysis of the rulebase. As David pointed out, when it comes to static analysis — depending on the architecture of the engine — it may be easier (or harder) to go ahead and do this referential integrity checking.
Okay, I think what I'd like to do now is to get the audience pulled into the discussion. We've got a great cross-section of vendors up here ... leaders in the rules space. So, let's hear some of your questions ... things you'd like to see them respond to.
[from the audience]: I wondered if there is, in your eyes, still a boundary between what the tool actually represents as a rule versus what the real business user is able to consume as the rule. And if you do see that as a problem, does your product have that as a priority to address?
That's a great question. I think that is the primary focus: the tools should be for the business user. That's what it is — it's a business rules management system, getting the business logic out of the code and giving it to the user so he can control, manage, model, define, and test it — by himself — with minimal dependence on the technical staff.
I would concur with that approach. But also, in our architecture, there is a single object model for both the rules and the process ... and that means that the visualization of the business rule is in a business-friendly context.
Our customers in insurance, financial services, healthcare ... they own the business logic and they can iterate the changes. So the business object model and the execution object model are one. That leads to much higher agility; it is what they call a 'model-driven environment' and I think that is the direction we need to go in, as an industry.
I'll be controversial here — particularly given our positioning — and not concur. I think the discussion of having business users own all the business rules is not the right way to position the value that our technologies can bring to the table. I think what we're about is abstracting out the representation of this logic at a level that allows both sides of this discussion — business and IT — to have a shared asset, which means that business users could manage the rules. But I will tell you, most of the business users that I've talked to — and I heard someone else discuss this at a presentation — they don't necessarily want to manage the rules; they want to create a business outcome. They want things to change ... quickly. They don't necessarily want to do it themselves.
Now, from the Corticon perspective, we believe the product we've produced meets that objective. It's fully comprehensive ... so all the logic can be represented at a level that both business and IT can understand, regardless of who does the authoring or the modeling.
I think it's also very important to talk about what percentage of the logic for a given decision can be expressed in a language, or in a form, that both sides can understand. We've heard some people say, "Oh, for that you'd go into Java." Why?!? It's still a business definition. Why should you have to go into Java?
I think understanding how much of the business problem can be represented in this way is a critical part of the discussion.
Java in our system is an assembly language; it's not a tool for business people.
That's exactly my point.
I tend to disagree with not having the value of what we do go to the business users. From the early inception of the original Prolog-/LISP-based rule engines the drive has always been to make it easier to maintain these systems. Early systems required such in-depth knowledge of the rules you couldn't make a change without being the original author.
Ultimately you do want to enable your business to have the agility to make the changes they need, in a timely manner. You have to abstract out going through a development phase ... which you would have to do with any code-driven approach. Ultimately that's what you end up with in a business rules engine, under the code.
I would agree with you that you cannot expect business users to syntactically code rules ... even in the form of a metaphor like a table or a tree. It's very hard for them to do that. But if you abstract it out — in their own domain, in how they view the business — without a syntax, it can be done.
The gains we've found in the maintenance of very large, enterprise-scale systems have been quite significant. So, not only is it necessary, it's actually becoming the norm.
Yes, I agree with that. Our direction has been to push the envelope, striving to give more power to the business user, and the latest move we've made is to deliver an out-of-the-box application, an application that enables them to be stakeholders.
Now, in one company a stakeholder is going to have read-only access; in another company it may be a stakeholder that does pricing, eligibility, or maybe scoring guideline management. It's the company, the practice, and the politics that really set the limit here.
Don't get me wrong. I'm not arguing against that point of view. What I am arguing for is that we don't try to exclude the information technology organization as a relevant part of solving this problem — we need both sides. Frankly, we have as many "business" users (if you want to put a general handle on this) as software developers, developing the rules for their organization. But the big value add comes when that business user sits down with that developer — are they looking at a shared representation of what's happening in the business? Can they both understand it and say, "That is right. This is wrong." ... regardless of who is typing on the keyboard?
I think one of the other things that's important, too, is that you provide the right rule constructs so that the user (whoever the 'user' might be ... be it the business side or the developer side) has the right rule constructs to best represent that particular condition statement, whether it's an IF/THEN statement, or a decision table, or even something more akin to a rule flow. Providing the right types of constructs so that users can easily understand and use them is, I think, an important aspect.
We've found that if you include the process and the user interface in the customer interaction layer, it's easy to get business involved, and they become part of the collaborative process to help push the solution forward ... because they can better visualize the output or the outcome. As David said, they can see what their customers — on the web or self-service portal — would be seeing. They can understand what the impact on operations will be of this business logic, or this business rule change ... because they can visualize it, from Pega's perspective, in a "process world."
To summarize — I heard two things. The first thing I heard was that context is key, in terms of trying to level the playing field for getting all this information reconciled properly. The second thing was that one of the reasons we're in this mess is because, in the modeling market, we have really three discrete activities going on. We have data modeling; we have UML process modeling; we have a relatively new market, business process modeling. The three are still highly independent in terms of how you approach them. So we don't have any integrated kind of way to put the stuff together, and we don't have any kind of levels of abstraction in place, on top of the modeling, to allow the business users and the IT users to come together around it. Hopefully, over time, this will get resolved. But today it's still an issue.
Steve Hendrick: How about some other questions from the audience?
[from the audience]: I have a question. As the complexity and the ever-increasing number of rules grow within an organization, do you see some sort of modularization occurring to manage these numbers?
I think of the analogy to the creation of decision services. Even if you talk about 'insurance' or 'loan underwriting', you know under that umbrella term you have what? You have data validation; you have eligibility; and so on. Similarly, we start to see a number of large corporations defining ... not rules, not rule sets, but units that they call 'decision services'. These are part of the ingredients to make a good SOA recipe. That's what we've seen out there — truly the emergence of decision services, which I think is the term adopted by most of the vendors today.
I would agree with Bruno. I think it is very important to create a business service repository that provides a collection of rules with a more tangible access layer.
And it's not just because you want to have one decision service for pricing, per se. This gives you a "unit of decision" that is something you can package and deploy across channels, across geo-areas.
I think that question ties into the earlier one because what you need is an advanced collaboration platform. Thereby, you can visibly see the touchpoints in a rule life cycle. As complexity grows, it is more important to easily determine the key questions of: "What is it that the business person does?" and "What is it that the technical person does?" A tool should be able to explicitly answer, "How do they collaborate?" Insofar as this is clear, you can manage complexity much more successfully.
Our customers are taking a very similar approach to the one I see occurring with 'business process'. When we observe people, they don't sit around in a room and randomly shout out rules at one another. They take a logical, top-down approach, and they say, "What is the decision we're trying to deal with?" And then they begin to decompose it.
The obvious lowest level of decomposition is business rules. But, before you get there, they tend to decompose a decision into sub-decisions ... into atomic units. Each of those is a logical grouping — a contextual grouping — of rules that can be orchestrated together to create complex decisions.
These 'decision units' can be combined in many forms. I was talking with someone about their organization's 'change of address' rules. These rules are a generic part of their call center/customer processing ... but they also use their 'change of address' rules in fraud situations as well. So here we've got an atomic decision that's being used not only independently as an atomic decision element but also in combination with another atomic decision to form a complex decision. I think this whole SOA perspective of creating 'atomic units of processing' fits perfectly in this space.
Another focus needs to be on the central repository, or catalog, of rules and providing ways in which you can 'tag' your atomic rules with different types of metadata. Then, at execution time, you can start to determine which ones apply for a given context. This is going to be another important focus for us as a vendor ... so that this type of capability is provided, out-of-the-box.
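The tagging idea can be sketched minimally, reusing the 'change of address' example from the discussion: each atomic rule carries metadata tags, and at execution time only the rules matching the current context are selected. The catalog, rule names, and tags below are all invented for illustration.

```java
// Hypothetical sketch: a rule catalog where atomic rules are tagged with
// metadata, and the applicable subset is selected by context at runtime.
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class TaggedRuleCatalog {
    record Rule(String name, Set<String> tags) {}

    static final List<Rule> CATALOG = List.of(
        new Rule("verify-new-address", Set.of("change-of-address", "call-center")),
        new Rule("flag-rapid-moves",   Set.of("change-of-address", "fraud")),
        new Rule("offer-refi",         Set.of("cross-sell", "call-center")));

    /** Selects the names of the rules tagged for the given context. */
    static List<String> applicable(String contextTag) {
        return CATALOG.stream()
                      .filter(r -> r.tags().contains(contextTag))
                      .map(Rule::name)
                      .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // The same atomic rule participates in two different complex decisions.
        System.out.println(applicable("change-of-address"));
        System.out.println(applicable("fraud"));
    }
}
```

The point of the sketch is the reuse David described: `flag-rapid-moves` surfaces both in the change-of-address decision and in the fraud decision without being rewritten for either.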
I also agree with Bruno in his summation of where the industry is moving; we see it moving there as well.
The other dimension, which was just brought up, is actually the asset itself. It's very early for the market space to be worrying about an asset across the enterprise and the reuse of that asset — single point of edit, authentication, access, etc. Where we do see that occurring today is within multiple products around one business function (for example, an 'underwriting function') with multiple products that are involved in underwriting having a set of rules that they want to reuse. We're starting to see that expand into multiple decision areas across the life cycle of our customers ... where they want to be able to access this asset — see it, reuse it, make derivatives of the asset — and not have to rewrite it or worry about it, other than its atomic form of 'rule'.
So while we're still a little bit early in the cycle, we are at the point where we're starting to push the boundaries of treating it truly as a separate asset ... very much like we do with data and databases today.
I definitely agree here. I don't think I've seen any customer doing it intentionally, on a holistic basis. No one is saying, "Let's sit down and let's try to find the services." It's more step-by-step, quid pro quo, "Hey, we have this product line; we want to add this other one. What exists out there?" So, it's step-by-step, and we're still quite early in the process.
[from the audience]: One of the recurring themes I've heard throughout the Conference has been the slowness in being able to deploy changes to rule sets into production environments and the IT change management that gets involved there. When I talk to the vendors, I hear some of them are going to a 'compiled code' approach, where they deploy Java code directly into application servers. And other vendors take a more interpreted approach, where they put their own rule formats into the engine. I am wondering if the panel feels that there are any differences — advantages or disadvantages to either approach, especially in terms of this production delay issue that everyone's talking about.
I think that's an excellent question. We strongly stand on the side of 'compile'. Why? Because we compile at design time — the minute a business user has completed his logic and presses the save button, Java code is generated, web services are generated, and HTML graphical documentation is generated ... all at compile time. What that buys you is that, at runtime, you have a Java-to-Java virtual machine path, which means your execution is as fast as possible because no interpretation is necessary.
Moreover, because you are working with Java native code already, the developers love it. They can do whatever they choose with the generated code: stored procedures, integration with other applications on any platforms, etc.... so it's much more flexible. In addition, we have the option of offering hot deployment — load/reload/unload — during production so that there is no interruption whatsoever whenever the operations staff allows these instant updates.
Our approach is dynamic late binding, so everything is at runtime. We have one central repository with, basically, an ability to make and roll out a global change without any physical manipulation of the file. The advantage to this is, of course, that you can also put in the controls to make sure that only the appropriate personnel make those changes. And there is no impediment because the rules, as they're refreshed in the cache, are making sure that you have dynamic late binding of the rule changes.
I'm not sure that we're focused on the right issue, in terms of the latency, because I think all of the vendors offer late binding, in essence, rule change to production ... hot deployment. Instantaneous!
I think the issue is the latency between the creation of the rule and the validation that that decision service is actually valid and reliable. If you have to author over here and then you have to hand off to someone to test over there, and they can find things that cause you to go back to the author ... and you work like this in an iterative life cycle — a classic waterfall — you have latency.
We're all going to late bind. We're all going to have different modes of deployment. The question is: When you make the change can you validate, in the shortest order, that all the logical conflicts are gone? ... that (as tested against data) you have the same result? ... that the result is appropriate? That's where I think we've seen the majority of latency, as opposed to latency in the deployment to whatever instantiation of engine a vendor has.
I'll add some more latency, which is communicating the change to the interface layer and the customer interaction layer. Unless I can see what that business logic change will do to, say, the prompt script that pops up on the CSR desktop, I have to be very careful. That's an advantage of bringing the customer interaction, the case management, the process back into the business rule platform environment.
I agree. All the vendors will have their own bits and bytes ... their own support for hot deployment. But at the end of the day, from a lot of the testimonials I've heard, a company will need both — the kind of deployment that can be scheduled twice a week ... three times a week, and then there's the type of deployment when they have to fix the system in an hour or two hours, in case they have noticed something quite unusual. These aspects of governance are definitely something you need to consider when you look at this deployment paradigm.
[from the audience]: An omnipresent feature of the business rules management sector is fragmentation, and, speaking of treating the rules as an asset, the dream of many customers is that they could specify a rule in terms that wouldn't be tied to one particular vendor's tools and particular approach. So, of course, there is some momentum towards standards. Let me try to make my question a little bit more personal, as it were. Do any of you feel that you can get a business advantage by doing your stuff in terms that the customer could easily transfer to another vendor's product? In other words, do you see any advantages to not locking your customers in?
Actually, with MOF and JBoss we did a prototype for the mortgage industry where the two engines were collaborating. So, we are trying to piggyback on some of the emerging standards, from W3C/RIF (Rule Interchange Format). At the end of the day, what do we do? Do we want to grow this small cat or not? Different companies make different choices. I think it's our responsibility as vendors to have this market move into the mainstream and to broaden the use of the technology at the same time. And this is one of the things that is happening right now. The first instantiation I know of is within the MISMO (Mortgage Industry Standards Maintenance Organization) group — the interchange of business rules between a lender and a broker, done with Mark [Proctor] from the JBoss organization.
Something I've considered as a particular problem is the fact that all the engines essentially execute rules differently. So if each has a different underlying algorithm, the ability to move rules from one environment to the next may be helpful to you, but at some point, if (in fact) the engines are different, you're going to have to retest all those rules to make sure they run the same way. So, until we align ourselves to one particular algorithm the transfer of rules from one engine to the next can be problematic and a lot of work for you. From the higher-level, design standpoint, if you design your rules knowing that you are targeting a particular engine, that might be the other way to look at the problem, back at the rule capture point.
One of the things we have been able to do is, at least, emit rules. In our case we can emit our rules not only to our own engine but also to the workflow engine that's part of Microsoft Windows Workflow Foundation. So we have made some steps, in that regard. But in terms of transferring rules from one vendor to the next ... the only other thing I can say is that we have a very flexible SDK, both for importing rules and exporting them.
From Pega's perspective, because some of our rule types go into the customer interface layer, into the case management and process world, we don't find support for this in the rule standards. But we have, routinely, used some of the same standards that have been discussed here to share production rules, and import and export in that environment. We believe that externalizing the assets and making them at least transparent is extremely important to our customers.
It's an interesting setup kind of question that you gave us. The answer for us is, "Yes, as long as it plays to our advantages." We're in business, you know. It's an interesting perspective, not well remembered (other than by people who have followed us for a number of years), but for the first two years of our company's existence, from our modeling environment we deployed rules into ILOG — that was our engine. We delivered ILOG as the engine that came along with the Corticon product set.
I think this really gets into where the value is added, and there's a lot of discussion about the value moving away from the engine — it's moving to the authoring, the creation, and the management ... and, in Russell's case, into a broader perspective of UI and process. So you get to a point where, unless you have a least common denominator in how the rules are specified, you're never going to get to a common syntax for defining rules. I think that's just a holy grail, and it's going to be very, very hard. I know we will have standards, but as always those standards will represent a least common denominator between multiple platforms. And people will extend to create more value.
It's a cycle I've been in for 30 years — from open systems, on. I think we'll move in a certain direction, but there will always be a proprietary flavor.
I do see three areas of fruitful integration on standards that would allow greater interoperability between tools and therefore greater power and flexibility for our clients. The first is between tools that focus on analytics (about 5 percent of the market) and those that focus on sequential processing (about 95 percent of the market). Second, between tools that focus on Java and those that focus on .Net. Third, between tools that do a better job of looking at requirements analysis at a higher level but do not, themselves, execute code and tools that have model-to-execution lifecycle capabilities. These three would be fruitful integrations.
I'm probably the only one here who would say that we do actually integrate across decision engines today. But it is just a small subset.
You mentioned analytics — that happens to be one of the key areas with PMML (Predictive Model Markup Language) ... import/export. That provides the capability for us to take certain types of decisions between products. Experian is not represented here today, but they actually take some of our rules and import them into their decision engine.
So there are viable business opportunities for doing just that. They actually are one-off today; they are very niche areas. The exchange of rules in an open forum ... I'm not sure that the market is really ready for that at this point in time.
And how mature is the database market? Anybody here migrated some stored procedures from Oracle to DB2 lately? <laughter> <Audience member responds> Yeah, but even if we did have it, you'd still have variations. SQL has been around for a long time. Having lived through the pain, it's not necessarily solved by having one standard.
I would add here that there has been a precedent in the industry, by some vendors, to try to build out hub and spoke architectures that will help address this kind of problem. However, from a standpoint of adoption — by vendors or by customers — it has been pretty slim.
Conceptually, it's something that can be addressed. The problem is that, as you can tell, there's not necessarily a great motivation on the part of the vendor community to go down this path ... unless there is some perception that they can 'rule the world' as a result.
|[from the audience]: One can argue that the consumer lending industry has led the world into business rules, and it appears that the insurance industry is poised for a widespread adoption. My question is: Do you agree with that assessment, and (besides those two) where do you see the major applications/businesses that really need to be improved through wider use of business rules?|
I definitely agree with that statement. One of the things we've heard throughout the Conference is that, with the performance and the capability that the platforms have today, we are moving very close to real-time decisioning at the point-of-sale. Even today, you really have the capability to move a lot of behind-the-scenes decisioning very close to the point-of-sale. And we can translate that into pricing, billing, opportunities to cross-sell.
So, it could be for a travel company ... for a transportation company ... etc. If you have attended a lot of sessions here, that's one of the benefits people are talking about now: instant pre-qualification, quoting of insurance policies, of course, and the equivalent cases in other markets would be pricing of travel packages, billing at FedEx, and so forth.
I would agree with the statement about insurance. One of the things that we've seen is that there are a lot of existing mainframes out there that have a set of rules ... and there's another set of rules, say, in your front-end agency system. Both are doing underwriting (or eligibility, etc.) ... and they're doing it inconsistently. There's no consistency between what's on the mainframe versus the rules that are in the front-end. So, being able to centralize that decision service to one point inside the organization and direct those two systems back to that decision service is one of the trends that we're starting to see with a lot of our customers.
I'd agree with Bruno, too. The other thing that we're seeing quite a bit is — from more of a generic standpoint — pricing ... the ability to do variable pricing. As customers come in and the point-of-sale hits, the business wants to be able to apply certain pricing rules very dynamically, so that the customer can get whatever is appropriate for them.
I think there's broad applicability in general. Wherever you make decisions, there's an opportunity for business rules.
Where we've seen the greatest adoption and movement, actually, is related to SOA. Where you have a set of industries with an older set of systems, which have driven them towards looking at re-architecting in order to move forward (like financial services, broad-based financial services, government, etc.), you'll find the fastest adopters of SOA. This is in contrast to (say) manufacturing, which is (for some organizations) still trying to figure out how to swallow the huge ERP investments made in the nineties.
So I think there's broad applicability ... but especially for those folks who've been on this SOA kick a little longer, since rules fit in so well from a decision service perspective.
I would certainly agree with what David is saying, and clearly a lion's share of Pega's business comes from financial services and insurance. But we also see a growing interest from government and healthcare in rules-driven CRM with process — to be able to reach out to consumers and members and constituents and citizens, particularly in my state (Stephen's state). Mitt Romney, before he left being governor, decided that everyone in Massachusetts should have healthcare benefits, so that young, healthy people should pay for old guys like Steve and I. <Steve: Speak for yourself!> <laughter> And Deval Patrick picked that up, and he laid down a mandate for the healthcare payers to say, "By this date, you've gotta be up; you've gotta be starting to take enrollments; you've gotta be able to handle the rules associated with who can take insurance, in what context." And our payor customers, like Blue Cross-Blue Shield of Massachusetts, went from a B-to-B business, to a B-to-C business, virtually overnight.
So, the rapid application development of a solution that could, basically, empower a call center to pick up the phone and deal with people of all ages, at all states of health, and all sorts of different circumstances — with the rules guiding that customer interaction — is the type of urgency we're seeing in people picking up on rules. It's a real growth area — government and healthcare — in our opinion. These are areas where rules are really playing a big role.
The interesting part about our business is that we tend to talk in terms of technology ... horizontal capabilities. We at Fair Isaac have actually built practices around our Blaze Advisor Rules Management Solution, in a broader, EDM message, "Buy vertical!" In fact, it has been our experience that there's been a mass adoption of this across multiple verticals. The hurdle to getting there is understanding the business problem you're trying to solve, and bridging that gap between the technology and the adoption.
We specifically have, of course, a key focus in financial services, but insurance is our fastest growing vertical. Outside of that, I echo: healthcare, government, ... even retail is starting to take off, from a sector perspective. The interesting part is that the adoptions by specific industries do have their specific problems. But it's more to do with the application of the technology than the technology itself ... the knowledge of how to do that.
I agree that everyone needs it, but specifically, you can look at requirements as three circles: one, you've got behavioral, such as use cases; two, you've got glossaries, in terms of semantic requirements; and then you have specific state requirements that pertain to the general contexts we talked about earlier. You can see those three circles as overlapping in a Venn Diagram and in the center you have the control policies that impact all three. The industries that need more control policies than any other industries are financials and insurance, which have the greatest growth.
|[from the audience]: I'd like to take this to a more technical level. Looking at the execution, my application's first two requirements are throughput and response-time. So, will your engines beat my hand-coding?|
We did a scalability test with IBM at the WebSphere Innovation Lab (in Waltham, MA) and we wanted to see what the theoretical limit was. We had 33,000 call center operators, who all fit (conveniently) into Wrigley Field; they all had laptops with Integrated Voice Response CTI [Computer Telephony Integration] headsets; they were taking calls from the entire greater population of Chicago. The test case, basically, fired up against 147 JVMs and tracked the process, the time, and the delay. The database server utilization maxed out at fifteen percent; our app server was about 45%; our screen response was .05 percent. Again, this is no dust in the room — no legacy COBOL systems in the backend; it was all pure ... like taking a race car out onto the salt flats. But it was an important test to do. What we learned from it was the importance of tuning and monitoring. Out of that test with IBM we developed autonomic services management, to do self-tuning and set SLAs with the services that we were integrating to, to continually monitor and tune performance.
Our legacy customers — who had grown with us through Assembler, then to PL/I, then to C++ — wanted assurance that this new-fangled "Java thing" would actually scale to enterprise load. They needed that assurance. So it wasn't even our technology — it was the J2EE environment that they were concerned with. Slowly, over time — through tests like this (and similar benchmark tests I know my colleagues here have done) — they got those assurances. They're growing in their confidence.
When you start looking at government and you're doing, for example, enrollment for Medicare Part D, you need to be able to help people ramp up very quickly, say, to take massive loads of enrollments. And I'm sure my colleagues have other stress tests for scalability that they can talk about.
We did a rating system for an insurance brokerage company. They had hard-coded the rules for (I believe) business automobile insurance ... rules and calculations. In this particular case, one execution, for a fleet of five hundred vehicles, was taking a minute to come back. When we threw that same case up against our engine, we were able to put it back to them in about ten seconds, or even less. That, to me, was a good indicator that, because we understand all the dependencies of all the different fields and all the different calculations and rules that are involved, we can take the most optimized path for the resolution of some particular value they're trying to get back.
I think that's one of the advantages that rules technology has, in general— we have a lot more meta-data about how that particular decision is structured ... so we can take a much more optimized path to bringing back a decision.
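The metadata advantage described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor's implementation; all field names, rules, and figures (vehicle class, driver age, the rate values) are invented. Because the engine knows which calculations depend on which fields, it resolves only the dependency path needed for the requested value, instead of recomputing everything:

```python
# Hypothetical sketch: a rule engine that knows the dependency graph of its
# calculations can evaluate only the path needed for the requested value.
# Each entry maps a derived field to (function, list of dependencies).
CALCS = {
    "base_rate":   (lambda f: 100.0 if f["vehicle_class"] == "truck" else 60.0,
                    ["vehicle_class"]),
    "risk_factor": (lambda f: 1.5 if f["driver_age"] < 25 else 1.0,
                    ["driver_age"]),
    "premium":     (lambda f: f["base_rate"] * f["risk_factor"],
                    ["base_rate", "risk_factor"]),
}

def resolve(field, facts, cache=None):
    """Compute `field`, recursively resolving only its dependencies."""
    cache = {} if cache is None else cache
    if field in facts:                     # a raw input fact — no work needed
        return facts[field]
    if field not in cache:
        fn, deps = CALCS[field]
        env = {d: resolve(d, facts, cache) for d in deps}
        cache[field] = fn(env)             # memoize so shared deps run once
    return cache[field]

facts = {"vehicle_class": "truck", "driver_age": 40}
print(resolve("premium", facts))           # 100.0
```

Asking for `premium` pulls in `base_rate` and `risk_factor` and nothing else; in a hand-coded system without this dependency metadata, the usual shortcut is to recompute every field on every request.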
I think that answer is a great one and is generally applicable to probably most of the folks here. We are all utilizing optimized techniques that I think most people who are programming may not be able to match, particularly when complexity invades the discussion. Where you do need to look at performance is on the engine side. That's where you're going to see the differences in performance and scalability ... in the way in which the rules engine processes.
The simple answer is this: Is it faster, or slower, to add a layer? In standard architecture, whenever you add a layer, you add latency. So it goes back to the earlier gentleman's question in terms of what your approach is during execution. Is it interpreted, or is it compile-time? If it is interpreted, then you are adding another layer and thus your rule execution has to be slower.
That's one answer. The other one is this: If you have a type of 'engine' that is a black-box, meaning the rules are less visible in the executions, then you have several performance problems. For example, if you are doing decision tables you have to fire the whole decision table in order to get one result set. Contrast that with doing rule trees, where you only fire that one decision rule tree that is required. Therefore, the execution is going to be far faster because you are processing less logic and you're only executing what is necessary.
There have been many studies done on this, and it's not a simple 'yes' or 'no' as to whether an engine is faster or slower. It's a function of how many rules you have in your search tree and how you prune it. (He's nodding his head; hopefully he agrees.) In other words, if your problem set is ten thousand rules and you want to execute three, a RETE engine will beat you, hands down, every time, right? But if you have straight billing code that you need to calculate, and to do that you have to run through all ten thousand rules, there's no way the RETE engine is going to beat that.
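The ten-thousand-rules point above can be made concrete with a toy sketch. This is not a real RETE implementation — a single dictionary index stands in, very crudely, for the discrimination network — and all rule shapes and names are invented. The idea is just that indexed matching examines only candidate rules, while a sequential scan examines every rule:

```python
# Toy illustration: indexed (RETE-style) matching vs. a sequential scan.
# A dict index is a crude stand-in for a Rete discrimination network.

def make_rules(n):
    # Each invented rule applies to exactly one 'product' value.
    return [{"product": f"p{i}", "fee": i} for i in range(n)]

def sequential_match(rules, fact):
    checked, fired = 0, []
    for rule in rules:                      # every rule is examined, in order
        checked += 1
        if rule["product"] == fact["product"]:
            fired.append(rule)
    return fired, checked

def indexed_match(index, fact):
    # Conditions were pre-indexed, so only candidate rules are examined.
    candidates = index.get(fact["product"], [])
    return candidates, len(candidates)

rules = make_rules(10_000)
index = {}
for rule in rules:
    index.setdefault(rule["product"], []).append(rule)

fact = {"product": "p42"}
_, seq_checked = sequential_match(rules, fact)   # examines 10,000 rules
_, idx_checked = indexed_match(index, fact)      # examines 1 candidate
print(seq_checked, idx_checked)
```

If the workload instead requires firing all ten thousand rules on every transaction, the index buys nothing and its bookkeeping is pure overhead, which is exactly the other half of the speaker's point.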
We deal with that this way: We don't really distinguish the execution mode. If you need an engine — if you need backward chaining or forward chaining — click a box and use it. If you want to use executable code, click a box. Go sequential ... go compiled sequential ... go to Java ... go to COBOL. We don't care. You match to what your need is. The engine should be flexible, to be based upon your individual needs. The bigger problem is this: How do you manage and deploy those rules? That's where the BRMS comes in.
Today it's getting to be less and less an issue with the engine. These engines are all running some massive volumes in production; you can stack us up and you'll see it. (Actually, you don't need to stack us up; this has already been done by independent parties.) You'll find that some of us are faster than others, depending upon the situation. But in the end, it doesn't really matter. Throughput's there. We're in production; we're running millions of records in an hour — all of us are.
I agree. All the vendors will have their own benchmark and their own showcase reference. But from a true practice standpoint, when there's a performance issue in an application and all the people are pointing their fingers at the rule engine, guess what! It's often the data layer, for instance, that's the main culprit.
Processing power is increasing — it's becoming cheap. The Java Virtual Machine is breaking new ground. The rule engines each have their own optimizations. What remains are a lot of issues on the data layer ... the latency of the data layer ... how you design your service. Does the service need data on the fly, or do you marshal all the data at the input? From a practice standpoint, we often try to home in on the data access layer when there is a performance problem.
I disagree with these gentlemen very strongly. Before I say why, let me state that we ran the same studies, both from an architecture point of view (at IBM Headquarters) and also the exact same tests doing derivatives — highly complex financial instruments. There's no such thing as "it's fast enough." When you're doing real-time transaction processing for trades you can never be fast enough. There are twenty thousand transactions per second, so speed is always a concern ... throughput is always a concern.
Okay, we're getting pretty close to the end so I want to get one more question from the audience.
|[from the audience]: One of the points I wanted to ask about is traceability. A number of sessions over the last two days have talked about the difficulty in relating the rules that are being pushed into the business rule engine back into the document, or any business artifact, that relates to the business rule. How is this managed through the course of the life of the business rules? It's not enough that you start off with the document, if the linkage is lost as you go through the process of managing and maintaining the application. Secondly, none of the tools is currently providing any connectivity to any of the artifacts that are external to the tool. How are you addressing this issue that most of your users are having at this point in time?|
We permit the storage of meta-data along with our rules. In fact, for a lot of our state, local, and federal government applications there needs to be a tie back to the actual law that was passed for the rule that is in production. The expansion of a rule is more than just executable code; it can be any definition around the business policy that you are instituting.
We're not an analyst package; we don't capture off of documentation and map that into business rules. But we do tie back into the original documentation, or the policies that the business uses, by reference in the rule definition itself. You can trace that, all the way through to execution time.
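The metadata tie-back just described can be sketched simply. This is an illustrative data shape, not any vendor's schema; the rule, the statute citation, and the repository URI are all invented. The point is that the reference to the source artifact travels with the rule record itself, so the linkage survives through deployment and can be walked at audit time:

```python
# Hypothetical sketch: a rule record that carries references to the
# business artifacts (statutes, policy documents) it implements.
from dataclasses import dataclass, field

@dataclass
class Rule:
    name: str
    expression: str                                   # illustrative logic only
    source_refs: list = field(default_factory=list)   # links back to artifacts

eligibility = Rule(
    name="Enrollment-Minimum-Age",
    expression="applicant.age >= 65",
    source_refs=[
        {"type": "statute",    "citation": "An invented statute citation"},
        {"type": "policy_doc", "uri": "repository://policies/enrollment-v3.doc"},
    ],
)

# At audit time, trace the deployed rule back to its sources:
for ref in eligibility.source_refs:
    print(ref["type"], ref.get("citation") or ref.get("uri"))
```

Because the references are ordinary data on the rule, a repository can report on them ("show every rule derived from this policy document") the same way it reports on the rules themselves.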
I definitely agree here. I concur with the meta-data approach. Another thing that's happening on our end is trying to break new ground by fitting directly into tools like Microsoft Word and, in fact, the overall Office Suite. That's definitely a path we're pursuing.
I agree here, as well. In our product the formal model of the rule and the informal representation are tightly bound.
With Pega the business object is the execution object itself. But having push-button documentation — being able to layer in a Sarbanes-Oxley compliance framework on top of the rule engine to do precise business-oriented reporting — is also very critically important. You need to be able to track and trace what the source change was to the business goal or the business initiative.
The correct answer to that, technically, is the "enterprise architecture approach." It's a methodology that's been around now for about five years. It's a way of mapping, in very concrete, clear blueprints, your touchpoints in the rule lifecycle management. It ranges from the very top (when you're at a strategy level), all the way to the very bottom. It asks: How do you capture all the different concerns? How do you tie them into documentation and all the other artifacts?
In addition to the logical component, there also is a physical component: If you can generate HTML graphical documentation such that everybody in the world with a browser can see and validate the rules and they can be traced back, via a link, to their source document, then this provides valuable traceability and quality control in your rule methodology.
I'll just say "meta-data."
|Steve Hendrick: Okay — I have one wrap-up question for the panel. I'm going to look for one to two sentence answers; run-on sentences not allowed. I'd like to ask this of each of the panelists: What is the most important thing your company is going to do to drive success in the business rules market? We'll start with Bill and just go down the line.|
That's actually pretty easy, but ... two sentences? That might be tough.
Business rules actually form the platform for enterprise decision management, which (to us) is the ability to automate, to prove, and to connect decisions in the enterprise.
For the front-end, there should be an enhanced graphical user interface providing logical testing and intuitive, graphical, easy-to-use tools for the business user. For the back-end, there should be robust, comprehensive, high-performance, scalable Java and web-services code generation.
The classic walking glossy; that was perfect.
I think, seriously, from Pega's perspective, our focus is on customer enablement. Our goal is to help empower our customers to reuse core components because, as you know, (still same sentence) the capability of Pega is to ascend into process and workflow, to extend into enterprise case management, extend into customer relationship and interaction management, and our path to success is making our customer successful, so that an existing customer will come back for more capability.
One, from a product point of view is: Embody truly what model-driven is, in everything we deliver, so that we come from the top into the IT environment. The second — and I believe we continue to prove this out with our customers — is to drive down the cost of creating and maintaining these very critical decision assets, to 2% of what it would normally cost using other methods.
For us, it's to continue to focus on five areas: authoring, storage, management, integration, and execution of rules — to focus on what got us here, which is really that we're built on and optimized for .NET. So we're going to continue to look to find ways in which we can reach deeper into the framework on the .NET side and provide better integration capabilities.
On the one side, we see more product innovations — for IT and business users, and also for the system administrators. The second point is education — helping people to get better education — because I think we need to do a better job; we've just released a BRM Resource Center, with free access to product tutorials; the product even comes with a six-month trial.
Okay! I would first like to thank the audience for a great series of very good questions. I'd also like to thank our panelists for a wonderful and diverse set of responses. <applause> Thank you all very much!
# # #