Business Rules Forum 2010: Business Rules Vendor Panel
Stephen Zisk, Senior Manager, Product Marketing, Pegasystems Inc.
Brett Stineman, Senior Product Marketing Manager, WebSphere Software, IBM
Garth Gehlbach, Senior Director, Product Management, FICO
Troy Foster, CTO, Innovations Software Technology Corporation
Rik Chomko, Chief Product Officer, InRule Technology
Stephen D. Hendrick Group VP, Application Development & Deployment Research, IDC
- From a business rules perspective, what is the most important issue today and what is your organization doing about it?
- When you say "business rule engine," could you please state which Zachman Row you mean?
- What is your view on when to use a business rule management system (or business rules engine) versus a CEP (Complex Event Processing) solution?
- What is the top reason why business rules projects fail?
- What are your product roadmaps that address simplifying the interface to allow business users to manage their own rules and achieve business agility directly?
- In the absence of a tool to treat business rules as something special, what are my alternatives to including the rules in my requirements documents and embedding rules in my code?
- Two questions: Are the vendors trying to standardize so that we can share? Can we achieve the needed flexibility now to have the system change as soon as the rules change?
- Do you have any plans to incorporate parallel processing into your 'what if' analysis capability?
Welcome & Introductions
[Stephen Hendrick] I'm Stephen Hendrick, and I'm your moderator today. We have here a very distinguished group of product and technology individuals from the leading business rule vendors, and I think if I were to add up all of the revenue (which I do fairly frequently in the business rules space) we're probably accounting here for a representation of about sixty-five or seventy percent of all the money that's in the business rules space. So, we certainly have a great collection of leading vendors that are very accomplished and will undoubtedly provide a lot of interesting insights into what's going on in the marketplace.
I'm going to start with one question for each of the panelists, after I introduce them. And then I'm going to open up the floor to questions from the audience. So, feel free to start thinking about what you want to ask because this is a unique opportunity that comes around once a year.
From left to right, facing the stage, we first have Stephen Zisk, who is Senior Manager of Product Marketing at Pegasystems — obviously, a great deal of influence over the Pega Rules product. Next to him we have Brett Stineman from IBM. Brett's from the ILOG acquisition, and he's a Senior Product Marketing Manager in the WebSphere brand. Next we have Garth Gehlbach; he's with FICO — oversees Blaze. Garth has been a long-standing product guy in the business rules space; he's Product Manager and Senior Director at FICO. We then have Troy Foster, from Innovations Software Technology — the product there is Visual Rules, and he's their CTO. And, finally, we have Rik Chomko from InRule Technology. Their product is InRule, and he's Chief Product Officer.
I can kick things off with an answer. I think probably the most important issue today is around the recognition of business rules as an enterprise asset within organizations. I think this Conference has done a very good job this year in helping to bring that awareness to a larger audience. The Business Rules Forum, over the years, had a dedicated core group of people who attended — they understood that value proposition and were very successfully using the technologies within their organizations. But business rules wasn't really getting as much exposure as other technology areas. I think the idea of bringing together a conference that looks at business improvement — whether it be through process improvement, using rules for better decisions, or better analytics to understand how the business can improve over time — is really key in communicating how these technologies can work together to really drive true improvements within the organization. For me that's a key issue.
One other thing I'd like to bring up is the idea of a balance between what you're trying to achieve today versus what organizations are trying to achieve in the long-term with rules. Some organizations are very focused on, "I have a project. I need to do it today. What's the easiest thing I can do to get that done?" Other organizations might say, "I need to document everything. I can't start until I do that." This is a second key issue: understanding what is the right balance of "What can I achieve today?" while at the same time thinking forward to "What are my governance requirements?" and "What are the right technologies to help me get through those governance requirements?"
I agree completely with what you're saying. But even before we get to that, what's happening with business rules right now, as far as I can see, is we are moving in maturity from it being a geek/early adopter kind of technology into being a mainstream technology for managing business. If we remember that we're talking about business rules, that gives a clue to what one of our biggest challenges is — involving the business in a way that allows them to understand what they can and cannot (or, better, should and should not) do with the business rules technology.
To give an example: A business rules engine would be a possible solution to almost any problem you could possibly imagine, but it might not necessarily be a better solution than an Excel spreadsheet for some particular kind of problem ... or a business process management system for another particular kind of problem. So, the trick is to understand that business rules represent one aspect of how a business operates and how it wants to manage performance. We need to make sure that when we're talking to the business people we have a means of having the vocabulary (as Ron Ross talked about earlier) and the diagrams and the business-friendly artifacts that allow the business people to see what's going on and to understand what the value of the business rules engine is.
Another of my favorite problems that we have to deal with today is convergence, which is the ability to bring together a group of closely-related concepts in business process management, business rules management, and analytics. Each of these three disciplines has a great deal to inform the other two and, frankly, I don't think you can have a successful business system without having at least some understanding of all three of those aspects of a business solution.
I'd like to add to what Steve's talking about. I agree with the convergence and that down the road we're going to see these technologies come together more from the standpoint of enterprise architecture as well. From a standpoint of making sure that the business needs of the enterprise are rendered through the IT assets, the only way to get that alignment effectively is to pursue a very heavy enterprise architecture approach to deal with IT. I think Steve's hinting at that, although he didn't explicitly say it.
Steve hit on two points I want to elaborate on. The first was this idea of aligning the right level of business control to the right set of business users. I think that's something that we've made some great progress on, as an industry. But I also think that it's an area where we can stand some further refinement. It's not just a one-size-fits-all for business users and one-size-fits-all for technical users. I think this [providing the right level of business control to the right users] is something that can deliver additional value.
The other thing I want to mention is, as was correctly pointed out, that it's not just about business rules. If you take a look at what one of our key missions should be, it's making business automation smarter and faster. To make it smarter is not just about automating decisions with business rules — it's including some of these other aspects: predictive analytics, simulation & optimization, etc. These are ideas, or technologies, that are helping to make those decisions that you are automating smarter and delivering greater value to you.
I'd like to add something to the topic of being fast, as Garth has mentioned. We're now fortunate to have tools that have matured to the level where we can get in and implement things quickly ... where we can speed up that process, from authoring all the way through to the development of rules. Certainly, in our practice, what we see as great value — tremendous value — to our clients is being able to go in with the rules-based approach, getting the logic out of their application and into a rules-based paradigm, and doing so in a manner that's very quick and effective for their organization.
To build on the Excel paradigm mentioned earlier, Excel is great for some things. But there are many problems out there that you can solve very rapidly by replacing some of those Excel-based solutions where you need the traceability/auditability ... where there's a need to meet those compliance standards that have been set in the organization. I think the industry as a whole has reached a level now where we're at the tipping point of having really great tools to address that.
I think we still have a challenge of delivering value. We're asking people to make some very big capital purchases. Every person out there, I'm sure, has to go back to their manager and say, "Yes, I want to buy this tool. I may not be able to drop people from my staff; I may not be able to do this faster. But I think I can do this more consistently and more reliably." We have to work harder to make that value more apparent to the purchasers out there.
Secondly, I agree with Garth that there's got to start to be more of that ability to layer in those abstractions. Maybe it's the developer who gets in there first and starts to lay down what that overall rule architecture is — create 'buckets' of rules for the flow in which your rules are going to execute. Then, you can layer in another interface to surface that functionality so that the business user (the business analyst) can come in and be responsible for dropping rules into those buckets. That lets them govern that aspect of the process (decision) that they're responsible for.
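The 'rule buckets' idea described above can be sketched in a few lines. This is an illustrative sketch only — the class and method names here are invented, not any vendor's actual API: a developer fixes the execution flow as an ordered set of named buckets, and an analyst later drops rules into those buckets without touching the flow.

```python
# Minimal sketch of a bucket-based rule flow. Names are illustrative.
from collections import OrderedDict

class RuleFlow:
    def __init__(self, *bucket_names):
        # The developer fixes the order in which buckets execute.
        self.buckets = OrderedDict((name, []) for name in bucket_names)

    def add_rule(self, bucket, rule):
        # A business analyst drops a rule (a callable) into a bucket.
        self.buckets[bucket].append(rule)

    def run(self, fact):
        # Execute every rule, bucket by bucket, in the fixed order.
        for rules in self.buckets.values():
            for rule in rules:
                rule(fact)
        return fact

# Developer defines the architecture; analyst fills in the rules.
flow = RuleFlow("validation", "pricing", "eligibility")
flow.add_rule("validation", lambda f: f.setdefault("valid", f["age"] >= 18))
flow.add_rule("pricing", lambda f: f.update(premium=500 if f["age"] > 60 else 300))

result = flow.run({"age": 45})
```

The point of the design is the separation of responsibility: changing a rule never requires changing the flow, and vice versa.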
I heard a couple of things that I want to comment on. First of all, Troy was talking about compliance. I'd submit to you that business rule management systems are probably the best way to deal with the governance issue because this is the only place where policies are made explicit. So, this is one of the real intrinsic values of business rules.
Then, Rik was talking about both value and rule architecture. From a value standpoint, it will be interesting to see what happens with 'cloud' coming along because cloud has, typically, different licensing models and cloud is new. So, we don't know exactly how it's going to shake out, but certainly the current companies/vendors that are in the cloud today have chosen to take a very disruptive approach to how they've wanted to license technology. I don't know if that will persist, but it will be interesting to see how, ultimately, different licensing models in the cloud shake out and whether or not that has an impact.
And, finally, rule architecture ... I would submit to you that, in the business rules space, for those organizations that have inferencing capabilities, inferencing becomes a very, very powerful tool ... not only to deal with architectural issues but also to ensure scalability of what's happening in the rules space. It's very hard to get that implicitly with any other kind of product.
Okay ... so, great opening! Let's now take questions from the audience. There are two microphones out here — feel free to step up and ask your question.
I'm not sure we could all hear exactly but I did hear the term 'business rule engines' so I think that's a good starting point. If you're thinking business rule engine you're catching only about half (or less) of the value proposition of what we are talking about today. Business rule engines have been around for a long time. They're an essential piece of automating decisions within operational systems.
So they're Zachman Row-4?
Let me try to go on with what I was thinking. The rule engine is just one component of an overall management system, and, yes, you need that piece as part of your automation of operational systems. But you also need the other piece, which is: How do I manage a body of potentially voluminous and frequently-evolving decision logic that I need to be able to govern over time? A rule engine helps you with one piece, but you need the rule management aspects so that you can keep track of what is changing over time. You need rule management for understanding, for sharing across the organization, for reuse of the decision logic in multiple places. You can then deploy to a rule engine. So, you need to think beyond just the rule engine, which is one technical component of an overall solution.
Right. So, rules management will probably take care of the business rules at Row-2 and the information management at Row-3. Would this call for two different rule management systems or one?
One of the difficulties with a lot of these kinds of architecture diagrams — and it doesn't matter whether you are dealing with a matrix-style diagram, like Zachman, or a layer diagram, like an SOA diagram that divides the world into sets of layers — is that they are a little too reductionist. The difficulty you have with these kinds of analogies is that they make assumptions about separability and it being a good idea to separate things. Really, they are more didactic; they are about how to explain the process and what's going on, rather than technology.
From my point of view, there's value in having separation of functionality and separation of understanding between rules and process, just like there's value in having separation between understanding business rules as a discipline (which is clearly a business value) and a business rules engine (or any other piece of technology that you want to have to execute that).
But that separation is artificial. You really have to look at the system as a whole. There's a repository; there's a business user interface; there's a means of searching rules, or of updating rules, or of deploying rules, or of auditing rules in the end. All of those different pieces have to work together. If you separate a person out into an arm here and a leg there, you're not going to have a person anymore. The same thing is true of a business rules system.
We look at it from these four perspectives: the authoring, the storage & management, the execution, and the integration. We can all argue about what is in this or that bucket but, for the most part, these are what we consider part of that business rule management system. I agree — it's hard to separate that into the Zachman boxes or cells.
There was a comment at a session last hour by Carole-Ann Matignon about pattern-based strategies that started down that route. The way that I look at event-handling is: You need to look at the collection of types of information and data (and so on) that you want to analyze in order to come to a decision (or to invoke a business rule, if you will). If that looks like a stream of data that's occurring over time — similar bits and pieces of data — and you have to correlate on the time axis among a collection of data then what you're dealing with is a complex event processing engine problem.
On the other hand, you may have what looks like a collection of multiple different kinds of pieces of information that are coming from potentially different sources ... something you want to collect together to form a unified data picture. If you need to be able to inference against that data — inference against that particular collection of information — then you're dealing with a rules engine.
I would add, by the way, that if you're dealing with a collection of data with persistence and presentation of the data across multiple different parties (even though it's the same collection of data), then you might be dealing with something else, called a business process management system. That gives you independent resolution of individual pieces of the picture.
So, you really have to start by looking at: What are the information elements that I'm trying to get, and what kinds of questions am I going to ask about them: Is it time-related? Is it inference-related? Is it work and process related? Then you can decide what kind of solution is going to be a best fit.
I think there are definite use cases that we're seeing that combine event processing and rules technologies. The way we describe it is "detect and decide." Event processing tends to be very good at detection of these streams of data that are coming in over time and, potentially, across multiple different systems that you're trying to look at in a holistic way.
But they tend not to be that strong on "What is the right decision output to make?" based on that detection. They're very good at generating alerts — maybe generating some very simple actions. I think CEP, or event processing in general, still tends to be back in the state where business rule engines were a number of years ago: a lot of very technical tooling; not a lot of governance or business user management capabilities. They are moving that way but, for now, tend to be very technically oriented.
We have several customers who, recently, have been looking at the combination of these technologies. One is a utility in the U.S. that is installing about two million smart meters across their electrical utility network. Each of those meters sends back data every 15 minutes. That customer is using event processing to take a look at that data, both from individual meters (as the data comes in over time) as well as across meters within a specific area or region, to determine if there's a problem that somebody needs to take a look at.
They did successfully implement that, but what they found was that they're almost too successful because their network operations people were getting inundated with alerts, some of which could actually be ignored. They found that they had to have somebody take a look at an alert and say, "Well, is this really something that I understand? Do I have to do anything with this? Do I need to get somebody in the field to take a look at this particular meter (or block)?"
Now they've purchased a business rules management system so that they can do a first-pass filtering of those event-based alerts. Based on the expertise of their operations folks, they can actually create some business rules ... business policies that the system can use to look at those alerts and say, "Okay, I can ignore this." Or perhaps they can decide that they need to go back to the event processing system and raise the level of monitoring that needs to take place. Or they might decide, "I need to hand this off to a person" because there really is value-add in having a person take a look at this particular kind of situation.
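The first-pass filtering pattern just described might look like the sketch below. The rule conditions, thresholds, and alert fields are invented for illustration — the real rules would come from the operations experts:

```python
# Hedged sketch: analyst-authored rules triage event-generated alerts.
# Field names and thresholds are hypothetical, for illustration only.

def triage(alert):
    """Return one of 'ignore', 'raise_monitoring', 'dispatch'."""
    # Rule 1: a transient single-meter glitch can be ignored.
    if alert["meters_affected"] == 1 and alert["duration_min"] < 15:
        return "ignore"
    # Rule 2: a small cluster warrants closer automated monitoring.
    if alert["meters_affected"] < 10:
        return "raise_monitoring"
    # Rule 3: anything larger needs a person in the loop.
    return "dispatch"

alerts = [
    {"meters_affected": 1, "duration_min": 5},
    {"meters_affected": 4, "duration_min": 30},
    {"meters_affected": 50, "duration_min": 30},
]
decisions = [triage(a) for a in alerts]
```

Because the rules are explicit and separate from the event-detection layer, the operations team can tighten or loosen the thresholds without touching the detection system.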
When you look at where CEP came from, it was mostly the capital market group and their ability to look at large streams of data and, as Brett pointed out, be able to detect that something was actually happening. What we see quite a bit, in reality, is that not all problems require this huge amount of capability to have that kind of throughput.
We have seen a number of our customers adopt us for what might be considered an event-processing problem — watching a stream of events, being able to correlate them, looking for the pattern that they're looking for, and then being able to take an action on it, all in the same set of technologies. We call it "decision event processing" because not only can we process the events we can also make decisions on those events that are streaming through.
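A minimal illustration of this "decision event processing" idea — correlating events within a time window and deciding on the match, not just alerting — might look as follows. The event shapes, window size, and the lock-account action are invented for the example:

```python
# Sketch: watch a stream, correlate matching events inside a time
# window, and take a decision when the pattern completes.
# Thresholds and event shapes are hypothetical.
from collections import deque

WINDOW_SEC = 60
THRESHOLD = 3  # matching events within the window trigger a decision

def process(stream):
    recent = deque()   # timestamps of matching events, oldest first
    actions = []
    for ts, kind in stream:
        if kind != "login_failed":
            continue
        recent.append(ts)
        # Drop events that fell out of the correlation window.
        while recent and ts - recent[0] > WINDOW_SEC:
            recent.popleft()
        if len(recent) >= THRESHOLD:
            # The decision itself, not merely an alert for a human.
            actions.append(("lock_account", ts))
            recent.clear()
    return actions

actions = process([(0, "login_failed"), (10, "login_ok"),
                   (20, "login_failed"), (45, "login_failed")])
```

The key difference from pure detection is the last step: the same engine that spots the pattern also executes the business decision.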
So, it's hard for me to say that you can draw the line at one point and say, "This problem is for a CEP vendor, and this other problem is where the business rule engine vendor picks up." I think those lines are starting to blur as all of us (the vendors) are starting to add more functionality into our tools to get more into handling those big event problems in a more efficient way.
I think the boundaries are fuzzy on both sides. You'll see some rule capability on the CEP side and you'll see some event handling on the rules side. I think you need to look at this from the standpoint of: What are the requirements of the use case that you're trying to attack? Then you need to apply the solutions that are appropriate. It's not a black and white situation where, if you've got anything that has to do with detection, you're going to have to have a CEP on the front end to accomplish that.
There are also issues of state. Typically, rules tend to run in a stateless type of situation, right? You have a lot of overhead when you bring statefulness into the picture. Can you implement a business rules management system to deal with that? Yes, and in some cases that can work very well. In other cases, you may not want that. So, there are architectural reasons, more than just pure throughput. It could have very low throughput but if you're looking over a long period of time ...
...you have to hold on to billions of pieces of information.
And your engine may or may not be able to deal with that.
That's where rule architecture comes into play.
And I would say it's not clear which way it should be, even when it's a very high throughput system. We have installations such as the Swiss stock exchange, which is running literally tens of millions of transactions a day through the system. This is a business rules system — focusing on anti-money laundering, market abuse (those types of things) — that is looking at each and every transaction that's flowing through the exchange. We don't have a complex event processing type system in front of that. So, it really depends on the situation and the business scenario.
It's interesting to note that there is an architectural aspect to CEP, but it's not an absolute requirement. That's because of all the overlap that we're talking about. You'll see so-called "pure" CEP vendors say, "You can't do problem X without a CEP engine." And, of course, we know that's not quite accurate.
I would add, though, this sense-and-respond interaction pattern is a very important one — one that's going to be driving a lot more activity in the industry. This idea of being able to identify a pattern and look for a match against that pattern is, of course, a very important thing to want to do. This is because it helps you anticipate what's going to go on a lot better than just a binary function of either "It's right, or it's wrong." This can help you understand that it's moving from right to wrong. And so, in fact, you can apply analytics to great effect — you are able to look at the goodness of fit of patterns that are occurring against reference patterns that are known. Business rules are very effective from the standpoint of operationalizing any kind of activities you want to take as a function of being able to sense and respond to these CEP patterns we're talking about.
So, why, specifically, do business rules projects fail? My short answer is that they get oversold on business benefits and people get mired in the technology, so you end up with somebody claiming that the business rules engine is going to do X and then getting lost in technical ruts whereby the rules engine does not actually deliver on whatever the promise was.
The fix for that ... you didn't ask for a fix but I would like to provide one, if I may.
I was going to ask each one of you to provide a fix, but a fix for somebody else. But, okay....
So, the fix for that is to have a very clear understanding — in both the business side and the IT side — of what the scope of the project is and what the measurement of the success of the project will be. That way you can architect it to make sure that it actually succeeds.
I don't know what you're talking about, about all these failures, personally. <laughter>
I would say that ignoring best practices is, to sum it up, the biggest issue around organizations not achieving the value they expected. With any technology, there's usually somebody who thinks that they can just install it and try to build something that will be great. Yet, with any technology, there is a level of understanding that is required as to what is the best way to implement and use that software.
So, a clear set of guidelines around best practices definitely helps our customers in terms of success. We have something called a "Quick Win Pilot." This is a way of scoping a very specific improvement to a process (or application or part of the business). It's time-boxed. Our Services people will come in and help the organization build out that rule application (that rule service) and show them the best way to do it, show them how to build a governance plan around it, and then hand it off. That gives the customer a way to understand what is the right way to implement the software and what is the right way to maintain it over time.
Let me try to keep this simple. The end goal is to be able to implement a decision-driven application. So, where does it fail?
It might fail in terms of just focusing in on the tool technology alone. At FICO, we're in the business of not only providing a decision management platform, but also we build applications themselves. In doing this, we not only offer completely decision-driven applications (turnkey, if you will) but also we have learned, through that process, what it takes to be successful in implementing decision-driven applications. We've baked that into our methodology. So, I think that some of the components that will lead you to success are: having a strong methodology that goes with the technology, having domain expertise, and having some experience in implementing these decision-driven applications.
For us, the emphasis is on avoiding analysis paralysis and getting an initial success in a definite-scope project. Our paradigm's a little bit different in that we're based on a business object model, rather than on inferencing, so we come at it from a slightly different tack. We feel that the best way to get success is to get the software working on a very specific scope and, like the other gentlemen are saying, to have your requirements locked down between IT and business, and then to move forward.
I like the idea of a prototype — we call ours "Get a Prototype into Rules." I also think it is really important to scope the effort. I can cite one customer who went in and pounded rule, after rule, after rule, after rule, after rule ... and then one day turned around and started shoving huge amounts of data at the system. They never took a time-out to worry about the scope of what they were trying to do or even to understand what the performance considerations were. We were able to help them understand the right way to do it, but they were definitely headed down that path of failure that (unfortunately) a number do end up on.
So, failing to take an early look at performance is a problem. This early look at performance is critical because you can't just go in and start to pound volumes of stuff in and expect that it's going to work without some best practices.
And "performance" in both senses of the word ... not just "How busy is my CPU?" but "Am I actually getting the results, in scope of the entire project?" rather than just the decision that I was expecting.
I'd like to go back to this idea of Enterprise Architecture to help summarize what the issues are and how to deal with them. In EA, you, of course, understand the business architecture, the information architecture, the solution architecture (which is the application), as well as the technology architecture. You have to have a plan to deal with all of that.
And you need to then intersect the enterprise architecture plan — either in the small or in the large — with the whole issue of life cycle: understand the requirements, have some criteria for success, and then be able to measure it operationally so you can either identify success or not. And, if not successful, then figure out what to do to fix it.
Ah, roadmaps — this could be a lo-o-o-ng answer. Wait, I have a PowerPoint presentation, just give me a second here. <laughter>
I think each of us can talk about business user participation in rule management in some detail. It would probably be helpful to come visit the various vendors and you can see exactly what each has to offer ... because I think each of us is going to be able to say we have some form of business user interface/environment/capability. It's an essential piece of what we're talking about, right?
We're talking about "business rules" so you definitely want to have the ability to get away from the traditional application programming paradigm, which is "Oh, I have a requirement" and so send an email (or document) to some IT person and hope that they build it right and eventually get back to you.
For our product — the WebSphere ILOG business rules management system offering — we have an interface called "Rule Team Server." It's a web-based environment; it works very seamlessly with our development environment, so you can take what's built out of the object model (which contains the vocabulary and the definitions of how you want to create rules) and put that into the Rule Team Server environment. From there you have business users, with different levels of permissions as to what they can see and do within that system. Those users can then actually maintain those rules over time.
FICO has done some innovation in this area and we think we're unique in allowing a web-based user interface that is wholly customizable. Each set of business users gets the most granular level of control — what's exposed to them, what they can do; what they can see.
I'm not sure that's unique.... but I'm glad you have it. <laughter>
There are a couple of other things to think about, if you're going to empower the business user. We've heard a little bit of it mentioned here. One thing is that you have to be able to have business users brought in on both the small, early projects — like the POC [proof-of-concept] project that you were talking about — and on the larger projects. So, the mechanisms for involving the business user have to scale with the project itself.
I agree that we need web interfaces — and, in fact, we have a complete web-only interface for both the business users and the technical users in a shared environment. But I think there are some other tools that are also needed. There need to be tools specifically geared toward collaboration, toward management and visibility of requirements and use cases, toward the vocabulary itself, and toward allowing business users to play around with the rules that they're working on. The classic cycle of "Okay, I've finished this rule and now I hand it off" and somebody else deploys it and somebody else tests it, and then two days later I get back a message that says "Your rule didn't work" sounds very much like the old Sixties problem of submitting my deck of 029 cards to the services god and hoping that my printout comes back to me the next day. You have to have built-in testing mechanisms, built-in what-if kinds of mechanisms — completeness, pointing out conflicts to business users — and you have to have all the controls that you were talking about for locking things down in very selective ways so that the business user can see exactly what his task is in this.
To add to what Steve is saying about the testing environment... Our product is Visual Rules, aptly named because it's a very visual environment, very intuitive. As part of the integrated testing tools, we've made it very easy for business users to create test cases and to really work with those test cases ... to see the actual execution statistics around the rules as they run those test cases within the environment. It's been made very easy for business use and yet it's still very powerful for the IT side as well. Over the years, it has been extended by adding the web interface. All these things have added up to improved functionality for the business side and added value for them to keep the empowerment tilted toward them as much as we can.
One of the things we did with our product (in the next version) is to extend out the authoring interface so that it is completely configurable by the user who is setting up the problem. You can get in there and really constrain things.
The permissioning has been there for quite some time as well as some of the other constraints that you can put around what users can do. But now they can completely reconfigure the authoring environment, if they want to, to make it a bit more accessible to the business user.
That's one area in our roadmap that we're working toward; another is our testing environment. We have always had a testing environment, where you can enter a bunch of data, hit a 'test' button, and see what the results of the rules look like. We've added in a much deeper visual tracing capability, so you can step through the rules and see what the state (or data) looks like at every step. We think that, as important as it is to allow users to enter rules, it's important for them to be able to test the rules and trace them and understand where they may be going wrong logically.
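The test-and-trace cycle described here — run the rules against sample data and capture the state after every step — can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's actual API; the rule format (name, condition, action) and the function names are invented for the example.

```python
# Hypothetical sketch: each rule is a (name, condition, action) triple;
# the trace records whether the rule fired and a snapshot of the data
# after each step, which is what a visual tracer would display.

def run_with_trace(rules, data):
    """Apply rules in order, snapshotting state after each one."""
    trace = []
    for name, condition, action in rules:
        fired = condition(data)
        if fired:
            action(data)
        trace.append((name, fired, dict(data)))  # copy = snapshot
    return trace

rules = [
    ("high-value", lambda d: d["amount"] > 1000,
                   lambda d: d.update(tier="gold")),
    ("discount",   lambda d: d.get("tier") == "gold",
                   lambda d: d.update(discount=0.1)),
]

trace = run_with_trace(rules, {"amount": 1500})
for name, fired, state in trace:
    print(name, fired, state)
```

Stepping through the trace shows exactly where a rule's logic went wrong: if "discount" never fires, the snapshot from the previous step tells you why.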
Finally, it is important to have someone who wants to understand the logic and take accountability for it, as a business analyst. In some engagements we have worked with groups who've said, "Well, we don't necessarily want to maintain the rules ourselves; we'd rather have Development still maintain them. We just want to be part of the process." So, visualizing and allowing people to report on the rules — having that transparent access to the rules — is another area we've been focusing on.
We have something called "Rule Solutions for Office," which is a way to extract rules (or rule sets) from the repository and put them into a rule-doc format that can be read by Microsoft Word or Excel. But you're not just in the Office tools. There's actually a full understanding of the underlying object model so that when you're writing or changing a rule within Word (for instance) it gives you guidance — tells you what the applicable vocabulary is, where there might be a problem, (etc.). Then you can bring that back into the repository, and it has a full understanding of what's changed. You can do the same thing with a decision table in Excel. For a business analyst this is a very useful feature and something that a lot of our customers like — especially somebody who is an occasional user of the rule management system, or someone who just needs to be involved in very specific types of rule maintenance.
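The decision-table idea mentioned here — rules laid out row by row, the way they would appear in a spreadsheet, with condition columns and an outcome column — can be sketched as follows. This is a hypothetical example with invented column names and thresholds, not the actual Excel round-trip format of any product.

```python
# Hypothetical decision table, one row per rule, as it might look after
# round-tripping through a spreadsheet: two condition columns (a score
# range) and one outcome column.

DECISION_TABLE = [
    # (min_score, max_score, outcome)
    (0,   579, "decline"),
    (580, 669, "manual review"),
    (670, 850, "approve"),
]

def decide(score):
    """Return the outcome of the first row whose range matches."""
    for lo, hi, outcome in DECISION_TABLE:
        if lo <= score <= hi:
            return outcome
    raise ValueError("score out of range")

print(decide(700))  # -> approve
```

Because the table is plain data rather than code, a business analyst can review or edit the rows in a spreadsheet while the evaluation logic stays fixed.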
Those kinds of pronouncements represent a little bit of confusion and a little bit of carving out. Let me see if I can address both the confusion and carving out.
The carving out piece is that we are, after all, business rules vendors and this is a business rules conference. So, there's a tendency to be absolutist about something that is actually a little more subtle than that. What they're really talking about in both of those cases — rules in code and rules in requirements — is separation of intent. The idea is that you need to be able to identify what a business rule is and to understand what the impact of that business rule is going to be. I would claim that that statement is true, whether you're doing a business rule on a piece of paper, or in Microsoft Word (or Excel), or in a business rule management system, or in a Cobol system. The point is that you have to be able to identify and manage that rule as an object.
The danger if you don't do that — and the reason that the comment tends to get absolute in that way — is that you will end up with a system where you're looking to make a change to a business rule and you can't find it ... or you find one instance of it but there are twenty-nine other instances of it scattered through the code.
So, the intent of those kinds of statements is to say: Gather all of this stuff in and be able to manage it, see it, view it, understand it, in order to know how to make a change to it in the future.
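The "manage the rule as an object" point above can be made concrete with a small sketch: instead of the rule's condition being buried in application code in twenty-nine places, each rule gets an identifier and a description and lives in one discoverable registry. This is a minimal illustration under invented names (the registry, the decorator, and rule "BR-042" are all hypothetical), not a description of any panelist's product.

```python
# Hypothetical sketch: rules registered as named, discoverable objects,
# so there is exactly one instance to find, audit, and change.

RULES = {}

def rule(rule_id, description):
    """Register a function as an identified business rule."""
    def register(fn):
        RULES[rule_id] = {"description": description, "fn": fn}
        return fn
    return register

@rule("BR-042", "Orders over $500 require manager approval")
def needs_approval(order):
    return order["total"] > 500

# Application code looks the rule up by id rather than re-coding it,
# so changing the threshold means changing one place.
print(RULES["BR-042"]["fn"]({"total": 750}))  # -> True
```

The same separation works whether the rule body is Python, a decision table, or an entry in a full rule management system; what matters is that the rule is identifiable and managed in one place.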
I don't think I'm going to say anything much more insightful than what Steve has just said. But I do have a question — was there a stone tablet involved when these pronouncements were made? <laughter>
Requirements gathering is definitely important. When you're trying to figure out what your vocabulary terms are, you want to be consistent. You don't want to end up with three different versions trying to identify the same thing — "customer," for example (or whatever in your domain is important). So, requirements gathering, obviously, is important and this goes back to failure, right? Good requirements gathering at the beginning will help you in good implementation down the road.
The first step toward implementation is identifying those rules — sounds a lot like an AA meeting ... the first of Twelve Steps to Business Rules: identifying, managing in a central location, and then working through implementation.
One of the things I'd like to add here is that we're very much locked into a world today where app-dev is very process centric. That's the way it was a long time ago; it sort of remains that way. The idea of using business rules as a core construct for app-dev is still relatively new from the standpoint of its penetration in the marketplace. But this whole notion of decision-centric application development — which is where a rules engine is the core of how you make decisions about what activities to perform — has a great deal of relevance and probably an increasing relevance when compared to where we've been. I think that's one of the reasons you hear statements here, as you noted, where there is a very intense focus and a real fervor behind what rules represent ... because it does represent something very, very powerful in the industry today.
From the standards standpoint, it's very difficult. Let's say you define your business object model (or whatever data structure you're running against) — certainly you could use one vendor's API and write a bridge to transport to another vendor's structure without much loss, from a structure standpoint. Even the rules can probably be transported.
Where it starts to get tricky is that most of the algorithms that execute the rules are likely to be different. I know that ours is different than probably everyone here on the panel. So, you're going to need a fair amount of testing once you do that transformation, to our engine or back to their engine, based on that representation and the way we are executing the rules (versus the way someone else might execute rules). That's just one problem.
I think it is possible to do that bridge, through a coding effort. We've looked at some of the standards that are out there, and it seems like there is enough variation between our tool and the other tools — and even the standards themselves — that, at this point, it is difficult for us to say whether we are going to support a particular standard (or not).
Somebody from IBM will be speaking on Thursday about the RIF standard (Rule Interchange Format). His name is Christian de Sainte Marie. I would encourage you to go to that talk. He will talk about that standard, which is a semantics standard — he'll talk about the pros and cons of it.
I'd say that with any standard the issue you have is the implementation; it's not black and white. For example, take something as simple as web services. Consider how each vendor has implemented web services. Even though it is a standard, the implementations are not identical.
I wasn't exactly sure what the second question was asking. It sounded like you are asking about the business rules value proposition — about flexibility. But I'll let somebody else take a stab at that.
One quick bit on standards and then I'll talk about flexibility. The older standard, JSR94 for rules engines, doesn't say a darn thing about the semantics of the rules themselves. It only talks about how a Java engine can communicate with the outside world and, essentially, establish a pipe and establish synchronization and make decisions (and so on). There are plenty of standards out there, but they don't really help with what you are asking about, which is moving things from one place to another. Given that we've all sort of agreed that business rules management systems are much more than just business rules engines, there would be a lot more lost, in moving from one engine to another, than just the rule itself. There's a whole ton of meta-data; there's a whole ton of visual and relationship information that doesn't necessarily translate that well when you're doing a standard.
For the other half of the question — the flexibility — my take on that is that Ron may have been misspeaking slightly. There is value in having business rules as a separate object. Whether the rules are in a separate environment or not is a separate issue. Having business rules as a separate object is intended to reduce the need for change in other parts of the system. You have to have enough flexibility to deal with things like model changes (adding, removing, or changing objects that are needed for decisions). But the intent of a business rules system is that you can hide a lot of the change inside the system and reduce the need for change in other parts of the application.
You're not necessarily "hiding" it; you are isolating it.
Isolating — yes. Hiding is a bad thing.
You're externalizing the rules in a business rules system that's meant to handle change.
It sounds like what you're talking about is what I mentioned earlier about decision management. It's not just about business rules and automating a decision. There's a broader perspective in terms of making better quality decisions ... getting more precision in those decisions. The technologies of predictive analytics and optimization and simulation — working with business rules management — come together to enable this better decisioning and form this complete decision management platform.
In the product line of FICO, do you have actually something like a parallel processing engine that can look at multiple scenarios? If I give it business conditions A, B, and C, can I run them in parallel?
When we start talking about analytics and rules, there are some definite synergies here. As to whether you can do some very specific things, I think you need to visit each vendor here and talk to them about that.
In general, what I would say is this: We've been talking about event processing; we've talked about analytics; we've talked about rules; we've talked about business process management. Now, I can go out and buy a Swiss Army Knife that has ten different tools in it, including a saw, but I'm not going to use that for my carpentry projects. And there's a reason for that — that's not what it's designed to do. So, you need to think very carefully, when you're looking at technologies. You want to look at the right set of technologies ... make sure that they work together. You don't necessarily want something that can do everything because you may end up, in the end, with something that does nothing.
One thing I'll add is that the intersection of rule processing, event-driven architecture, and analytics does have a lot of potential as we go forward. In my mind, I haven't seen that intersection come together all that formally yet, although I know the vendors are working on it. There is immense power from the standpoint of what that represents because you would have the ability to break apart activities and therefore pursue parallel processing in the way you've mentioned.
|[Stephen Hendrick] I think we've pretty much come to the close of our time period. I'd like to thank all the panelists for their participation today. Let's give them a round of applause. <applause>|
# # #