Business Rules Forum 2011: Business Rules Vendor Panel

Business Rules & Decisions Forum
As presented at the Business Rules & Decisions Forum

Panelists

Dr. Setrag Khoshafian Pegasystems Inc.
Tom Debevoise Bosch Software Innovations
Rik Chomko InRule Technology
Michael Pellegrini JBoss
Jean Pommier ILOG WebSphere, IBM

Moderator

James Taylor   Decision Management Solutions

Topics

  1. Increased Business-IT Collaboration
  2. Impact Visibility for the Non-Technician
  3. Transparency
  4. Managing Rules Migration
  5. Separate Support for Business Knowledge Development
  6. Versioning
  7. Legacy Modernization
  8. Why Another Repository
  9. Losing Focus
  10. Looking Ahead Twelve Months


Welcome & Introductions

[James Taylor]    Okay, we're going to get started.  Thank you for coming.  My name's James Taylor, and I'm going to be your host for this vendor panel today.  For those of you who don't know me, I've been in this space for a long time — coming to many of these Business Rules Forums over the years — and I'm the author of this new book "Decision Management Systems."

One of the things that comes up in Decision Management systems is the issue of "agility" — how do you build truly agile systems? — and so, we're going to kick off today's panel by talking about that with a number of folks from the vendors here at the show.  Let me introduce them — starting nearest me:  first, we have Setrag, who is the Chief Evangelist and VP of BPM Technology at Pega; next to him we have Tom, who is Director of Federal Sales at Bosch Innovations; then we have Rik, who is the Chief Product Officer of InRule; then we have Mike, who is the Senior Technical Marketing Manager at Red Hat; and at the end of the table we have Jean, who is a Distinguished Engineer from IBM, from the ILOG group.

We've seen a dramatic shift in the business climate in the last few years, and so we really need to build systems that are fundamentally more agile.  But what does that take?  How do we get business people engaged in this?  The idea of agility has been a strength of business rules and business rule systems for as long as they've been around — but what's changing? ... what's new? ... what's exciting?

I'm going to ask a few questions to begin with, and we'll go through the panel.  Then (hopefully) you in the audience will have other questions — either about business agility or about business user involvement ... or about anything else that you've come across already so far at the show.

So, with that I'm going to get started.


Increased Business-IT Collaboration
[James Taylor]   I'll start with Setrag and we'll go down the panel.  So, Setrag, what aspects do you see in business rules management systems that are really driving increased collaboration between business and IT people?  What's new and exciting?

Setrag

Of course, there is the platform side where you have more and more visibility — where business and IT can come together and collaborate with the business rules and the decisioning constructs.  This gives us a kind of lingua franca (common language) between business and IT.

Another thing we're seeing is a transformational aspect between business and IT.  Rather than having separate silos in which each of them works within their own organization, we're seeing new methodologies — approaches where business and IT are coming together in the same agile team, building solutions together.  That's a major trend we're seeing.

Tom

I would agree with what Setrag has said.  In addition, from our viewpoint, we see a lot of new, emerging metaphors — metaphors of process modeling, event modeling, and decision modeling ... new ways that they can work together. 

We're also seeing emerging patterns in certain types of applications — for instance, workflow applications, document-centric applications, and rule- or event-based applications.  We have about eleven of them identified in our area.  We're finding that organizations are becoming more multi-functional in these areas so that they can do not only rules modeling but also process modeling and event modeling.

Rik

I was at a conference last week where I saw James Whittaker speak — he's the lead tester for Google Chrome.  One of the things he said struck me as interesting:  "Ideas aren't worth much, but prototypes of ideas are worth a lot." 

That's one of the areas we're focused on — making it easy for the developers and the IT analysts to sit in the same room and start working on rules together, in a collaborative way, rather than having this big infrastructure that has to be there for them to get started ... to be able to write a rule, test it, write a rule, test it — and really work along that progression.

Mike

I'd like to expand on what Tom was talking about.  One of the things that we're taking a look at is the coming together of rules with business process, and also with things like complex event processing and event stream processing.  One of the things that we're looking to do is to unify that platform. 

Unifying that platform would allow customers — both on the business side and on the IT side — to solve real business problems, rather than just deal with the integration problems of how all these technologies come together.  We want to allow people to actually focus in on "What are we trying to solve?" — "How are we trying to solve it with regards to knowledge management?"  This would then let them get really deep within the technology by providing them that common runtime — that common platform — regardless of the various types of ways that they're modeling their decisions and their knowledge.

James

Okay — how about you wrapping it up, Jean?

Jean

Yes, I agree.  <laughter>

It's actually pretty amazing that ten years ago the discussion was more about features — which features we were offering ... things like, "We've got Office integration." or "We've got these web-based tools."  But now everybody's got some of each of these. 

Today, half of the equation is on the skills side.  How do we get more people to understand the benefits?  And that's both IT and business.  It's a battle to come out with the process management and rules management paradigms and then to get the organizations to adopt this agile programming approach.  That's still a giant leap for an organization.


Impact Visibility for the Non-Technician
[James Taylor]   All right.  We're going to continue with Jean on the next question.
     When I work with clients, one of the things I've noticed is that if you want to get non-technical folks to make rule changes, you really have to give them tools that let them see what impact that change is going to have on their business.  This is not a technical impact analysis — not which classes are changed (etc.) — but a very business-oriented impact analysis.
     So, Jean, let me start with you and then we'll go down the panel in the other direction.  What do you see changing in the way business rule systems work to make it easier for non-technical people to see what impact they're going to have?

Jean

That's a good question.  What we have seen over the past years is more rule types (rule artifacts) — the decision table, the language, the decision trees, the rule flows.  So, from a technical standpoint we have a lot of artifacts; now we have to teach people how to pick the right artifact at the right time. 

This is called "rule modeling" and it's really key.  Everybody talks about "data modeling" and "object modeling" but the rule modeling itself means picking the right artifact.  This is something that is very important when you want to align the business and IT.

Mike

We see this in two distinct ways.  One is not allowing the folks to make things that are going to have an odd impact on the system.  Part of that is trying to use things like decision tables ... evolving decision tables to a case where we can start to constrain values.  For example, we're not going to create a rule that says "the person is over the age of 200" — today that's not something that's going to be feasible.  So, the first thing is trying to control the amount of data that can be put into the rules.
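[To make the idea of constraining authored values concrete, here is a minimal sketch in Python. The function, field names, and the age bounds are all invented for illustration; no vendor's actual API looks like this.]

```python
# Sketch: constraining the values a rule author can enter into a decision
# table, in the spirit of the "age over 200" example. All names here are
# illustrative, not from any product.

AGE_CONSTRAINT = {"min": 0, "max": 130}  # assumed business-plausible bounds

def validate_condition(field, operator, value, constraints):
    """Return a list of authoring errors for one decision-table cell."""
    errors = []
    bounds = constraints.get(field)
    if bounds is None:
        return errors  # unconstrained field: accept anything
    if value < bounds["min"] or value > bounds["max"]:
        errors.append(
            f"{field} {operator} {value} is outside the allowed "
            f"range [{bounds['min']}, {bounds['max']}]")
    return errors

# A rule like "the person is over the age of 200" is rejected at
# authoring time, before it can ever reach the engine:
errors = validate_condition("age", ">", 200, {"age": AGE_CONSTRAINT})
```

The point of checking at authoring time rather than at execution time is exactly the one Mike makes: the bad rule never gets into the system at all.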

A second thing we're exploring (and doing some pretty neat research in) is on the subject of ontologies — being able to map ontologies back into the rules, from an impact analysis standpoint, so that we can have full visibility:  "If I do make this change, what's going to happen?"  "Where are these rules being leveraged?"  "Where are these processes being utilized?"  (and so on)

Rik

One of the things that we look at is how do we change that problem a bit ... to have people look at it and say, "How do we reduce the impact that the author can have?"  How do we provide ways to sandbox a user into certain modes of authoring so that they are really only addressing the parts that they really need to change?

Because all our tools are really flexible — you can do a lot with them — there's a need to reduce those variability points and to give people just those things that they really need to change in their business.  That will help to reduce the impact of the changes that they are making.

A second part that we often look at is some of the empirical testing that can be done for impact analysis.  For example, if you have ten thousand mortgage scenarios that you can run through, you can baseline that, make your change, and then run them again to see what the impact is on the outcomes of your scenarios.  This is another area that we think is a direction we want to head in.
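[The baseline-and-rerun idea Rik describes can be sketched in a few lines of Python. The rule functions, the credit-score cutoffs, and the scenario data below are invented purely to illustrate the comparison step.]

```python
# Sketch of empirical impact analysis: run a portfolio of scenarios through
# the current rules, run them again through the changed rules, and report
# which outcomes moved.

def rules_v1(scenario):
    # baseline: approve at a 640 credit score
    return "approve" if scenario["credit_score"] >= 640 else "refer"

def rules_v2(scenario):
    # proposed change: a stricter 680 cutoff
    return "approve" if scenario["credit_score"] >= 680 else "refer"

def impact_report(scenarios, baseline_rules, changed_rules):
    """Compare outcomes per scenario and collect the ones that changed."""
    changed = []
    for i, s in enumerate(scenarios):
        before, after = baseline_rules(s), changed_rules(s)
        if before != after:
            changed.append((i, before, after))
    return changed

scenarios = [{"credit_score": c} for c in (600, 650, 700, 660)]
diffs = impact_report(scenarios, rules_v1, rules_v2)
# scenarios 1 and 3 (scores 650 and 660) flip from "approve" to "refer"
```

Scaled up to ten thousand mortgage scenarios, the same diff gives a business reader a direct answer to "what did my change actually do?" without inspecting any rule internals.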

James

So, making that something the business can do for themselves, rather than a technical test bed?

Rik

Yes, right.

James

Tom?

Tom

I would say, "Ditto" — I agree.  To get on the soapbox a little bit:  When you consider the effort involved in creating a rule, almost as much effort goes to testing and verifying it.  So, test data management is very important — critical for visibility and agility of deploying your solutions into the environment. 

We have many large projects — in banks and credit risk rating — where there are huge decision models/rule models.  We need to be able to go in and make changes with very fine-grained, accurate controls at a very rapid pace.  That means we've really focused in on doing the test management that's critical to being able to manage very complex programs.

James

Setrag, wrap it up.

Setrag

Yes, I agree with what has been said.  And there's also this aspect:  If you look at the business — the non-technical users — they're very familiar with spreadsheets, where one cell depends on another cell (etc.).  What if you could provide them with something like that to give complete visibility of the dependencies between rules ... something that is very easy for them to understand, to do the testing, and to look at the impact?  We do that.

Another aspect of the non-technical understanding of the impact of the rules is visualizing the results of the execution of the rules and the impact they're going to have — simulating with what-if scenarios and then making changes to their decisions and strategies.  Do some what-ifs and then ask, "How can I impact my operationalized rule systems?"


Transparency
[James Taylor]   All right.  I'm going to ask one more question and then we'll go to the floor, so now would be a good time to start thinking about what your questions are.  I'm going to begin with Rik because he's sitting in the middle — otherwise, he'll feel like he's always third.
     So, Rik, one of the things I really emphasize in my book and in the work I do is this need for transparency — both design transparency ("Can I see how I built the thing?") and execution transparency ("Can I see how I decided, for a particular transaction?")
     What do you see going on that might increase that transparency?  How do you see people using that transparency?  What's new and exciting around that sense of "What can I do with this transparency, now that I have it?".

Rik

It's definitely at the point with a lot of rules technology where a user can go in and create a schema, create a bunch of rules, and then push those up to a service to execute them — all without developer involvement.  The challenge then becomes how to manage that from a security standpoint and from an IT perspective.  We need to be able to provide the right tooling from an operational side so that we (as a collective group) don't get ourselves into trouble.

But the neat thing about services done that way is that, because we have access to all the metadata, there are a lot of interesting things someone can now request of the decision service.  For example, if I want to I can say, "What are all the rules that make up this particular decision service?  Give them back to me as a Word document or an HTML report."

Even more exciting is that you can request an audit directly from that service.  After running the service — maybe I don't understand why it ran the way it did — I can request an audit trail and say, "Hey, give me the feedback on why it made these decisions." 

All this comes from us taking the time to capture the metadata and to set up the engines to work against that metadata with the data pumped up against them.  That's what I find exciting about transparency.
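[The audit trail Rik describes, an engine that can explain afterward why it decided the way it did, can be pictured with a toy engine like the following. The class, rule format, and discount rule are all hypothetical, not any specific product's API.]

```python
# Sketch: a decision engine that records, per request, which rules fired
# and against what facts, so a caller can ask for an explanation later.

class AuditingEngine:
    def __init__(self, rules):
        self.rules = rules        # list of (name, predicate, action) tuples
        self.audit_log = []       # one entry per decide() call

    def decide(self, facts):
        trace = []
        result = None
        for name, predicate, action in self.rules:
            if predicate(facts):
                result = action(facts)
                # capture which rule fired and a snapshot of the facts
                trace.append({"rule": name, "facts": dict(facts)})
        self.audit_log.append({"facts": facts, "result": result,
                               "fired": trace})
        return result

rules = [
    ("senior-discount", lambda f: f["age"] >= 65, lambda f: "10% off"),
]
engine = AuditingEngine(rules)
engine.decide({"age": 70})
# engine.audit_log[-1]["fired"] now explains why the decision was made
```

The same stored metadata that drives this trace is what makes the "give me all the rules in this decision service as a report" request possible.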

James

Tom, what about you?

Tom

I want to go back to the multiple metaphors idea.  If we find in our new engagements that they're overly focused on one metaphor — for example, doing everything completely in decision tables and not focusing on the rules side — then we find that you start to lose transparency.  So, there needs to be a balanced approach across your requirements — for your toolset you need to be using the right tool for the right job.

If you read through the Agile Manifesto, remember that its emphasis was on:  'conversation, development'.  But we don't have 'conversation, development'; we have 'conversation, model, development, test'.  In other words, we are moving away from the pure agile.  We have all these steps that were not in the original Manifesto itself.  So, the idea is that once the organization adopts the metaphors — whether it's decision graphs, decision tables, BPMN for process steps — those need to be incorporated into their development cycle.  That's what gives the real visibility.

James

Okay, Mike, you can go next.

Mike

When I hear about transparency, what I think about is this:  If we're going to defer our thinking to a decision service it should be fully transparent.  I shouldn't have to worry about what it's doing.  As things start to mature and we put more of our trust in these decision engines — and I'm talking about this from a consumer standpoint ("I'm a consumer of this decision service.") — then we should make them as transparent as possible, given the fact that we can have standardized terminology, standardized contracts of these things that relate to the business, knowing full well that on the backend we can have full auditing/full accountability for what's happening.  That way we can always back up a decision.  

Making things as transparent as possible is always going to be the primary goal.  But another part of that is, from a commoditization standpoint of rule engines, to make them widely applicable to a whole host of applications, not just to special projects.  That's important from a cost perspective; a lot of times rules engines get relegated to these "special" projects.  What we want to do is bring decisioning and external decisioning services to a whole host of applications that may not have considered this simply because of a cost factor.

James

Setrag, I'm going to make Jean go last.

Setrag

Okay.  Talking about transparency, it makes me think, "When is business logic not transparent?" 

One of the main pain issues that we deal with is legacy, where the business rules — the business logic — is ossified; it's not transparent.  So, increasingly we're seeing in the solutions the need for transparent, agile, dynamic cases, where not everything is predetermined.  This applies not only to what I've already mentioned but also to unified policies/procedures and unified decisioning with processes.

When you provide this transparent agility layer on top of your legacy systems you have the flexibility — dynamically — to define whether it's processes or business rules in this agility layer.  We find that very effective.

James

Good.  Okay, Jean, you get the last word for this question.

Jean

The first idea that comes to mind when you talk about transparency is a visual image.  And that makes me think of what we were talking about in the '80s for UI development — WYSIWYG ("What You See Is What You Get").  That's something that I feel is very important in this forum today because I see a lot of people doing business analysis over and over and over, and it's not really what you get at the end if you don't do the analysis.

Also, we put a lot of emphasis on getting the rule artifacts separated from the decision service.  That way they become auditable as objects, and they can run on anything from Windows to mainframes.  People associate 'mainframe' with legacy — and, yes, there's a lot of legacy code on the mainframe — but you can run modern things on the mainframe as well.

Then there's the recording of changes — there is a term that we have not used yet:  "governance" (rule governance).  People associate agile with chaos, but with rule governance in between we can do something agile yet controlled.

James

Enough governance: no less and no more.

All right — so, I'll take questions from the floor.  Some of you must have a question.  I'll take questions first from the people who don't work for vendors.  And since we don't seem to have a roaming mic I'll summarize the questions.


Managing Rules Migration
[from the audience]   Given rules' slightly blended perspective between being 'data' and being 'code', how do the tool vendors (and their tools) see the managing of migration — from development, to test, and so on?

James

We'll start with Mike.

Mike

In general, one of the things that we (as vendors) tend to do to make the technology more appealing is to promote that we can give a tool to anybody — whether they are in IT or business — and expect them to make a change and just push it right into production.  That type of thinking sounds nice — it sounds ideal — but it has significant ramifications, from an auditing standpoint, from a financial standpoint, from a security standpoint (and so on). 

In actuality, in the types of engagements where our technologies are being used for the full promotion of a set of rules, our decision services do follow the typical, traditional IT flow; they do go from development, to staging, to some type of QA environment, and then out to production.  And all along the way we are continuously testing.  So, I don't want to leave you thinking that that part changes all that much. 

Our tools are there to support the migration as it goes from place to place; we have our full regression testing along the way.  At the same time, from a versioning standpoint, we allow our developers to focus on the next version, while the current change is going through its gyrations.

So, in general, I don't see it as significantly different than what we would have for any other type of software project.  It's just the fact that we've externalized our logic, which makes it much more readily changeable — more quickly, more easily changed, without having to worry about procedural-type coding environments.

James

Jean, what would you say?

Jean

That's very true — as much as we push for agile, we know that people have their own SDLC and their own standards for versioning.  So, you need to consider the platform ... what versioning is supported? 

As for testing, I think each vendor has a way to do automated testing.  We call that "decision validation services"; we can compare scenarios and versions with KPIs.  So, there's nothing really all that different.  When you talk about "decision services" it's actually easier to test than if you talk about a BPM application where the user comes into play.

Also, we have something we call a 'decision warehouse'.  We keep track of anything that the engine decides.  But I see very few people actually using that as well as they could — for example, applying analytics on that to say, "Well, what is working?  What is not working?"  This is a loop between the decision made by the engine and the impact on the business, and how you bring that together — something that Jim discusses in his new book.  So, we have more instruments/tools than we had when we were doing C++ or COBOL or Java, but we aren't always taking advantage of them.
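[The feedback loop Jean describes, record every decision, then analyze the record, can be sketched very simply. The field names and the outcome values below are invented; a real decision warehouse would of course persist far richer data.]

```python
# Sketch of a toy 'decision warehouse': keep a record of every decision the
# engine makes, then apply simple analytics to ask "what is working?"

from collections import Counter

decision_warehouse = []  # every decision the engine makes lands here

def record_decision(inputs, outcome):
    decision_warehouse.append({"inputs": inputs, "outcome": outcome})

def outcome_distribution(warehouse):
    """Basic analytics: how often does each outcome occur?"""
    return Counter(d["outcome"] for d in warehouse)

record_decision({"score": 700}, "approve")
record_decision({"score": 540}, "decline")
record_decision({"score": 710}, "approve")
stats = outcome_distribution(decision_warehouse)
# stats -> Counter({'approve': 2, 'decline': 1})
```

Jean's observation is that the recording half is commonly built but the analytics half, closing the loop from decisions back to business impact, is the part few organizations actually exploit.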

James

So, okay, you guys.  Do you want to agree or disagree?

Rik

I think there are a lot of tools available to you because we capture so much more metadata about the intent of the logic.  I know one customer (quite a few customers, in fact) who really likes the ability to compare one rule set to another set of rules and see what the differences are.  When something rolls into production they know exactly the logic that changed.  That's a little bit easier than comparing a bunch of VB code (or C# or COBOL).

James

Right.

Tom

We have the same thing, but we do a visual comparison.

Also, different organizations have different needs — in particular, financial organizations make rule changes because of the implementation of new regulations (and so forth), and they know what time boundary these have to occur on, so some of our approaches include workflow to elevate the approval of rule set changes through management.  Regardless, we always maintain basic gatekeepers.  As I said earlier, test management is very important — so, when someone makes a change one of our gatekeepers makes certain that all of the approved test sets go through.

But this does vary depending on organizational needs.  We do have some organizations that, in certain circumstances, allow things to go straight through into production.  For that you need very robust version control with configuration management on the back end — this enables you to know where all relationships to that decision service are and to ensure that the change is being deployed to the right place.

Setrag

This is actually a very interesting question.  One of the things that we have seen is the need for a Center of Excellence.  A Center of Excellence has a maturity level; there are different models of "Center of Excellence." 

You need to start with an iterative methodology.  One of the things that we've done is build project management frameworks and test management frameworks within the tool itself so that you can create specialization layers — you can pilot them; you can deploy them.  It's a complete solution, with business processes, decisioning, etc. 

Then, as you mature in your methodology, you move to line-of-business Centers of Excellence and on to federated Centers of Excellence.  That is key to the success of this agility work for business and IT.

Tom

But don't you think that the idea of a rules-specific Center of Excellence is getting a bit dated?

Setrag

Rules-specific?  Yes.  Not business application-specific.

Tom

I agree with that.

Setrag

That's what I'm talking about.

Tom

It should be something like a "business modeling" Center of Excellence — it should incorporate both process and rules ... because with rules Centers of Excellence you get into methodology wars.  We've already been through this in Data Warehousing and Data Modeling.  So, here we are, umpteen years later, doing this again when we split out rules and process that way ... who gets to be the judge?

Setrag

Since you're asking, the focus we have is business applications.  Business applications not only contain processes; they contain integration to legacy systems; they contain UI; they contain case management. 

That is the Center of Excellence and methodology that I'm talking about.  And you're absolutely right — it cannot be just rules; it has to be unified and governed together.

James

I'm going to let Jean speak in a moment....  but first I have a somewhat contrarian view on Centers of Excellence.  This is one of the big debates I have on the analytics front with folks:  "Well, should you centralize all your resources to do analytics or to do process or to do rules?"  To that I always say:  Well, I think centralized Centers of Excellence are a consequence of broad success, and they are highly correlated with broad success.  But I don't think they cause broad success.

I think companies sometimes get over-focused by saying, "I'm going to create a Center of Excellence, and that will help me be more successful."  I think actually the reverse is often true — being more successful will force you to create a Center of Excellence ... it will motivate you to do so, rather than necessarily providing you with some magic lever.

Tom

In theory, analytics is a model of the real world, right?  Decision is what I do with the model.  So, there can be a fundamental conflict of interest between the model of the real world and its accuracy vs. the decisions that I make.  You want to have an objective analysis of the model of the real world — for example, when you're doing quant analysis for probability of default in mortgages.  Recall that we had these agencies who said that the CDOs were triple-A rated, and now they're not.  The point I'm making is that, yes, analytics is a somewhat different bird than modeling software in pictures.

James

That's an interesting point of view.  So, Jean, you had something you were dying to say.

Jean

When you talk about excellence, it's difficult to excel in everything.  Yes, the company has to have a global vision of why we do services this way, how we bring decision services and BPM together, and how we bridge to analytics.  There does need to be a body at this level. 

But as much as we bring decision services and BPM together it's still the typical case that, operationally, rules change more often than the process.  How often are you going to reorganize a cost center?  But the rules in that cost center ... they can change every day.  So, you want to have specialists for each of these two domains.

James

Right.  Another question from the floor?


Separate Support for Business Knowledge Development
[from the audience]   Would it help us to separate the tools that are useful for business knowledge development from business knowledge execution?

James

That's an interesting question.  So, I'm not going to say who should go first — just arm wrestle (or something).  <long pause>

Jean

We need a decision service to make this decision. <laughter>

Rik

Well, I think there's always an opportunity as a BRMS vendor to walk the chain up a little bit — to capture more of that knowledge — but I also think you can go too far.  There's a wall that you start to run into where it just doesn't make sense to capture a lot of that other, higher-level knowledge management with the execution tool.  As important as the knowledge capture is, we're focused on execution.  I think you can go a little too far and get caught up in that aspect and not quite provide the value proposition that your tooling is trying to offer.

Jean

I'm going to take an example from the process management side.  We have a platform that's called "Blueworks Live" where we brainstorm.  That's really where we address knowledge engineering/knowledge management.  We brainstorm; we capture many artifacts.  This quarter we are starting to get into that decision space as well.  And from there we will transfer this knowledge into the execution platform. 

With capturing decisions the trick is that you don't want to go too far without implementing.  As I mentioned earlier, if you do something like six months of knowledge engineering, guess what, it's going to be outdated when you finally get around to the implementation.  These are different facets of the platform; not everything is usable by everybody.

Tom

It's critical to have a snapshot of the existential directives behind the strategy that created your business model.  But that's not the objective of most of our tools.

James

All right.  Here's a next question.


Versioning
[from the audience]   What do you see happening in versioning?  What do you think a robust versioning system for a rules platform looks like?  What are you doing to improve the versioning of rules?

James

Setrag, why don't you start.

Setrag

For us, versioning is built in.  Everything you create, from generalized rule set layers to more specialized ones, has built-in versioning.  So, in our case you don't use a third-party versioning tool.  You can navigate the versions and you can audit who has changed what; you can create pilots and you can deploy.  The key point is that versioning of the rules is built into the tool itself.

Also, we have multi-dimensional organization of the rules:  temporal versions, hierarchical versions ... whatever organization you have, even versioning by things like your jurisdictions.  So, when you create a new rule as a version of an existing rule that meta-information is captured in the enterprise repository.  You can navigate it; you can govern it; you can deploy it.  We find it extremely important that the engine itself supports robust, multi-dimensional versioning.

Mike

I would say that from a versioning standpoint we are pretty robust.  If you have a look at what our customers call out as deficiencies across the products we have, versioning doesn't really come up as one of those key things that is causing people pain. 

We have a repository; we don't necessarily rely on an external source control management tool, but we can leverage those if that's something that is important to you.  We can version with specific version numbers — we can version based on when you decide to do things; we can version based on metadata (as Setrag was talking about); we can call out snapshots and move those snapshots through a testing phase (and so on). 

I don't really have a ton to add, but it's not something we see as a key problem.  Maybe it's the case that all these vendors here have found a way to solve that problem very elegantly.

Tom

Some of our banking customers in Europe have very detailed, very low-level requirements for fine-grained controls.  So, that's the versioning we're working on in our products.

However, at the more general level, the question for the organization becomes, "What is motivating a new version of the business rule?"  Is there a new version because you changed a data range or made some small change, or has the rule changed because the organization made a recognized change in their strategy for implementing some type of business practice?

In other words, there are two different things going on there — at the high level this is a new practice; at the lower, detailed level this is simply a correction or a minor adjustment.  You have to clarify that in your organization and then decide how you are going to deal with each from a governance standpoint.

Rik

I think we've spent a fair amount of time working through a number of features that support the versioning process.  So, without knowing the specific instance of what's frustrating you, I'd have to say that we've solved the problem.  <laughter>

Jean

Without the specifics, as well, our thought is to have this built into the decision center.  So, all the tools talk to the same decision center, where we implement the governance.  This includes things that are web-based and Office-based and Eclipse-based — everything.  And then, in the backend, if you want to do your own versioning we can move that out to your versioning system.

So, we have it both ways — built in and open — which is very important for the large, premier enterprise play.  You cannot be an enterprise solution if you come in with a point solution that doesn't fit.

The thing I would like to add is that from a technical standpoint you can do everything, but operationally you have to set up the governance and the project management.  And then you have to tie that to the KPIs — because for a decision service you want to have just a few elements to compare, one version to another.  You don't want to have to go through all the rules.  So, you need to have very business-centric KPIs that you can compare, version-to-version.  Without some planning you will find that you have quite a few versions once you are working at scale and keeping everything so that it can be referenced for ten years.

James

All right.  I'll take a question from over there.


Legacy Modernization
[from the audience]   What do you have to say for legacy modernization?  What do you have for rule extraction from legacy systems?

Jean

We encounter a lot of legacy code at IBM, both within IBM and outside.  We have a tool called Rational Asset Analyzer, which does scans.  It works for COBOL (assuming that's the code you're looking at). 

The general idea is that it's also a methodology approach.  We can scan the code and identify the elements that make up the business logic embedded within the code.  This gives us a way to extract those elements and import them into our tool so that they can be transformed into rules.

But if you do that, the thing that you don't get is the rule modeling — you don't think through "how would I have done that?"  Rule modeling wasn't done thirty years ago, so it has to happen now.

I have another point I want to make on that.  In working with our GBS organization on large projects I was very surprised — I had thought that these old 'black box' applications were mostly business logic.  But, no, it turns out it's largely UI.  So, when you are doing this kind of 'rule mining' you have to work through all the data transformation, all the things that are not really business logic, and that's the tricky side.  Automatic scanning does not do a lot to help with that.

Setrag

Let me give a pragmatic example.  We have a predictive analytics engine that goes out and looks at data.  There are rules hidden — business insight hidden — in data.  So, we extract the models and then we bring the models, in a unified platform, to (for instance) customer service representatives who are executing the processes.  They then can make next best action offers to the customers on the call, so that they don't churn.  In the context of processes and cases, they can use the logic that you have extracted from data using predictive analytics.

James

Anybody else?

Tom

Like many of the vendors, we can do process discovery with our process tools, going through the data schema and discovering the flow of data from point to point.  But the challenge in all this is that different practices in COBOL end up in different forms ... some of your data is in physical rows in a table; some of it is hard-coded.  (COBOL has lots of hard-coded things in it.)  So, you have to look at what the organization's practice was in terms of how they implemented their COBOL before you can come up with a decent solution.

And even then some people would say, "Decide whether you want to dispose of it altogether."  Put metrics on the effort — there are many profiling tools that are available, but there's no magic bullet that's going to convert your code into all kinds of pretty rules and modern process definitions.  Most of the time it's making a pig out of sausage.

James

I think that's a really important point.  My observation would be that it's easier to use what you get out of legacy-system tools as a way to validate or tweak or correct the rule modeling that you are doing, rather than as a way to get your initial rule model.  This exercise tends to come up with very, very low-level technical artifacts, not all of which you want to implement in your business rules management system.

However, sometimes that's the only source you have, and then it's okay to use it to find out what you did.

Rik

We think it's a good way to get started but you still need a lot of human involvement.  There's just too much there that's unstructured, without context.  You have to let the humans provide that context.

James

Okay.  So we have a question ... there.


Why Another Repository
[from the audience]   So, I already have to manage the versioning of the database, which is done separately from the versioning of the code, which is done separately from (now) the rules.  Why would I want another solution in this mix?  Why should I add a rule repository to all the other places where I have bits of my system?  Why can't I just use one of the existing bits of my system?

Tom

Because in certain instances — particularly in credit risk management and market abuse — it will be impossible for you to have the agility to implement the solutions that we have already provided. 

I will grant you that with some of the slower-moving things you describe, yes, we don't always get our repository in there.  But if your need is to have a rapid, mature, governed deployment of very complex rules into a high-performance environment, you're going to have to look at very focused solutions, such as what we're describing.

Audience

So, what you're saying is that a generalized repository just isn't fast enough to get the code into and out of?

James

It's more that the pace of change in the rules is far greater than is assumed in most source code control systems or database systems.  We've all worked with projects where you're literally changing hundreds of rules every day, right?  If you don't have a special way to version those then you're not going to make it happen.

Does anybody else on the panel want to add something to that?

Jean

Rik used the word 'metadata'.  It's not just one more repository.  The business rules are first-class citizens.  They are an enterprise asset.  So, if you don't have this specialized repository then you cannot apply the procedures that are specific to rules and that add value.

Rik

We do have clients who choose to use our repository only in the test environment and then, once the rules are set there, they move that set of rules, as a file, through the migration process, just like they move everything else through the process.  That way they can keep database changes, UI changes, and logic changes aligned.

Mike

From a repository standpoint there's a lot more capability than just version control that we're after.  Version control is a by-product of having this repository with all the metadata, with all the linking, with all the capabilities to give you confidence that the decisions you're about to defer to a computer are going to do what you ask them to do.  That's really what the repository is doing for us.

Setrag

There are some capabilities that you highlighted that are important — how versions are organized, how you're branching, how you're overriding, how you're assessing impact — and these need to be core.  So, yes, these will be in your rule repository, which (by the way) is not just a rule repository.  It also contains the processes, the UI, the integration.

James

We have a question back there that I want to get to.


Losing Focus
[from the audience]   The observation, from a number of the vendors, was that it's possible for a business rules system vendor to go too far up the value chain and cease to add value.  In this question, you're challenging that, and you would like them to explain their position on it.

James

So, those of you who said that, why did you say it?  Jean, Setrag, and Rik — you were the three.  So, Rik, we'll start with you.

Rik

For me it's a matter of focus.  Having a good SDK that we can link to a requirements tool that is higher up in the value chain is really the way we look at solving that problem.  If someone wants to make the bridge between their system and our system using our SDK (and we've done that with a couple of different tools), we're happy to do it.  It's just that a vendor of our size has to stay focused on delivering business rules management technology and stay in that space.  Yes, we will be adding aspects of that over time.  But we can't do it all, all at one time.

James

Does anyone have a different perspective?

Tom

Well, there's no consistent definition or practice of knowledge management.  It varies from organization to organization.  On the other hand, there is a more consistent definition of these detailed programming applications that we're writing. 

There was an attempt at that in the Business Motivation Model, from the OMG.  It's pretty good, but it still doesn't go quite far enough to define what the knowledge is and how it should be applied.

Jean

We use the term "not too far" — that's from a timing standpoint.  So, if you go six months without delivering anything to the business then you are missing something.  You are missing a value proposition of the business rules approach.  It doesn't mean that we don't have teams on divergent paths ... one team doing enterprise modeling, another doing business modeling, another team implementing.  But you have to work together.  You can't go for six months doing just modeling.

Setrag

I want to go back to what Tom said and to the comment you gave.  The perspective you have is interesting, and Tom is saying that there are different definitions of knowledge.  In fact, let's use this term 'knowledge worker' ... the cognitive worker who is coming up with the policies and procedures.

If you're saying that we need to capture their knowledge directly in the tool, you're absolutely right; that's what this kind of tool — automation of the business rules and the business process — does.  Exactly! 

The problem is that knowledge sometimes goes into esoteric documents (etc.).  And that's not what we're talking about.  But if it's about capturing the policies and procedures and the requirements?  Absolutely, YES.


Looking Ahead Twelve Months
[James Taylor]   We're almost out of time.  So, for a final question, I'm going to ask each of my five panelists — Setrag, Tom, Rik, Mike, and Jean — to say, "What is the one thing you see coming down the pike in the next twelve months that really excites you about your product?" (something that you're willing to tell the audience).  And quickly, please; we really are almost out of time.

Setrag

It's this unified agility platform that combines decisioning, including predictive and (one term I didn't use here yet) 'adaptive' rules ... rules that are self-learning and self-reorganizing in terms of next best actions, all in the context of dynamic cases.  We see that coming and being extremely powerful in terms of aggregating knowledge and assisting knowledge workers, as well as handling more clerical, transactional work.

Tom

The Internet of Things is very big for Bosch and our software product suite.  We're going to get involved in a lot of non-traditional software things, such as the direct-current micro grid, solar energy, alternative sources of energy, and the connected electric vehicle.  We're using the techniques we've developed over the last ten years for rules and process modeling to achieve that.

Rik

Static analysis of your rule base, for better recommendations on representing your rules in the right artifact.

Mike

We're seeing a unified execution environment for your rules, for your complex event processing scenarios, and also for your business processes — getting that common language across the various ways of implementing knowledge management.  A second part of this is to get decision authoring and a commoditization of decision services out there, in the field, to make it possible to deliver rules a lot more readily than they are today.

Jean

I'll use another keyword:  'the enterprise play'.  We need to go beyond that one first project.  Everybody now has trust in the fact that the platform can deliver.  We have the convergence between BPM and decision management and there's some confidence there today.  Looking forward, on our side, decision management means business events and business rules and business analytics.  So, that's where I see things moving ... moving up the chain.


Wrap-up
[James Taylor]   All right.  I want to thank my panelists here, all of whom (I'm sure) will be in the Expo later.  So, if you have more questions you can ask them there.  That's it.  Thank you all very much.  <applause>

# # #

Standard citation for this article:


Business Rules & Decisions Forum, "Business Rules Forum 2011: Business Rules Vendor Panel," Business Rules Journal, Vol. 13, No. 2 (Feb. 2012)
URL: http://www.brcommunity.com/a2012/b635.html

About our Contributor:


Business Rules & Decisions Forum

Business Rules & Decisions Forum offers a unique opportunity to hear from real-world practitioners about what they have accomplished and how they did it. All the foremost thought-leaders in the field will be there. Find out how your organization can come to grips with rapid change, massive customization, and complex business logic in a truly scalable, traceable, manageable manner. Learn more at http://www.buildingbusinesscapability.com/brdf/
