Experiences in Re-Inventing a Business Process with Business Rules (Part 4): Test & Retest
Last time I discussed how we documented and coded our system, an online credit system for approving b2b transactions. The next step was to test and retest.
Test & Retest
Testing (and retesting) our system entailed these tasks:
- Embed audit trails and reason codes everywhere and anywhere — to track inputs, intermediate results, and outputs.
- Code — Test — Recode — Test (repeat until sufficiently satisfied).
- Utilize any tools (such as Excel, Access, and SPSS) to perform bulk testing.
At times, testing can devolve into a blame game. To avoid potential conflicts, it is prudent for the developer to inspect incoming data and verify that it adheres to specifications before the rules ever run. For our project, an internal data hub inspected all inputs, looking for anomalies that would trigger an invalid-data code. Invalid data is a serious problem that must be resolved quickly, particularly when it traces back to an outside data source.
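As a rough illustration of the kind of checks such a data hub performs, here is a minimal Python sketch; the field names, types, and invalid-data codes are hypothetical, not taken from the actual project:

```python
# Expected fields and their types -- hypothetical specification for illustration.
REQUIRED_FIELDS = {"customer_id": str, "requested_credit": float}

def inspect_input(record):
    """Return a list of invalid-data codes; an empty list means the record passed."""
    codes = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in record:
            codes.append("MISSING_" + field.upper())
        elif not isinstance(record[field], expected_type):
            codes.append("BAD_TYPE_" + field.upper())
    # A domain sanity check: a credit request cannot be negative.
    credit = record.get("requested_credit")
    if isinstance(credit, float) and credit < 0:
        codes.append("NEGATIVE_CREDIT")
    return codes
```

Each anomaly yields a code rather than an exception, so every input, good or bad, leaves a trail that can be reviewed later.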
The worst thing that can happen to any project is an unforgivable error that leads to losses. The best defense is reason codes: audit trails embedded into (the majority of) rules. A reason code indicates whether a specific rule has fired and carries a message describing the root cause. Without reason codes, deciphering exactly why a request was declined or approved would take a long time. Most Business Rules Engine (BRE) applications can also insert breakpoints and display intermediate results as the BRE steps through each rule, up to the breakpoint.
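To make the idea concrete, here is a minimal Python sketch of rules that append a reason code whenever they fire. The rule names, codes, and thresholds are invented for illustration; the actual project expressed its rules in a commercial BRE, not hand-written code:

```python
def rule_over_limit(request, result):
    # Hypothetical rule: decline when exposure exceeds the credit limit.
    if request["exposure"] > request["credit_limit"]:
        result["decision"] = "DECLINE"
        result["reasons"].append(("R-104", "exposure exceeds credit limit"))

def rule_new_customer(request, result):
    # Hypothetical rule: note customers with a short history for review.
    if request["months_on_file"] < 6:
        result["reasons"].append(("R-221", "customer on file less than 6 months"))

def evaluate(request):
    # Run each rule in order, accumulating reason codes as an audit trail.
    result = {"decision": "APPROVE", "reasons": []}
    for rule in (rule_over_limit, rule_new_customer):
        rule(request, result)
    return result
```

Given a declined request, the `reasons` list answers "which rules fired, and why" without any re-tracing of the decision logic.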
A static database is critical for developing test cases and tracking results on a large, complex project. It can be manipulated to trigger rules by severity and to ensure that the hard cases are always exercised. Just as important, a static database allows validated test cases to be reused as a baseline for comparison against any new repository version.
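The baseline comparison amounts to a regression check: rerun the static cases against the new repository version and list every case whose outcome changed. A simple sketch, assuming outcomes are stored per test-case id:

```python
def regression_check(baseline, new_results):
    """Return the ids of test cases whose outcome differs from the baseline.

    baseline and new_results map a test-case id to its outcome
    (e.g. "APPROVE" or "DECLINE"); a case missing from new_results
    also counts as a difference.
    """
    return sorted(case_id for case_id, outcome in baseline.items()
                  if new_results.get(case_id) != outcome)
```

An empty result means the new rule repository reproduces every validated outcome; anything else is a case to investigate before promoting the new version.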
Testing tools are an open field; the team used whatever was available, including SQL, Excel, and SPSS. Excel was by far the most comfortable for the business side, thanks to its ability to sort categories and to sum, average, and count results. SPSS, a widely used statistical application, proved the most reliable and flexible tool for running frequency counts and low-level scripts to segment data and slice-and-dice it any way imaginable. For the large test samples of more complex projects, SPSS is the tool to use; smaller projects can get by with Excel.
We analyzed results from the static database and looked at the distribution of reason codes to isolate any cases that stood out as potential bugs. Specific data was exported to Excel to create simple charts, such as a line graph that pinpoints extreme values for follow-up investigation. By setting maximum and minimum thresholds, you can graph actual results against those limits and see at a glance whether any case breached them.
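The same two analyses we ran in SPSS and Excel, a frequency count of reason codes and a threshold scan, can be sketched in a few lines of Python (the data shapes here are assumptions for illustration):

```python
from collections import Counter

def reason_code_distribution(results):
    """Frequency count of reason codes across a batch of test results."""
    return Counter(code for result in results for code in result["reasons"])

def threshold_breaches(values, low, high):
    """Return (index, value) pairs that fall outside the [low, high] band."""
    return [(i, v) for i, v in enumerate(values) if v < low or v > high]
```

A reason code that suddenly dominates the distribution, or a value outside the band, is exactly the kind of outlier we would chart and chase down.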
The last step — the step after testing and retesting — was to monitor and adjust. This will be covered next time, in the concluding instalment.
Dencie Mascarenas, "Experiences in Re-Inventing a Business Process with Business Rules (Part 3): Document & Code," Business Rules Journal, Vol. 12, No. 2 (Feb. 2011), URL: http://www.BRCommunity.com/a2011/b581.html
# # #