The Zachman Framework and Observations on Methodologies

John A. Zachman
Chief Executive Officer, Zachman International

The Zachman Framework was first documented in 1982, published in an internal IBM document in 1984 and initially published in the IBM Systems Journal in 1987.  All three of these articles described the first three Columns of the Framework (What, How, and Where) and made reference to the "other three Columns" (Who, When, and Why) in footnotes or in an Appendix because the industry state of the art at that time did not even acknowledge the existence of any formalisms for the "other three Columns."

The Systems Journal article of 1993 was the first widely-published description of the full six Column Framework (What, How, Where, Who, When, and Why) even though the state of the art still was limited to the first three Columns.  To this day, the information industry state of the art in general is still very limited with regard to the "other three Columns."  In fact, formalisms for Column 3, the Where Column, are not even widely employed at present.  My personal opinion is that the information industry state of the art in Rows 1 (Scope) and 2 (Models of the Business) of all Columns as well as all Rows of Columns 4, 5, and 6, (Who, When, and Why) --if there is any state of the art at all-- is still very limited.[1]

From the outset, the Framework for Enterprise Architecture --the "Zachman Framework"-- has been comprised of six Columns and five Rows, six Rows if you include the Functioning Enterprise as a Row.  It is in Row 6 (the Functioning Enterprise) where instance examples are classified.  Each Column is descriptive of a single variable elicited by the six primitive interrogatives, What, How, Where, Who, When, and Why.  Therefore, each Cell of each Column, with the exception of Row 1 (Scope), is made up of two descriptive components (or two "meta entities"):  the columnar variable (Thing, Process, Location, People, Time, and Motivation), and its relationship with itself, which gives internal structure to each Cell model.  That is, the entities of the Cell and their intra-entity relationships must be explicitly expressed on a peer-to-peer basis for a complete Enterprise Architecture Cell description.

The Row 1 (Scope) Cells have a single, aggregate meta entity and therefore the Cell models have no structure as they simply bound the Enterprise relative to the columnar variable.  Technically, the Row 1 Cells are not models.  They are simply lists, which is how I have portrayed them in the framework graphic from the very first draft.  Row 1 (Scope) Cells could contain things internal to the Enterprise or external to (that is, of interest to, or importance to, but not within the control of) the Enterprise.

The Row 5 (Out-of-Context) Cells, although comprised of both Cell meta entities (the columnar entity and the relationship entity), are not structured.  They are lists, also as I portrayed them from the first draft of the Framework graphic.  Although both entities are present in the Row 5 descriptions, they are expressed Out-of-Context (not structured) as lists.  Because they are made up of both meta entities, maybe a better word for their expression would be "listings"[2] to differentiate them from the lists of Row 1 (Scope).

In summary, Rows 2 (Models of the Business), 3 (Models of the Systems), and 4 (Technology Models) Cells are structured models with both the columnar entity and the intra-entity relationships explicit in the descriptive models.  Row 1 (Scope) Cells contain a single entity and are lists.  Row 5 (Out-of-Context) Cells contain both the columnar entity and the relationship entity but are listings.

The Rows of the Framework constrain the models based on the audience for whom the descriptions are created:  the Models of the Business for the Owners of the Enterprise (Row 2), the Models of the Systems for the Designers of the Enterprise (Row 3), and the Models of the general Technologies for the Builders of the Enterprise (Row 4).  The Row 1 (Scope) Lists bound the analytical target relative to the interrogative.  The Row 5 (Out-of-Context) Listings are tool specific, employed in the transformation to implementation.

The classification by the six primitive interrogatives (What, How, Where, Who, When, and Why) has been around for thousands of years.  The classification by the audiences (Owner, Designer, Builder; or Requirements, Engineering, Manufacturing Engineering; or Conceptual, Logical, Physical ... by whatever names are elected by various practicing disciplines) bordered by a Scoping for bounding and an Out-of-Context specification for transition to implementation have been around for hundreds of years, if not thousands of years, as well.

The classifications for both axes of the Framework have been around for hundreds or thousands of years, and I am confident they will be around for hundreds or thousands more years.  They are not going to change, by whatever name they are called.

All of the above constitutes the Zachman "Normative" Framework, and it has never changed nor will it ever change.  Maybe some of the names will change based on common usage or maybe we will learn better words to more accurately express the Framework schema or its components, but the Zachman Normative Framework has not and will not change.

When I was interpreting the Zachman Normative Framework relative to an Enterprise, which was a fairly radical idea 25 years ago, I was careful to name each intersection between an interrogative and an audience, that is, each Framework Cell, and show a sample graphical icon with an "e.g." ("for example").  I did not want the Framework to get confused with methodologies or graphic notations.  There may be many different methodologies for populating Cells and many different graphical ways to express the models of the two meta entities of any one Cell.  For example, for the Column 1, Row 3 Logical Data Model there were Chen, Bachman, Finkelstein (Martin), Barker (Oracle), Appleton and Brown (IDEF1X), and others.  For the Column 1, Row 4 Physical Data Model there were Hierarchical, Network, Relational, Star, Snowflake, Multi-Dimensional, and others.

Since, in the 1980's, there was no industry experience to suggest names and icons for the Enterprise Framework Cells of Columns 4, 5, and 6, (People, Time, and Motivation), I made up names and icons as best I could.  In fact, since there was no industry experience, I had to make up names of the Enterprise Cell meta entities as well as the Cells themselves, and based on the sheer logic of the Framework, the names I chose for the meta entities in almost every case have stood the test of time and common usage and have remained virtually unchanged.  The names of the meta entities that comprise each Enterprise Framework Cell are found at the bottom of each Cell in the Framework graphic.  Probably, as we learn more about Columns 4, 5, and 6 (Who, When, and Why) there will be some changes to the example model names as well as changes to the names of the meta entities of the Cells.

I have changed several of the Cell "e.g's" based on changes in the industry terminology.  For example, we did not have the words "Business Process" in the 1980's.  That only became popularized after 1992 when Mike Hammer and Jim Champy published the book "Reengineering the Corporation."[3]  Therefore, around 1995, I changed the name of Column 2, Row 2 from "e.g. Functional Flow Diagram" to "e.g. Business Process Model."  The model did not change.  Only the name changed to reflect common usage.

Similarly, I have changed several (not many) sample icons to be more consistent with the current industry thinking.  For example, I changed the Column 5, Row 2 'e.g. Master Schedule icon' from a P.E.R.T. chart-looking icon to a circular icon as used by Peter Senge in "The Fifth Discipline," which was widely read by general managers.

If there is one factor that makes the Framework a valuable analytical tool it is the fact that the classification logic of the Framework has not changed.  Six Columns of interrogatives.  Five Rows of perspectives, with the Functioning Enterprise as Row 6.  One meta entity in each Cell of Row 1, and two meta entities in each Cell of Rows 2 through 5, derived from the columnar interrogative and its relationship with itself.

Some additional names of some Enterprise Cell entities may change over time as industry experience matures, common industry terminology changes, or we learn how better to express and make clear the underlying meaning of the meta entities.  For example, I have entertained the idea of changing the word "Process" to "Transformation" because the word "Transformation" more accurately conveys the sense of the Column 2 Models.  You take something in, do something to it ("Transform" it) and send something different out the other side.  Although "Process" implies transformation, it is easy for the uninitiated to miss that implication and think incorrectly that a process decomposition is a Column 2, Row 2 Business Process model rather than an Input-Process-Output, Business Transformation Model.  I am reluctant to make such a change of name in this particular case because there are some Processes that, technically, are not transFORMations but transPORTations.  That is, there are Processes that move things from one location to another that should not be excluded from the Column 2 models.  Therefore, for the time being, it is likely advisable to simply continue using the word 'Process.'

I am very cautious about changing anything relative to the Framework.  I have tried to rectify inconsistencies and conform to common linguistic usage and it is only very deliberately, with much consideration, and very gradually that I change anything.  I have tried to make the Framework graphic communicate as clearly as I can and, at the same time, not give any impression that the Framework itself has changed or that I would change anything about it indiscriminately.  Having said that, I presently am confronted with an issue in Column 5 in which people seem to get distracted from the essence of the models by the meta entities "Event" and "Cycle."  'Event' in Column 5 represents a point in time but gets confused with the happening that occurs at the point in time rather than the point in time itself.  Likewise 'Cycle' connotes a repetitive pattern of Events rather than the lengths of time or intervals between the Events as intended in the Columnar models.  This confusion is likely to cause me to find more accurately expressive words for the Column 5 Time meta entities.

In summary, the Enterprise Framework, its underlying meta entities, and the Zachman Normative Framework that classifies those meta entities have not changed.  The Framework has been complete since its inception.  There is nothing more to define relative to the Framework itself.

In addition to the basic entities within each Cell, there clearly is a potential relationship between all of the Cell meta entities across any one Row and between the meta entities of any one Cell and the Cell above it and the Cell below it.  That is, the Framework Cells are horizontally and vertically integrated.  Each meta entity in each Cell potentially could be related to any other meta entity in its Row and to either meta entity in the Cell above and to either meta entity in the Cell below.

The maximum number of potential meta entity, many-to-many relationships in the Framework can easily be calculated.  The number of connections between "n" points is (n^2 - n)/2.  For example, for the 2 meta entities in each of 6 Cells, or 12 meta entities in a Row, that is (144 - 12)/2 = 66; subtracting the 6 relationships internal to the Cells leaves a maximum of 60 many-to-many relationships per Row.  For Rows 2, 3, 4, and 5 that is 4 times 60, or 240, plus 15 many-to-many relationships in Row 1 ((36 - 6)/2 = 15), for a total of 255 possible horizontal, many-to-many relationships.  If the horizontal relationships in Row 5 are relevant, they may well only be made explicit or retained in the product technologies of Row 5.

The maximum potential number of vertical many-to-many relationships (intersection entities) is 108.  There are four possible relationships between the two entities of one Cell and the two entities of the Cell below it; therefore, the Cells of Rows 2, 3, 4, and 5, each related to the Cell below it, contribute 16 possible relationships per Column.  Because there is only one entity in the Cells of Row 1, there are only 2 possible relationships between the Row 1 entity and the two Row 2 entities.  That is 2 between Rows 1 and 2 plus 16 among the remaining Rows, for a total of 18 possible vertical relationships per Column, or 108 for all six Columns.

The maximum total intersection meta entities based on horizontal and vertical many-to-many relationships between all of the meta entities in the Framework would be 255 horizontal plus 108 vertical or 363 total.  Adding in the 54 basic meta entities in all of the Cells would bring the maximum possible, total number of meta entities including intersection entities in the meta model of all the Cells of the Framework to 417.

If you consider direction in the relationships, there would be 4 relationships for each of the 363 intersection entities or a total of 1,452 possible relationships.  The most robust possible metamodel of the Framework would contain a maximum of 417 meta entities with 1,452 relationships.  There is a minimum set of 54 primitive entities (12 for each of the Rows 2 - 5 and 6 for Row 1) in the Framework and a minimum set of horizontal relationships between meta entities in any one Row that could be derived from the logic of the Framework itself.  Any other intersection meta entities up to a maximum of 363 would be methodology dependent.  If a methodology elaborates the Framework metamodel in any way, clearly the number could increase.  Whereas methodological extensions to the Framework metamodel could clearly add to the potential intersections, I am also sure that some of the basic 363 Framework intersection entities will go to null in practice.
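
The arithmetic above can be checked with a short script.  This is a sketch of my own bookkeeping, not part of the Framework itself; the names are merely illustrative.

```python
# Check the Framework relationship counts described above.

def pairs(n):
    """Number of many-to-many connections between n points: (n^2 - n) / 2."""
    return (n * n - n) // 2

# Horizontal: Rows 2-5 have 2 meta entities in each of 6 Cells (12 per Row);
# subtract the 6 intra-Cell relationships already counted inside the Cells.
per_row = pairs(12) - 6                 # 66 - 6 = 60
horizontal = 4 * per_row + pairs(6)     # Rows 2-5 plus Row 1: 240 + 15 = 255

# Vertical: 4 relationships between each pair of vertically adjacent
# two-entity Cells, 18 per Column as described above, for 6 Columns.
per_column = 2 + 16                     # Rows 1-2, plus the remaining Rows
vertical = 6 * per_column               # 108

intersections = horizontal + vertical   # 363 intersection entities
basic = 4 * 12 + 6                      # 54 primitive meta entities
meta_entities = intersections + basic   # 417 total meta entities
directed = 4 * intersections            # 1,452 directed relationships

print(horizontal, vertical, intersections, meta_entities, directed)
# 255 108 363 417 1452
```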

I have given you all of this math to give you a sense of the power of the classification system to categorize and manage enormous complexity in an Enterprise.  In 2004, we are only scratching the surface in our understanding of how to exploit this capability and accommodate the ever-increasing complexity.[4]

The Framework determines the total set of meta entities and potential relationships.  It is the methodology that activates its relevant set of meta entities and their relationships and determines the notation for their expression based on its syntactic conventions and on which Cells or which slivers of Cells the methodology chooses to make explicit.

For example, a relatively simple methodology, startlingly Framework-compliant for its time, IBM's Business Systems Planning methodology, to which I made substantial contributions in the early '70's, initially activated only four primitive meta entities:  Processes, Organizations, Systems, and Data Classes.  These basically relate to the Framework Row 1 meta entities Business Processes, Business Organizations, Business Entities, and the Row 6, Column 2 current Systems.

BSP defined non-directional, many-to-many relationships (matrices) between the four primitive meta entities:  Processes and Organizations, Organizations and Current Systems (Current Systems is a Row 6, Column 2 phenomenon, actually a part of the Functioning Enterprise), Current Systems and Data Classes, and Data Classes and Processes -- the "Iron Cross."  (When the axes of the four matrices were made common, vertically --Processes and Current Systems-- and horizontally --Data Classes and Organization-- it formed a cross.)  When BSP became a stand-alone methodology around 1972, many-to-many relationships were added between Business Strategies and Processes, between Business Strategies and Organizations, and between other meta entities as appropriate to the specific Enterprise under analysis.  When IBM began marketing the first distributed systems around 1980, 'Location' was added as a variable in the form of "Logical Application Groups."  All of the composite, many-to-many intersections were created from primitives (although we wouldn't have used that terminology in those days) and were expressed as matrices which were held manually (on paper, not as intersection entities in a repository ... there were no repositories in those days).

By 1985 or so, around 3,000 BSP studies had been recorded but their incidence was dropping off for several reasons.  First, the studies took a long time ... and as our understanding of the complexity of the issues improved, we were inclined to add more matrix analyses (intersection entities) to the methodology up to the total possible of 15 in Row 1 (for 6 entities, (36 - 6)/2 = 15), actually 21 matrices if you count the relationships with the current data bases (Column 1), current systems (Column 2), current hardware/software (Column 3), etc., etc. of Row 6.  The world (the market) was unwilling to spend the time required to do the analysis and to produce a well-defined Business Systems Plan.[5]  Furthermore, many of the BSP Consultants and Implementers never understood the underlying theory of the analysis and would simply follow the "cook book" and never discover what they were supposed to be looking for.  The principal theoretical deficiency of BSP was related to the Data Classes.

In the '70's there were no words to talk about the classification of data.  The words "database," "data model," "entity," "relationship," "attribute," "normalization," etc. did not exist, and the general practice was to classify data in terms of its usage, that is, as it was used, inputs/outputs of processes, "user views," not as normalized entities.  In my personal case, it was Wade Jones, an IBM Systems Engineer who was working with me on a BSP study at Lockheed Aircraft Corporation around 1976, who explained to me that if the Data Classes were not unique (non-redundant, that is, "normalized") the matrix analysis was meaningless.  If the data was classified by its usage, the Process versus Data Class matrix was actually a Process versus Process (Input/Output) matrix, which was meaningless:  it would simply shift the existing systems boundaries to some extent, and the study analysis would then merely re-order systems initiatives based on more management-oriented business priorities, far short of changing the basic concept in the Enterprise from one of systems implementations to one of Enterprise Architecture.[6]

Furthermore, in the early days, there was no understanding of Models of the Business (Row 2) and the importance of creating and maintaining the primitive, structured models:  the Semantic structure depicting the Enterprise bill-of-materials; the Business Process structure depicting the transformation of inputs into outputs; the Distribution structure depicting the Location capacities and linkages; the Work Allocation structure depicting organizational responsibilities and work product dependencies; the Event structure depicting the Enterprise dynamics or 'Systems Thinking'; and the Objectives structure depicting a coherent specification of the desired states of the other primitives.  In the early days, we did not understand the existence of the Row 2 Models of the Business, the Row 3 Models of the Systems, the Row 4 Technology Models -- in other words, we didn't understand the concept of Architecture, which explains the problems of making the transformation from the Strategy (Row 1 lists as produced in the BSP study) to the implemented Enterprise (Row 6).

Some Enterprises would spend officially four months (many times, in actuality, six months to a year) doing a BSP Study and end up with simply shifting the systems boundaries, re-ordering the systems initiatives based on management priorities and then more of the same, classic application development.  BSP, if not understood conceptually, took a long time to develop an Information Systems Strategy for building more stovepipes.  Nothing had changed in many cases except that management was involved and the systems priorities were more credible.

Actually, there was nothing wrong with BSP, particularly if one understood what they were looking for, got the data classified non-redundantly (by Thing or "Entity"), understood the limitations to the state of the art, got the data primitives separated from the process primitives, and defined the build sequence of the sub-systems to build the data creation sub-systems before building the data usage sub-systems.  The problem with BSP, if there was a problem beyond being too early for the world to appreciate or understand it, was, it became a classic "silver bullet."  The expectations were set completely incorrectly.  People tended to think that, after they finished doing a BSP study, they were done, rather than understanding that they were just beginning.  The result was a classic silver bullet conclusion, "BSP? Well, we tried that. That didn't work."

BSP, which is not dissimilar from today's state of the art Enterprise Architecture Planning methodologies[7], activated 4 (and later 5) out of a possible 6 meta entities of Row 1 and created 5 out of a possible 15 Row 1 intersection composites expressed as matrices ... actually, it created 6 out of a possible 21 matrices if you count the Row 6 Current Systems entity.

In total, BSP activated 4 (and later, 5) out of a total of 54 primitive meta entities and 6 out of a total of 363 composite intersection entities, and BSP chose to express those composite intersections manually as matrices.  Therefore, only 6 relationships were defined out of a total of 1,452 possible relationships, since matrices are direction-agnostic unless you attribute the row/column intersections with something like "create," "read," "update," and "delete," which imply some direction.  In fact, those BSP Consultants who understood the significance of normalizing the Data Classes by Thing (entity) actually identified the Create Processes in the Process versus Data Class matrix because that is the way to define the information systems boundaries and implementation sequencing based on the dependency of the systems on the create points of the data.  You don't want to build systems that use the data before you build the systems that create the data.  If you build systems that use the data before you build systems that create the data, that is, if you build systems in the incorrect sequence, you will de facto denormalize the data, that is, you will, by definition, build "stovepipes."  Classifying data by Process rather than by Thing and prioritizing by some value parameter other than architectural dependency risks building systems that use data before building systems that create the data.  This is the source of redundancy and discontinuity ("stovepipes") and possibly a major portion of Enterprise entropy,[8] not to mention management frustrations.
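
The create-before-use sequencing described above can be sketched as a small dependency sort:  given a Process versus Data Class matrix attributed with Create/Use, schedule any process that creates a data class before the processes that merely use it.  The matrix below is entirely invented for illustration -- BSP held these matrices on paper, not in code -- and the sketch assumes each data class has a single creating process and no cyclic dependencies.

```python
# Hypothetical Process-vs-Data Class matrix, attributed with "C" (create)
# and "U" (use).  The processes and data classes are invented examples.
matrix = {
    ("Take Order",       "Customer"): "U",
    ("Take Order",       "Order"):    "C",
    ("Acquire Customer", "Customer"): "C",
    ("Ship Product",     "Order"):    "U",
    ("Ship Product",     "Shipment"): "C",
    ("Bill Customer",    "Order"):    "U",
    ("Bill Customer",    "Shipment"): "U",
}

def build_sequence(matrix):
    """Order processes so creators of a data class come before its users."""
    creators = {dc: p for (p, dc), a in matrix.items() if a == "C"}
    processes = {p for p, _ in matrix}
    # A process depends on the creator of every data class it merely uses.
    deps = {p: set() for p in processes}
    for (p, dc), a in matrix.items():
        if a == "U" and dc in creators and creators[dc] != p:
            deps[p].add(creators[dc])
    # Simple topological sort (assumes no cyclic create/use dependencies).
    ordered, done = [], set()
    while len(done) < len(processes):
        ready = sorted(p for p in processes - done if deps[p] <= done)
        if not ready:
            raise ValueError("cyclic create/use dependency")
        ordered.extend(ready)
        done.update(ready)
    return ordered

print(build_sequence(matrix))
# ['Acquire Customer', 'Take Order', 'Ship Product', 'Bill Customer']
```

The point of the sketch is the dependency direction:  "Acquire Customer" is scheduled first because it creates the Customer data class that "Take Order" uses; building in the reverse order is precisely the denormalizing, stovepipe-building sequence the paragraph above warns against.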

Experienced BSP consultants would also attribute other matrices as well; particularly prevalent were the Organization-related matrices (for example, defining the Deciding Organization, the Responsible Organization, the Supporting Organization, etc.) reflecting variations in the allocation of work responsibilities in the Organization versus Process matrix, the Organization versus Data Class matrix, the Organization versus Strategy matrix, etc.

In summary, it is the methodology that determines which meta entities and intersections to make explicit and how they are expressed (their notation) ... not the Framework.

Methodologies have been evolving, and I am sure will continue to evolve and mature forever.

Tom Bruce wrote a book on Column 1 methodologies.[9]

August Scheer wrote a book on Column 2 methodologies.[10]

Henry Mintzberg wrote a book on 12 methodologies for Column 6.[11]

Peter Senge wrote a book on Column 5 methodologies.[12]

Tom Demarco wrote a book on Column 2 methodologies.[13]

Bernie Boar wrote a book on Column 3 methodologies.[14]

Alec Sharp wrote a book on Column 4 methodologies.[15]

Peter Drucker has written an array of books that are fundamental for Row 1 methods.

Terry Halpin wrote a book on Column 1 methodologies.[16]

Steve McMenamin wrote a book on Column 2 methodologies.[17]

Ron Ross wrote a book on Column 6 methodologies.[18]

Etc., etc., etc.

This is just a sampling of a plethora of books on methodologies.  I am quite sure that few of these methodologies are "Framework-compliant" since, in most cases, they were developed long before the Framework became widely acknowledged as a context for methodology development and before there was any understanding of the trade-off between primitive models (architecture) and composite models (implementation).  It would only take some work to analyze any one of these or other methodologies to determine where Framework non-compliance exists and make suggestions for modifications to bring the methodology into theoretical consonance through Framework compliance.  The basic Framework compliance question is, are the implementation composite models being created from architectural primitive models and are the primitive models being retained as a basis for managing change?

If you overlay five well-known and respected, generally Framework-consistent methodologies on the Enterprise Framework graphic, graying out the Cells that each methodology populates including:

Doug Erickson's Enterprise Engineering implementation methodology,

Clive Finkelstein's Enterprise Engineering Strategic Planning and Portal methodology,

Sam Holcman's Enterprise Architecture Planning methodology,

Stan Locke's Proof of Concept implementation methodology,

Ron Ross' Business Rules implementation methodology,

you would find startlingly little overlap in the grayed-out Cells, and the aggregate set of methodologies would cover only about 18 out of the 30 possible Cells of the Framework.  It would take considerably more analysis to be definitive and accurate about exactly which meta entities, intersections, and graphic notations, and what Cell scope and level of detail, are being made explicit by each of the above methodologies ... but once again, it is the methodology that determines which meta entities, intersections, and notations to employ, not the Framework.

I do not think it is realistic to expect there will ever be one single, simple, effortless, quick, painless, and inexpensive methodology that will, without any work, thought, or skill, activate every one of the 417 meta entities, 1,452 relationships, and specify and integrate 30 model notations, and effortlessly result in an Enterprise that is integrated, interoperable, aligned, reusable, flexible, reduced time-to-market, secure, seamless, and user-friendly.  The Enterprise is simply too complex and there are too many areas requiring Enterprise engineering specialization.  Such a "be-all and end-all" methodology would be the MOTHER OF ALL METHODOLOGIES.  I am confident that the only way an integrated, interoperable, aligned (etc., etc.) Enterprise will ever be achieved is by creating and managing the architectural primitives as defined by the Framework with those Enterprise engineering design objectives in mind, quite independently from the implementation methodologies being employed.

Having said that, I am sure that the state of the art will dramatically progress in the near term.  In fact, I am aware of some very exciting work being done that will automatically transform some selected higher Row primitive models into Row 6 implementations based on the Zachman Normative Framework rules, along with some clever pattern recognition and some specific conditions.  I am also confident that that work will be expanded and extended in time.[19]

Furthermore, I am confident that an integrative, normative Framework will be forever imperative to form the context for Enterprise Architecture method and tool integration.  Even further, if the method and tool metamodels were Framework compliant, it would make the Enterprise Architect's challenge of method and tool integration infinitely simpler.

I believe that I have been extremely careful to say the Framework for Enterprise Architecture is a 'Framework' (a schema, a classification structure), NOT A METHODOLOGY.  The Framework is totally independent of methods and tools.  That is what makes it a useful, analytical, integrative context.

The Framework is a context ... and there is some substantial evidence to suggest that it may be the ONLY context of primitive descriptive representations ... for formulating and managing an Enterprise Engineering and Manufacturing strategy.  Which slivers, of which primitive Cells, in what sequence, are you going to make explicit and manage in some repository to reduce Enterprise complexity and establish a baseline for managing change?  And then, as constrained by the industry state of the art, what methodologies and tools are required, or can be adapted and integrated, to build those primitive models, store those primitive models, manage those primitive models, change those primitive models, and assemble Enterprise implementations (composites) from those primitive Cell models?

The Framework has been complete from the beginning ... much methodology and tool development work remains to be done and will continually be done forever.  The Framework is a context within which the methodology work can be rationally and coherently done to transit from a "you start writing the code" Industrial Age era to the complex, dynamic Enterprise era of the Information Age.  Personally, I wouldn't get too creative and start elaborating the Framework metamodel, even if you maintain it Framework-compliant.  I would keep it as simple as possible.  I would make every effort to use the Framework as is and keep all relevant methodologies Framework-compliant as I started the migration into the Enterprise Architecture, Information Age.

It is Framework compliance in terms of the primitive models coupled with the employment of classic engineering design principles that determines a methodology's potential to realize the Enterprise engineering design objectives of integration, interoperability, alignment, reusability, flexibility, reduced time-to-market, security, seamlessness, and user-friendliness, etc., etc., etc.  Composite models are implementation models.  Primitive models are the raw material for doing engineering, Enterprise engineering, engineering for integration, interoperability, alignment, flexibility, and so on.  Methodologies that produce composite models are designed to do implementations.  Methodologies that produce primitive models are designed to do architecture.  Composite models should be created from primitive models.  If composite models are being created and no primitive models exist, point-in-time solutions are being implemented, which, by definition, will be NOT integrated, NOT interoperable, NOT aligned, NOT flexible, NOT secure, and so on and so on ... "legacy."

Clearly, as I have said many times, the Framework is a schema, an application of classification theory, not a methodology.  In 2004, we don't have all the answers.  Much work remains to be done.  However, there is nothing that is keeping us from beginning to work with what we presently understand and from exploiting the logic of the Zachman Normative Framework to add to what we know and to continue advancing the industry state of the art for engineering and manufacturing Enterprises.

I am reserving the subject of classification versus implementation, frameworks versus methodologies, for another article.  This is a profoundly important subject that is deserving of an article of its own.

References

[1]  I do not intend to minimize the substantive work going on in any of the "other three Columns," notably in Column 6, the Motivation Column.  The Business Rules Group, with whom I have had an on-going relationship for 15 or more years, has made significant contributions to the formalisms of some of the Column 6 models.  Their published work can be found at www.businessrulesgroup.org.

[2]  "Listings" is what we have called them from the very earliest days of the advent of computers.

[3]  "Reengineering the Corporation" by Michael Hammer and James Champy.  Harper, 1993.

[4]  Some of the work that the Business Rules Group has been doing attempts to address some of these issues.  (www.businessrulesgroup.org)

[5]  Dewey Walker, the Director of Architecture for IBM, whose internally-employed analytical methodology of the late '60s evolved into BSP of the '70s and '80s, named the methodology Business Systems Planning, NOT Information Systems Planning, because it was addressing Business issues, NOT I/S issues.  The confusion continues to this day, as many people SAY Enterprise Architecture but actually THINK and DO information systems work.  It is easy to see in the context of the Framework: if the work is being done in Columns 1, 2, and 3, and particularly at Rows 4 and 5, it is clearly I/S (or I/T or DP) work, not Business (or Enterprise) work.

[6]  This accounts for my STRONG emphasis on the Column 1 models being descriptive of the "THINGS" (i.e., Entities) of the Enterprise, NOT the DATA.  Data is a Column 1, Row 3 Systems phenomenon, and Information ("user views") is a Column 2, Row 3 Systems phenomenon.

[7]  We had to change the name from BSP to ISP to EAP because of the silver bullet phenomenon.

[8]  The second law of thermodynamics.

[9]  "Designing Quality Databases with IDEF1X Information Models" by Thomas A. Bruce.  Dorset House, 1992.

[10]  "Business Process Engineering" by August-Wilhelm Scheer.  Springer-Verlag, 1989.

[11]  "Strategy Safari: A Guide Through the Wilds of Strategic Management" by Mintzberg, Ahlstrand, and Lampel.  Free Press, 1998.

[12]  "The Fifth Discipline: The Art and Practice of the Learning Organization" by Peter M. Senge.  Doubleday, 1990.

[13]  "Structured Analysis and System Specification" by Tom DeMarco.  Yourdon Press, 1978.

[14]  "Constructing Blueprints for Enterprise IT Architectures" by Bernard H. Boar.  John Wiley and Sons, 1998.

[15]  "Workflow Modeling" by Sharp and McDermott.  Artech House, 2001.

[16]  "Information Modeling and Relational Databases" by T.A. Halpin.  Morgan Kaufmann, 2001.

[17]  "Essential Systems Analysis" by Stephen M. McMenamin and John F. Palmer.  Yourdon, Inc., 1984.

[18]  "The Business Rules Book" by Ronald G. Ross.  Database Research Group, 1997.

[19]  See Dr. Gary Simons' work on the Ethnologue at SIL International.  (www.Ethnologue.com)

Copyright 2004. Zachman International

# # #

Standard citation for this article:


John A. Zachman, "The Zachman Framework and Observations on Methodologies," Business Rules Journal, Vol. 5, No. 11 (Nov. 2004)
URL: http://www.brcommunity.com/a2004/b206.html

About our Contributor:


John A. Zachman, Chief Executive Officer, Zachman International

John A. Zachman is the originator of the "Framework for Enterprise Architecture" (The Zachman Framework™) which has received broad acceptance around the world as an integrative framework, an ontology for descriptive representations for Enterprises.

Mr. Zachman is not only known for this work on Enterprise Architecture, but is also known for his early contributions to IBM's Information Strategy methodology (Business Systems Planning) as well as to their Executive team planning techniques (Intensive Planning). He served IBM for 26 years, retiring in 1990 to devote his life to the science of Enterprise Architecture.

Mr. Zachman is the Founder and Chairman of his own education and consulting business, Zachman International®. He is also Founder of the Zachman Institute™, a nonprofit organization devoted to leveraging Zachman International's vast network of professionals and resources to offer services to small businesses and nonprofit organizations as they prepare for and experience growth.

Mr. Zachman serves on the Executive Council for Information Management and Technology (ECIMT) of the United States Government Accountability Office (GAO) and on the Advisory Board of the Data Administration Management Association International (DAMAI) from whom he was awarded the 2002 Lifetime Achievement Award. He was awarded the 2009 Enterprise Architecture Professional Lifetime Achievement Award from the Center for Advancement of the Enterprise Architecture Profession as well as the 2004 Oakland University, Applied Technology in Business (ATIB), Award for IS Excellence and Innovation.

Mr. Zachman has been focusing on Enterprise Architecture since 1970 and has written extensively on the subject. He has facilitated innumerable executive team planning sessions. He travels nationally and internationally, teaching and consulting, and is a popular conference speaker, known for his motivating messages on Enterprise Architecture issues. He has spoken to many thousands of enterprise managers and information professionals on every continent.

In addition to his professional activities, Mr. Zachman serves on the Elder Council of the Church on the Way (First Foursquare Church of Van Nuys, California), the Board of Directors of Living Way Ministries, a radio and television ministry of the Church on the Way, the President's Cabinet of The King's University, the Board of Directors of the Los Angeles Citywide Children's Christian Choir, the Board of Directors of Heavenworks, an international ministry to the French speaking world and on the Board of Directors of Native Hope International, a Los Angeles based ministry to the Native American people.

Prior to joining IBM, Mr. Zachman served as a line officer in the United States Navy and is a retired Commander in the U. S. Naval Reserve. He chaired a panel on "Planning, Development and Maintenance Tools and Methods Integration" for the U. S. National Institute of Standards and Technology. He holds a degree in Chemistry from Northwestern University, has taught at Tufts University, has served on the Board of Councilors for the School of Library and Information Management at the University of Southern California, as a Special Advisor to the School of Library and Information Management at Emporia State University, on the Advisory Council to the School of Library and Information Management at Dominican University and on the Advisory Board for the Data Resource Management Program at the University of Washington. He has been a Fellow for the College of Business Administration of the University of North Texas and currently is listed in Cambridge Who's Who.

