Professionalism, Certification and Fallacies Around “Best Practice”

Don’t equate Know-That with Know-How

Reading a PRINCE2 manual does not a project-manager make, nor reading the TOGAF manual an Architect. Know-How comes from doing – from experience and practice.

So I wrote in my “Enterprise Architecture’s Three Worlds” article last year. The point was to emphasise the difference between Know-How knowledge and Know-That and to stress the importance of Know-How – which underpins personal competence and organisational capabilities.

Recently, Ian Bailey wrote a controversial blog entry, “There’s Madness in Your Method”, on the Integrated EA website, citing Martin Fowler’s observation of a lack of correlation between personal competence (in software programming) and software certification (by which I think he means certification in software development methods).


Fowler does not provide any detail of his research – how he evaluated personal competence, how big his sample was, or how it was selected – and therefore how he was able to show that there is no correlation with certification. So there is little evidence on which to found a belief that certification in software development is “useless”, as he puts it – other than anecdote drawn from his personal experience [which, frankly, is not good enough – I’d want a much more academically rigorous study before coming to such a conclusion].

However, given the distinction between Tacit Know-How and Explicit Know-That – which I think has been rigorously established in research such as Nonaka and Takeuchi’s – it is a very plausible hypothesis and may be a correct conclusion – and it is a view with which I have a good deal of sympathy.

There is a good deal of evidence that most software development projects “go wrong” in some way – such as the Standish reports – which does suggest there is a general problem with competence in the software development trade.

Other forms of engineering have much better records – after they have developed recognised, proven methods. Fowler may also be right that such certification schemes are “prone to corruption” and that there is “the reality” of an “entry gate”, implemented by dysfunctional HR departments in their recruitment processes, in which “competent people often need a useless certification in order to get an interview”.

Ian Bailey’s blog, however, extrapolates and generalises this proper scepticism about the value of certification into a general attack on “methodology”. In this, I think, Bailey is simply dead wrong.

Let me be clear: it is the absence of methodology that is the hallmark of the amateur, the unprofessional (or non-professional) and the incompetent – in all fields.

The absence of methodology in any activity is a sure sign of people who quite literally ‘do not know what they are doing’ (or why they are doing what they are doing) – either as individuals or as a group. The absence of methodology is the active signifier of a dearth or deficit of Know-How in any type of enterprise or at any level in an organisation.

I’ll explain why I hold this view in a moment, but first I just want to observe a notable exception: there are places at the forefronts of science and technology, including management and social sciences, where literally nobody knows what they are doing – where everything is an experiment or doing something for the first time by anyone anywhere.

But these places are very rare – and in most roles in most enterprises, there is a body of established knowledge, including Know-How, upon which the professional can and should draw. Most of the time what people are doing in enterprises is not new and is not different in any fundamental sense – and there are people who have done it before, and written down how to do it – or at least what they did and what the results were.

Having emphasised the importance of prior knowledge, I’d like to pick up on some of Dr. Bailey’s comments: “… methodologies suck. Big time. They all look logical enough, and they’re hard to pick fault with because they’re based on best practice and the collected wisdom of experts in their field.”

Methodologies are hard to pick fault with because they represent the codified knowledge distilled from thousands or tens-of-thousands of man-years of experience and learning – and written down by “experts in their field”. [Incidentally, the MODAF, with whose development Bailey has been very much involved, as an expert in the field, is also a methodology. So presumably his assessment of methodologies applies to his own work!? Or is MODAF an exception to the general rule that “methodologies suck”?]

However, the very notion of “best practice” is not beyond criticism.

Newell et al. [Professors of Management, Organisational Behaviour and Information Systems at Warwick, London and Bentley (Massachusetts) Universities – and experts in their field], in a section of their book “Managing Knowledge Work and Innovation” [1] entitled “The Fallacy of Best Practice Knowledge”, identify a number of ‘limitations’ to the best practice view.

Of these limitations, “the myth that ‘best practice’ can be defined independently of the specific context”, is arguably the most damning.

James Harrington agrees, in a paper entitled “The Fallacy of Universal Best Practices”, on the basis of his research into corporate culture [2].

James concludes, “One thing we can say for sure is that there is no hypothetical universal best practice combination that is applicable to all organisations striving to improve. The differences in the personality of their key executives, their customers, competitors, and products require that different management practices be deployed to optimise the organisation’s overall performance. Unfortunately there is no one right answer for all organisations”.

The key point is that every enterprise – or context for improvement – has features that make it unique, so no highly prescriptive practice can be universally applicable in every enterprise context and always produce good results.

So questions like “What is the best EA Framework?”, in the abstract, are intrinsically meaningless. Best for what in what context?

There is no “best framework” in the abstract – despite what sellers of particular frameworks or certifications would have you believe. Even the ITIL IT Management framework has dropped “best practice” in favour of “good practice”.

However, I think there is an even more damning criticism of the notion of exogenously imposed practice: it is based on the Mechanistic-Tayloristic view of people, organisations and enterprises. It assumes people are an undifferentiated commodity – without aptitudes, interests, skills, knowledge, Know-How, abilities, intelligence, judgment, creativity, professionalism or imagination, and, yes, personal competence – who can be made to perform any role in an enterprise by being given a computer-programme-like list of instructions about what activities to undertake and how; who can have any level of competence in any capability just by “reading the manual”.

This is the model of people in enterprises that Bailey has in mind when he talks of “Zombies at typewriters…” and “…following a methodology.”

Real, professional people are simply not like that. The assumption is not valid – machines are like that. If there are people acting like Zombies occupying such mechanistically-defined roles in enterprises designed like they were clockwork mechanisms, then they can be (and should be) replaced with machines, including software-based machines. But most roles and most people are not at all that machine-like. There are limits to software automation that mean it cannot replace the judgment, adaptability and creativity of human brains.

This might seem to be anti-methodology – and supportive of Ian Bailey’s view that “methodologies suck” – but it is not.

The Mechanistic-Tayloristic View, like a lot of people in the IT and Management Consultancy industries, confuses and conflates methodology with (very prescriptive) method.

I’ll quote another expert-in-the-field: “Many authors use the terms ‘method’ and ‘methodology’ interchangeably, especially in the management science and operational research communities. In my view, this is rather unfortunate: in writings on the philosophy of science, and also in some of the systems literature (see, for example, Checkland, 1981), ‘method’ and ‘methodology’ have a distinct meaning that can be most useful. A ‘method’ is a set of techniques operated in a sequence (or sometimes iteratively) to achieve a given purpose. A ‘methodology’ is the set of theoretical ideas that justifies the use of a particular method or methods. … If one wanted to be cynical, one could say that this degraded use of the term ‘methodology’ is a symptom of the ‘dumbing down’ of operational research: treating methodology as method places the theoretical and political assumptions made in the construction of methods beyond critique.” [3].

What the critique of “best practice” shows is that there is no universally applicable method for organisational improvement, no standard recipe for success, no one ‘right answer’, no sequence of techniques to be mechanistically followed that will always produce good results.

There is no universal mechanistic design for the successful enterprise that comprises unthinking people (zombies) following a set of prescribed methods in their individual functional siloes, whatever field of endeavour the enterprise happens to be in.

Nor is there a mechanistic process that could result in an assuredly-successful design. In IT terms, it is an intractable problem: there is no effective procedure guaranteed to produce a successful design. This Mechanistic-Tayloristic thinking is one of the every-day mistakes and consequences of what I called “Steam Age naïve realism” in my article, The STREAMS Confluence.

But this mechanistic activity is not what professionals do. A defining feature of professional practice is what the academics call “reflective practice” and what practitioners call “continuous professional development” (CPD).

This means critically reflecting on and evaluating the methods and techniques used – in the light of personal experience, the results achieved, and the expanding knowledge-base of theory and research.

Being critically reflective means thinking critically about methods, and using accumulated theory, judgment, knowledge and experience to decide what methods to apply in the enterprise context – and how – to achieve the desired aims, results or goals of the enterprise.

Being critically reflective means thinking about the assumptions that underlie and justify the methods and questioning whether those assumptions are valid in the enterprise context in which the methods are used.

Being a professional, critical thinker means not applying methods because the manual, or some institutionalised process says so, but asking whether the methods are likely to be successful in context. Being a professional means consciously and deliberately selecting and applying appropriate methods and techniques ie using some methodology. Being a professional means using relevant methods and techniques from any and all commensurable methodologies, and blending them into a tailored methodology adapted to the context of the enterprise.

Every methodology worthy of the name (and not a method masquerading as a methodology) asserts the necessity of its adaptation to the local context ie that it has to be tailored to its particular usage context. This is how methodologies are an essential part of context-sensitive professional (good) practice and why I think methodology is definitive of professional practice.

So where does this leave certification, particularly EA certification? Ian Bailey and Martin Fowler are right, I think, that certification does not imply competence.

Certification is a verification of Know-That, but competence is founded on Know-How.

Certification is a badge that says “I Read the Manual”. But professionals in any field, EA included, exercise their personal responsibility to develop their own knowledge of the methods and techniques they may use.

So professionals may be expected to have not little or no certification but many certificates: perhaps two or three in EA methods and techniques, and more in related areas like Programme Management, Risk Management, Systems Engineering, Governance and Compliance, etc.

But these should be set in the context of a wider personal knowledge-base that includes lots of unverified knowledge-topics. And this again should be set in a broad trans-disciplinary education spanning both technical subjects (like an engineering degree of some sort), and social/organisational subjects (like postgraduate studies in business, management, economics or organisation theory).

This education and continuous professional development, however, needs to be set in the context of a history-of-praxis spanning several organisations, because applying ideas in different organisational contexts allows the professional to see which elements are general lessons and which are mere artefacts of local organisational culture.

Praxis is the activity of putting theory into practice – of using ideas to change the world. Developed praxis based on a wide knowledge of effective methods and techniques, ie methodology, is where transcendent, practical Know-How and competence are developed. It is the hallmark of professionalism and competence in any field.

Recruitment systems in organisations should learn how to evaluate Know-How by assessing people’s personal praxis-history, including their CPD history, rather than by lazily inspecting badges of certification – on the invalid assumption that Know-That equates to Know-How and competence. Do the recruitment systems in your organisation work like this? If not, how does your organisation ensure that it acquires the competences it needs?


[1] Newell, S., Robertson, M., Scarborough, H., Swan, J., (2009), “Managing Knowledge Work and Innovation” (2nd Ed.), Palgrave-Macmillan.

[2] Harrington, H.J., (2004), “The Fallacy of Universal Best Practices”, Total Quality Management, Vol. 15, No. 5-6, pp. 849-858.

[3] Midgley, G., (2000), “Systemic Intervention: Philosophy, Methodology, Practice”, Kluwer Academic / Plenum.

