
Adapting to Context is great, interpretation is terrible

Let me tell you a story about the worst Sprint Review I’ve ever seen. 

I had been invited along as part of an engagement to rescue an Agile Project in serious trouble. 

It was held in a Board Style Room, the kind with the long oval table. The Product Owner sat at the head of the table and the remainder of the “Scrum Team” sat around the bottom of the “U”.

There was not a computer in sight.

And then, one by one, each person gave a report to the Product Owner – basically listing, in quite extraordinary detail, what they’d done over the past two weeks. Imagine a small child giving a report about “What I did over my holidays” and you have the flavour. It was a litany of relentless activity. I could barely keep up. Most developers were reading from quite detailed notes that they’d obviously prepared in advance. Most timesheets aren’t this detailed!

But at the end of each speech – the PO would smile benevolently and say “Well Done, Next” – and then the next person took their turn.

This went on for well over an hour, close to two.

At the end of the meeting I had to ask “What meeting was that?”

“Sprint Review of course!  Surely it was obvious!  You’re the expert!”

“How, was, that, A Sprint Review?” I replied.

And then a line from some Scrum Guide or other was parroted back at me: “At Sprint Review, the Team demonstrates to the Product Owner what they accomplished.”

“Software,” I said. “Software that they, as a team, managed to build.”

“Oh, we don’t have any Software Ready. We never do.”

Velocity and the Sunk Cost Fallacy

2015 seems to be the year of Velocity. For whatever reason, both the haters and the promoters have been out in full force for much of the year.

So there has been a lot of discussion about it recently in both coaching and client circles.

But one of the more surprising facts for people who are attempting Scrum is that not all work counts towards Velocity and that this is by design.

Groom to Ready, Sprint to Done

The majority of the time there are two strands of work going on inside a Scrum Team – Grooming and Delivering.

The Delivering is the part that most people are familiar and comfortable with.  Given a set of defined planned deliverables, how many were we able to complete to a pre-defined and fixed level of quality?

Scrum, however, being almost aggressively Agile, only regards “Software as its measure of progress” and thus only counts the Delivery of Software as progress.

Thus Grooming never really contributes positively to Velocity, but it can detract from it.1

The Upside Of The Divide

So if your skills primarily fall under what Scrum would class as “Grooming”, then all of this may leave you feeling a little hard done by. Where are your points?! You worked hard!2

Until you come to the realisation that in order to get points you have to estimate points.

Not so keen now, are you?

Nobody really likes estimating, but there is a massive difference between an actual estimate “I think this can be done and I think this is roughly how long it’s going to take” and a forced estimate “I actually have absolutely no idea how long this is going to take, but you’re forcing me to give you a number, and so here you go, do you like this one? If not, I have others…”3

Much (but it has to be said, not all) of what we do in Grooming is rather open-ended – Design, UX and so forth – and it also doesn’t tend to fit neatly into two-week time-boxes.4

And this is precisely why Grooming doesn’t count towards velocity – it encapsulates the “fuzzy part” of Software Development.  

The stuff we have trouble quantifying upfront.  

It’s actually kind of cool.

Creating Pressure to Do Something™

So here is the interesting part.

Grooming doesn’t directly contribute to your velocity per se – but it certainly can affect it!

Think about it like this.

Imagine a Team that has a Nominal Starting Velocity of 20.

Now imagine that they only have 8 points of “Ready Backlog”.

If they’re playing by the Strict Rules of Scrum5 then this means that even though they have the “capacity” to deliver 20 points of “stuff”, they only plan in 8, and thus their Velocity will only be 8!

This “sub-optimal” Velocity has a simple cause6 – not enough stuff is “Ready” – and the only way to increase Velocity is to get more stuff Ready. (Which is what I’m assuming my hypothetical team actually spent their time doing.)
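
To make that arithmetic concrete, here’s a minimal sketch in Python (the function name and the numbers are mine, purely for illustration – this isn’t part of Scrum or of any tooling):

```python
def plannable_points(nominal_velocity: int, ready_points: int) -> int:
    """A team playing by the strict rules only plans Backlog Items that are
    'Ready', so the points it can plan (and therefore deliver) in a Sprint
    are capped by whichever is smaller: capacity or Ready work."""
    return min(nominal_velocity, ready_points)

# The hypothetical team above: capacity for 20 points, but only 8 points Ready.
print(plannable_points(nominal_velocity=20, ready_points=8))  # prints 8 – Velocity is throttled at 8
```

Getting more items to “Ready” is the only lever that raises that cap.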

Starting a Project the Scrum Way

So the upshot of this is that if you start a Project with Scrum – there is a natural pressure to stop navel gazing and start trying things.

Because it’s only when you start trying things that you’ll start generating the oh-so-popular velocity that all the kids are talking about.

If you’re working in two-week Sprints and spend 6 weeks “getting ready”, then it’s going to look like you spent 6 weeks not delivering anything (probably because that’s exactly what’s happened – at least from an Agile Perspective).

Unless…

You don’t start doing Scrum until you’re Ready.

For a bunch of reasons a lot of places don’t use Scrum as described above.

Instead, even once they’re “post Agile” they still spend a lot of upfront time figuring out what they should do.  A lot of time “Getting Ready”7

Perhaps it’s out of habit or organisational convention, or perhaps it’s simply out of a deep-rooted fear as to what an idle developer might get up to. For the purposes of this post however, it’s sufficient to know that it’s been done.

Sunk Costs Sink Submarines

So now, imagine that instead of starting off a Scrum Project and feeling a subtle pressure to start delivering something and getting feedback on it,8 there is a different pressure.

This pressure is the pressure to realise some value out of all the time and money that “we’ve” already spent.

If you spend 6 months and $500K before you start sprinting – then the sunk cost fallacy9 is going to drive you (or more likely your customers and stakeholders) to try and realise some value out of that exercise.  Whether or not that’s even possible.10

To make matters worse, your customer will also probably be positively allergic to non-determinism and exploration by this stage, because from their point of view all that nonsense should be done by now.

As such – they’re going to want to see a lot of Velocity and Story Points on everything.  We know what we want – deliver the damn thing!  Yesterday! MOAR VELOCITY!

Sailing the Seas or Hiding from the Waves

The bottom line here is that discovery and delivery are both equally important.  Delivery without discovery may well be pointless and discovery without delivery definitely is.

Scrum is far from perfect, but it does provide a mechanism whereby we can attempt to balance the two and successfully navigate our way across the choppy seas of complexity to Project Success Island.

But perhaps unsurprisingly, it works best when embraced holistically and wholeheartedly. And part of that is coming to terms with embracing uncertainty and navigating through it, rather than trying to hide from it by ducking under the waves in the Discovery Phase Submarine.


  1. Considering that a well-behaved Scrum team will never start work on Backlog Items that are Not Ready, it’s more than possible that such a Scrum team will be unable to plan a “Full Sprint” worth of work – and thus their “velocity” is throttled by the amount of “Ready” work available. 
  2. The Problem here however is not Scrum, Velocity or even Story Points™ – the problem here is with your organisation’s understanding and misuse of Velocity. 
  3. Forced Estimates are interesting.  And by interesting I mean evil.  Because they’re just completely and utterly made up they tend to reinforce all the pre-existing stereotypes that many organisations already have regarding estimates in software development.  Most people are actually quite happy to agree to a completely new estimate if the original estimate was completely fabricated to begin with; as the objective here is clearly to supply a number, not the creation of a meaningful delivery forecast. 
  4. And trying to force it to fit usually ends badly. YMMV. 
  5. This phrase always makes me think of the golfing scene in Goldfinger.  Ironically, this is where James Bond cheats in order to win the game. 
  6. Which if you’ve ever been a jobbing ScrumMaster® is actually a nice change… 
  7. With the ironic position being that even after all this time, because “Scrum” has not been considered to have started, no Backlog Items are actually “Ready” yet. 
  8. Which also means starting to reveal some of our assumptions. 
  9. “Throwing Good Money After Bad” – the sunk cost fallacy or “Escalation of Commitment” is a cognitive bias which drives people to continue down a course of action simply because they have invested so much in it to date.  It is specifically a problem when the chosen course of action is now known to be the wrong one.  The fallacy itself prevents decision makers from even considering that they are taking the wrong course of action.  For a personal example – if you are at a fixed-price, all-you-can-eat buffet, you are falling victim to the sunk cost fallacy if you eat well beyond your comfort level (especially if the food is not good), because you feel the need to “eat your money’s worth”. 
  10. And guess who’ll be getting the blame if it’s not possible to realise any value… 

The Problem with ScrumMaster® as Process Police

This is a common view.  “The ScrumMaster enforces the process”.

In fact, somebody enthusiastically told me this only last week.

It’s succinct, seemingly clear and it’s also close to useless.

Leaving aside that it’s probably a gateway drug to Theory X1, consider the following:

Most Scrum implementations these days are far from “by the book”; in today’s interrupt driven corporate culture, when was the last time you saw a team genuinely “ring fenced”?

And this is the problem – how do you enforce “the process” when the process is so incompletely and informally defined?  Most organisations don’t codify the fact that “Our Scrum Teams can be interrupted whenever we feel like it” and would you want to?

What process are we enforcing here exactly?

And would you even want them to?  (Imagine a ScrumMaster refusing to let Team Members attend to an urgent issue in the live environment because that would break the rules.)

And who does the ScrumMaster serve anyway? Scrum as an abstract ideal?  Or the organisation that’s paying their salary?

I say it’s time to ditch the concept of ScrumMaster as Process Policeperson and replace it with the notion that:

A ScrumMaster mindfully considers the intersection of the process and the current situation and makes an informed judgement call for which they take full responsibility.

After all, if all a ScrumMaster had to do was “enforce the rules” we’d hardly need a human for the job.


  1. Theory X Management states that people are inherently lazy and thus must be “made” to work.  This is in contrast with Theory Y, which states that people are able to enjoy work and thus are capable of being self-motivated.  Both Agile and Lean are strongly based on Theory Y – and thus having ScrumMasters that subscribe to Theory X thinking is deeply problematic if you want to get the best out of Scrum. 

Misunderstandabilability

Back in, I think it was 2009, at the Lean Software Systems Conference in Long Beach, California, I had the opportunity to have a brief conversation in the hallway with Barry Boehm. For those of you who don’t know the name, you may still be familiar with his most widely spread idea: the Cost of Change Curve.

It looks something like this:

[Figure: Boehm’s Cost of Change Curve]

It’s most commonly used (in my experience) as a justification for Why We Must Use Waterfall.[1]

I mean, if you look at it, it’s obvious right? You want to get those requirements right! No good finding out that you’ve built the wrong thing once it gets into production! Waaay too expensive.

I gently called Barry on this; kind of / sort of blaming him for arming legions of Waterfall Enthusiasts with a deadly weapon of science and reason. After all, you can talk Value until you’re blue in the face, but nobody wants their costs to go up – at least not by that much.

A strange sad expression passed briefly across Barry’s face as he asked me “What else could it mean?”

If you open your mind and look carefully at the chart again – it might just switch from being a mandate to becoming a warning.

A warning that if you use a Waterfall Style Method, your cost of change WILL go up exponentially towards the end of the project. As it turns out, Waterfall is both the cause of (and, for many people, the cure for)[2] the Cost of Change Curve.

What had happened was that people (lots of people) had misunderstood the message.

I started to think about what else people had misunderstood (and as a result typically misused) in the world of methods and processes – and came up with what I called “The Misunderstandability Index”, reflecting the fact that (for example) Scrum rated very highly on the Misunderstandability Index, whilst the Kanban Method (again for example) ranked a great deal lower.[3]

I largely kept this idea to myself and a few colleagues until a few years later, when I was having a lively conversation with Yuval Yeret at the Speakers’ Dinner after LKNA in Chicago.

We had been having a discussion around whether teams that had utterly failed to grasp Scrum could in fact do Kanban well, or at all. And additionally that it seemed often to be the case that Teams working in environments that were the least suited to Scrum wanted to do it the most.

In order to illustrate a point, I brought up my story about Barry and my Misunderstandability Index. Yuval was excited, and asked me to publish something on Misunderstandabilability.

It’s about ease, not difficulty, in cognition

One key factor I’ve come to understand about misunderstandabilability over the last few years is that it’s about ease in cognition. Not difficulty.

Another example may help. At the same conference, Donald Reinertsen was also speaking. And he was speaking on dense economic models. Lots of maths, lots of big words. The stuff he’s famous for. Good stuff, but hard going.

In that room there was very little misunderstanding going on; largely because there was not a great deal of understanding going on. For many people it was all just too hard – but they knew it was too hard.

Misunderstanding occurs when you think you have grasped something, but in fact you haven’t.[4]

It brings to mind the Twain quote:

It ain't what you don't know that gets you into trouble.
It's what you know for sure that just ain't so.

Misunderstandabilability is therefore the natural predisposition that an idea has towards being misunderstood.[5]

Ideas are not sufficient. If we’re constantly having to tell clients that “it’s not working because they’re doing it wrong” then maybe it’s time we stopped lecturing and started examining the level of misunderstandabilability in our messages.


  1. I personally first encountered the curve in Project Management classes at University.  ↩

  2. This is of course a cure in the same sense of the word that staying drunk is a “cure” for a hangover.  ↩

  3. The lower the score, the more the intent behind your idea is understood by others.  ↩

  4. In psychological terms, you’ve made a substitution in the first instance and are subject to confirmation bias in the second instance. It’s the second part that gets us into trouble, because once we have misunderstood, it’s very hard to suddenly start understanding.  ↩

  5. And in this way I would like to make a clear distinction between “misleading”, which implies an intent to deceive, and something that’s simply high in “misunderstandabilability”, meaning that it’s highly susceptible to misinterpretation.  ↩

Metaphors are about communication, not truth

If a friend says to you:

“I just left my terrible job. I don’t know why I stayed so long. I guess I was just like a slowly boiled frog”

Do you get her meaning?[1]

Do you know that the metaphor is not true?[2]

Does it matter?[3]


  1. That the changes were so slow and gradual that she barely noticed them and thus stayed well past the point she should have done.  ↩

  2. The metaphor is that “If you put a frog in a pan of already boiling water, it will jump out. But if you place it in a cold pan and then slowly heat it, it will sit there calmly until it boils to death”

    Except it’s a complete myth that a frog won’t notice and allow itself to be boiled alive. The fact that this myth is so widespread, however, I take as a good thing, because it means there are very few people wanting to test it out. Although sadly this was not always true:

    Several experiments involving recording the reaction of frogs to slowly heated water took place in the 19th century. In 1869, while doing experiments searching for the location of the soul, German physiologist Friedrich Goltz demonstrated that a frog that has had its brain removed will remain in slowly heated water, but an intact frog attempted to escape the water when it reached 25 °C.

    The moral of which, I’m guessing, is that you can slowly boil a frog – or anything, really – as long as you remove its brain.

    Additionally the entire metaphor is silly if you think about it – because a frog dumped in boiling water would not jump out – it would die.  ↩

  3. IMHO? No. Because we’ve managed to convey a concept clearly and concisely – typically avoiding all the nonsense contained in fn2 above.

    Bottom line here is that if people are using a metaphor to describe something to you – focus on the communication aspect rather than picking apart the factual accuracy of the thing.  ↩

Best for me, best for now (Practice)

In my last post, I covered the concept of “Best for Me” Practice.

In this post I want to extend that a little further into “Best for Me, Best for Now” (Practice).

The Tyranny of “Best”

“Best Practice” is often held up as a gold standard – something to aspire to, rather than to do.[1]

This leads broadly to two dysfunctions:

  1. Aiming too high
  2. Aiming too low

Aiming too high is akin to somebody who desires to increase their fitness through running and reads that “best practice” for fitness running is to run 5km in under 45 minutes, three or more times a week. They walk out onto the street, never having run a step in their life, and hop to it.

Aiming too low is akin to somebody who reads the same advice and decides that “well, given that I’m 80kg overweight and struggle to walk down to the shops, I could never do that.”

Neither case is likely to ever reap the benefit. (And if it’s not obvious, aiming too high can rapidly lead to aiming too low, with a short bout of depression and self loathing in the middle)

Focussing on Outcomes, not Activities

Ultimately the real problem with Best Practice is that it places the focus on the means and not the ends. There is a massive leap of faith that by doing this, we’ll get that. And if you’re not getting that fast enough (or at all), then just do this harder.

Rather than trying to run 15km a week, both our hypothetical couch potatoes would be better off taking on board a “Best for Me, Best for Now” approach.

The outcome they want is a longer healthier life; and the best first step is probably along the lines of:

  1. Walking a little more than they currently do
  2. Making some dietary changes
  3. Adding other physical activities that they enjoy and will sustain and won’t cause injury [2]
  4. Tracking some metrics – looking for correlations (see the sketch below)[3]
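
As a toy illustration of that last point (the numbers, and indeed the whole setup, are invented for this post – a sketch, not a recommendation of any particular tool), a few lines of Python are enough to check whether weekly walking and weekly weight change move together:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical weekly data: total steps walked and weight change (kg) over the same weeks.
weekly_steps = [22_000, 35_000, 41_000, 18_000, 52_000, 47_000]
weight_change_kg = [+0.3, -0.2, -0.4, +0.5, -0.6, -0.5]

# Pearson's r: a strongly negative value suggests "walk more, weigh less" is worth
# pursuing – but it is only a correlation, not proof of causation.
print(correlation(weekly_steps, weight_change_kg))
```

Nothing more sophisticated than that is needed to start noticing the pattern described in footnote 3.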

“Best” Practices, like MVPs,[4] should be considered as hypotheses.

If you did those four things – then you would rapidly[5] become a different person with a different capability (and potentially even different goals).

And thus Best for you, Best for now would change.

And that’s a good thing.


  1. And if it is seen as something to do, then often the question turns to why aren’t you doing it now?!  ↩

  2. When you focus on the outcome, you free yourself up to explore other alternate paths to reaching it. Running is a way to get exercise, it’s not the only way. Considering that when it comes to weight loss, exercise is more about improving mood than it is about burning calories, it’s doubly pointless to do something that you don’t enjoy.  ↩

  3. Perhaps measuring this with a FitBit and some bathroom scales. Noticing that on the weeks they walk more, they lose or at the very least maintain their weight and feel better to boot.  ↩

  4. Minimum Viable Product  ↩

  5. OK, this would depend also on your age, starting weight and whether or not you had any pre-existing glandular or metabolic conditions, but hopefully you get my point.  ↩

Best (for me) Practice

What do we mean when we speak of Best Practice?

Best Practice is a rather polarising term. Very few people are neutral about it. It’s either the watchword of an organisation or an instant irritant. But what does it actually mean?

Genuine Best Practice

As previously discussed, in Authentically Simple Systems the term “Best Practice” is entirely valid. Given the current body of knowledge, there is One Best Way to perform this task.

To do otherwise would be, by definition, either inefficient or unnecessarily prone to error.

But this term is used widely outside of the Simple Domain; and not ironically.

Best (for me) Practice

A lot of the time “Best Practice” is really just a shorthand for “Best for Me” Practice.

But this shorthand can actually hide two completely different intents…

Best (for me) Practice, given that I don’t want to take responsibility for my own actions

This is, at least for me, the darker and more frustrating meaning I’ve encountered.

It often equates to “just tell me what to do”, with the subtle undertone of:

“If this doesn’t work, then it’s not my fault, because you told me it would work. I didn’t fail, the practice did. And so did you for recommending it.”[1]

Best (for me) Practice, given my context and your knowledge

If you’re not a Complexity Geek, then you’re probably not as hung up on the term “Best Practice” as some people are.[2] To you it implies something actually far more woolly, subtle and frankly more complex[3]:

“Given everything you’ve seen about our situation so far, combined with everything you know and your general experience, what are the best recommendations you have for us?”

Which to me, seems fair enough. Even if I still rankle occasionally at the term.

So how can we tell the difference? (And also avoid a pointless argument about nomenclature?)

Practice or Principle?

One aspect of the concept of “Best Practice” that is, I think, universal (regardless of the meaning you attach to the term) is that a Best Practice is something that’s been tried before by somebody else.[4]

The logic is sound (if not overly courageous):

Somebody else has already taken this risk, figured this out and now I’m going to reap all the rewards, while taking none of the risk.

This all seems fair and theoretically puts a new spin on our two “Best for Me” cases above, effectively merging them. Both are simply risk averse and wanting help.

But now we’re back to square one – because it’s only for systems that live in the Simple Space that cause and effect is infinitely repeatable.[5]

And if that’s true, then Best Practice may not be as simple as it seems after all.[6]

Fundamental Attribution Error and confusing correlation with causality

Just because somebody has tried something before and they were successful (even repeatedly so), doesn’t mean that it was those practices in particular that caused (or even contributed to) their success.[7]

As science progresses, every day we’re discovering new evidence that much of what we considered causal is at best merely correlated, and in some cases completely unrelated.[8]

Best Principles? (or how to tell the difference)

And this is how you can tell the difference between a “Best for me in my context” and a “Best for not taking any responsibility” intent.

Those folks who are genuinely interested in better outcomes for their problems in their context will take on board the concept of Best Principles – Principles and heuristics that successful organisations use in order to develop their own (evolving set of) Best Practices.

And those who are interested only in shirking responsibility? They’ll listen politely and then quietly insist that you tell them what to do.[9]


  1. You could argue that this is precisely why branded methods are so popular. They’re less about buying a solution than they are about acquiring a scapegoat.  ↩

  2. Also, well done for reading this far.  ↩

  3. Ironic no?  ↩

  4. And hopefully shown to work, because otherwise we’ll have to class “Tilting at Windmills” as Best Practice too.  ↩

  5. To make this clearer at the very least you’d have to be doing exactly what the other organisation was doing – in which case I would question where your competitive advantage was coming from. But even then, you’re almost certainly doing it with a different set of people.  ↩

  6. Pun partially intended.  ↩

  7. I’ve met the odd entrepreneur that attributes Apple’s success directly to the less cuddly parts of Steve Jobs’s personality; apparently nobody else at Apple does jack squat.  ↩

  8. By which I mean there is no scientific evidence to support any kind of relationship whatsoever; however likely or plausible it might seem to the layperson that there is a link. This seems especially true of dietary advice.  ↩

  9. Whether or not this is because of an innate character flaw or simply that this is what they’ve been incentivised to do is a topic for another day.  ↩

Using Narrative as an alternative to rules of Best Practice

In my last post, I spoke about the limitations of Best Practice.

I can imagine that some of you may be thinking:

“Well Mr Bennett, it’s all well and good to poke holes in rules of Best Practice, but what do you suggest we do instead!?  We can’t just have a free for all!  How will people know what to do?”

A free-for-all is not at all what I’m suggesting.  In fact, far from it.  The deep and lasting irony about using Best Practice outside of its appropriate domain of use is that you get less actual control as people game the system to get around it.

Let’s use a concrete example to illustrate the point I’m talking about.

You might for example be using a process control framework (let’s call it Scrum) – and you can have a simple rule:

Rule A) “Only the Product Owner may abnormally terminate a Sprint”

OR

Rule B) “Only the Team may abnormally terminate a Sprint”

OR

Rule C) “Either the Team or the Product Owner may abnormally terminate a Sprint”

There have been passionate discussions (mostly online) as to which one of these rules is “right”, and evidence of one kind or another exists to support all of them.  Clearly, as written, they cannot all be right.

Which one is Best Practice?

Rules are used to guide behaviour and also provide an objective justification for punitive action towards rule breakers.

In our society we are all aware of the punitive actions associated with our criminal justice laws.

But a rule about terminating a Sprint is a process rule, not a criminal justice law.  If a criminal justice law is broken, it is tested in a court of law, which provides guidance as to whether the rule was in fact broken and what the appropriate punishment should be.  When was the last time this happened on your project?

Process laws are therefore less about punishment and more about guidance.  But how suitable are rules for this purpose?

Rules like these unfortunately offer very little guidance for complex situations.   And the more general they are, the less guidance they seem to offer.  You could easily argue that Rule C could be replaced by “Anybody except the ScrumMaster may terminate a Sprint” – it may sound silly, but really, what is the effective difference between that statement and the rule as described?

The problem with rules is that to be clearly enforceable, they need to be pretty specific – which is OK when your rule genuinely is “Best Practice”, because everything apart from following the letter is going to be an undesirable outcome.  But if anything less than perfect is your aim, then simple statements such as “Only the Product Owner may terminate Sprints” begin to lose their utility, and we instead direct people towards a “not terrible, but not great either” outcome – in many cases like these, the focus tends to be not on excellence but instead on preventing horrible failure.  The problem with this approach should be obvious: you drag down the great outcomes to protect yourself from the bad.  Maybe it’s me, but I don’t see a competitive advantage in that approach.

Narrative is a good option – in this series of narrative fragments I have provided an alternative to simple rules, and one in a very human form: the form in which our race has been learning and passing on knowledge for centuries.

Instead of a single point, there is a rich tapestry of guidance and learning.  One to which the team and organisation can add over time as they share their successes and failures.

[Figure: Rules & Narrative Fragments]

Best Practice is not a panacea

Best Practice is a term that’s bandied around a lot these days.

It appeals to wishful thinking, and speaks to our fundamental human fear of the true nature of reality.

The very idea that there is “one right solution” helps provide the illusion that the world is a well ordered place.  Or at the very least could be, if we just tried hard enough.

However, as students of complexity are well aware, the only legitimate domain in which to apply “Best Practice” is the Simple Domain.  This is a domain where Cause and Effect are obvious to all, and also endlessly repeatable.  (And, as such, prime candidates for codification.)

[Figure: The Cynefin framework (Feb 2011)]

But most of life, especially knowledge work, does not fall into this appealingly controllable space, as much as we might like it to.

To increase our feeling of control, we often like to codify our notions of Best Practice into rules.  After all, if something is truly “best” then all other options are inferior and should therefore be avoided.

We then distribute these rules out to the population of humans whose behaviour we wish to control and sleep soundly knowing that our job is done and the world will be a better place from now on.

Because of course, everybody will follow the rules, and thus both their behaviour and the outcomes that they produce will therefore be “optimal”.

But what if we don’t trust people to follow our rules?

Why then, for the people’s own good (remember, we’re talking about the one best way here), we select some people that we do trust and have them enforce both our rules and their precious Best Practice payloads.

In the average corporation we might call these Best Practices by other names such as “policies”, “rules” or “procedures”, and we call our trusted caste “managers”, which we can then arrange in a hierarchy so that spatial position corresponds to the level of trust we place in each of them.  Everything is now shipshape and Bristol fashion!

However, as I’m sure we’ve all experienced first hand, the human mind likes to game the system.  We do so for all manner of reasons: for fun, for malice, for personal gain and oftentimes just to get our job done.

The map does not always match the territory.  Maybe the rule is outdated, inapplicable to our current circumstance or simply badly or meanly written. Regardless, if it conflicts with our superordinate purpose then it gets in the way of the creation of value and betterment.

Regardless of the motivation, the instant a rule is created, people will be looking for ways to get around it, but without risking censure.

The unspoken mandate then becomes to follow the letter, but not the spirit of the rule, and therefore the same applies to the application of the “best” practice it was designed to engender.

This can lead to an arms race between the rule makers and the rule breakers. And so, rather than simply having one rule to follow, we have many.  Each describing the “Best Practice” version of the events that our inconveniently highly variable universe has chosen to throw at us.  Each created in retrospect for a situation which may never arise again.

Which brings us all the way back to the concept of attribution error that I spoke about here.

The rule makers’ attribution error is almost certainly that they believe that every problem is Simple and that Best Practice is therefore universally applicable to every situation.

If that’s your belief, then when you discover that the rules are not in fact engendering your desired outcomes, your reaction will almost certainly be to add more rules, or to find ways to enforce the ones you have.  It will keep you busy, that’s for sure; it might even keep you entertained.

But it’s not going to give you better results.

If it’s the wrong tool for the job, it’s irrelevant how skilled you are at using it.

You cannot fix a broken watch with a chainsaw.

But once you see the world for what it is – that it can be complicated, complex and chaotic – then you can achieve success by applying methods appropriate for managing those domains and give your rulebook a much-needed rest.

Attribution Errors and the Importance of Context

I recently read a fascinating study that was done in the 1950s by a psychoanalyst named Allen Wheelis.

What Wheelis observed during the ’50s was that classic Freudian Analysis techniques were no longer working as often as they used to.

Simply put: The number of people who were “cured” by these techniques was significantly lower than had previously been the case and it was getting worse.

Leaving aside, for the moment, the large portions of Freud’s work that have now been discredited, let’s examine the core theory behind why something that had previously been empirically shown to work no longer did.

A Different Time

Freud’s work was anchored in the Victorian Era, an era and a culture dominated by very strong wills, opinions and morals.  It was very much a culture of “character” and “self-discipline”.

Freudian therapies reflected this situation, as they focussed very heavily on ways in which one might break through the mental barriers that these strongly opinionated and disciplined individuals had erected and thus reveal to themselves the cause of their neurosis.

Once said cause was revealed, the Victorian ethos of self-discipline would swing into action and work diligently towards correcting “the unsightly defects of their minds”.  And apparently (I wasn’t there) this worked, at least enough of the time to be regarded as a “reasonable approach to the problem” – “empirically proven in the field” if you will.

Oh noes!

By the 1950s, however, Freudian Analysis was beginning to fail, and fail and fail again.

Why?

We now know that the reason is most likely the fact that the personality of the average individual had changed quite markedly by the 1950s.  People were (at least compared to the Victorians) far more relaxed, open and introspective.

As such they achieved insight into the source of their problems far more quickly than their Victorian predecessors ever did.

But once they had discovered the fundamental cause of their woes, unlike the Victorians, they did not have the self-discipline and strength of character to follow through on their discoveries and improve their mental situation.  And Freudian techniques, which were effectively designed for a different personality type, were basically ineffective at strengthening self-discipline and “building character”.

Bummer.

Well there’s your problem…

So how did the majority of the professional community react to this?

In two really interesting ways.

First of all, they congratulated themselves for being so damn clever and talented!

Why?  Because the first phase of Freudian Psychoanalysis was positively rocketing along, that’s why.

They attributed this not to some fundamental shift in the character of their patients, but rather to a combination of the advancements that they and their contemporaries had made to the current body of knowledge and also, of course, to their inherent natural talent and skill.

The second reaction was keyed to and influenced by the first.

“Well, we know that we’re awesome, so the problem must be with Freud’s theories, they must be wrong”

Now, as it turns out, they were right, at least partially; but they were led to the right conclusion for the wrong reasons.  And every time that happens, your success is actually based more on luck than on good management.

So what does this mean for Agile? Lean? Complexity? CALM?

Context is important.  And knowing why something works is also important.  It’s not always enough to see that something seems to work in practice now, and thus assume it will always work in the future.  The world changes, people change and will (I hope) continue to change.

And if this is true, then our approaches need to change and evolve with us too.

Just because something worked once or twice, maybe 5, 10 or 20 years ago, for somebody else in a set of circumstances similar to yours, does not mean that when doing the exact same thing doesn’t work for you in 2012, the reason for your failure is that “you must be doing it wrong”.

If you’re working solely at the practice level, then that could easily be an attribution error, and a potentially costly one at that.

What your apparent failure might mean is that your context is sufficiently different from the previous success story that the approach selected is never going to work, no matter how well you do it.

Which is why I’m going to state again that theory-informed practice is so vital.  If we understand the principles (or at least have usable models which we are willing to throw away once we have better ones) behind why certain practices succeed or fail, then we can operate at the principle level, both to knowledgeably apply appropriate practices to our work and to synthesise and create new ones, in order to confidently and effectively solve our problems in our contexts.  And maybe even advance the state of the art.