Selection bias – the failure to study failure

My Twitter stream is full of links to articles and blog posts with titles like, “The 7 must-haves for business success,” or “The 5 things that make a successful entrepreneur,” or “10 reasons why beer drinkers make better businessmen.”

OK, maybe I exaggerated a bit with that last one but you get the picture.

This piece about selection bias by Michael Blastland on the BBC website was a useful antidote. He quotes the work of Jerker Denrell who insists that, to really understand success, we have to understand failure too. His HBR article Selection Bias and the Perils of Benchmarking is well worth reading in full. (You can get it for free if you are a CIPD member.)

Characteristics like risk-taking, confidence, willingness to act on hunches, persistence in the face of setbacks and the ability to persuade others are, he says, typical of successful entrepreneurs. Problem is, they are typical of those who fail in business too. In other words, confidence, risk-taking and all the other good things may indicate that someone is more likely to start a company, but they don’t tell us much about the likelihood of that company’s success. It becomes a truism: successful business founders have the characteristics of people who start businesses. We might as well, says Denrell, point to the fact that all successful business leaders brush their teeth.
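
Denrell’s point can be made concrete with a toy simulation (a sketch only: the “confidence” trait, the founding rates and the 10% survival rate are all invented for illustration). Confidence makes people much more likely to start a company, but survival once founded is pure chance. Anyone who samples only successful founders will nonetheless find them brimming with confidence:

```python
import random

random.seed(42)

def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical trait: "confidence", standard-normal across the population.
N = 100_000
confidence = [random.gauss(0.0, 1.0) for _ in range(N)]

# Confident people are far more likely to found a company...
founders = [c for c in confidence if random.random() < (0.40 if c > 1.0 else 0.02)]

# ...but once founded, success is independent of confidence: 10% survive.
successes, failures = [], []
for c in founders:
    (successes if random.random() < 0.10 else failures).append(c)

print(f"population mean confidence:  {mean(confidence):+.2f}")
print(f"successful founders' mean:   {mean(successes):+.2f}")
print(f"failed founders' mean:       {mean(failures):+.2f}")
```

Successful and failed founders come out with virtually identical, well-above-average confidence scores: the trait predicts founding, not success, which is exactly the pattern Denrell describes.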

He tells another great story about how the US Air Force in the Second World War recorded where its planes got hit most often and therefore proposed to reinforce their armour in those places. That is, until statistician Abraham Wald spotted the selection bias, pointing out that they were only looking at the survivors. The planes that got hit in these places were the ones that made it back. Wouldn’t it be better, therefore, to reinforce the planes in the areas where they hadn’t been hit?
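
Wald’s argument is itself a neat exercise in conditioning, and a toy simulation shows it (the section names and per-hit lethality figures below are invented for illustration, not taken from the historical data). Hits land evenly across the airframe, but only planes that survive every hit make it home to be counted:

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical per-hit chance that damage to each section downs the plane.
lethality = {"engine": 0.80, "cockpit": 0.60, "fuselage": 0.10, "wings": 0.15}
sections = list(lethality)

observed_hits = Counter()  # bullet holes counted on planes that returned
for _ in range(20_000):
    hits = [random.choice(sections) for _ in range(random.randint(1, 4))]
    if all(random.random() > lethality[s] for s in hits):  # survived every hit
        observed_hits.update(hits)

# The section with the FEWEST holes on returning planes is the most vulnerable.
for section, count in observed_hits.most_common():
    print(f"{section:8s} {count}")
```

With these numbers the engine collects the fewest recorded holes precisely because engine hits are the deadliest – counting only the survivors inverts the true vulnerability ranking.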

The “inescapable logic of statistics”, says Denrell, means that managers must study failure as much as they study success:

No managers should accept a theory about business unless they can be confident that the theory’s advocates are working off an unbiased data set.

The trouble is, businesses are made up of biased data sets. Studying failures presents something of a problem for management research. We tend to airbrush failure out of corporate history. We recruit and promote the people who do well in our assessment processes and assume that they do well because we have selected for the right things. However, if we were being truly scientific we would also take on those who didn’t do well, so that we could test our processes and our definitions of good performance.

Imagine the conversation:

“We’ve got three Head of Business Unit roles coming up and three internal executives at the right level. Our High Potential Leaders’ Programme showed that Stella was a star, Mick was mediocre and Dave was a disaster. But, instead of recruiting externally, in the interests of scientific research, I appointed Stellar Stella, Mediocre Mick and Disastrous Dave to each of the roles. This way, we can test whether our HiPo programme really does identify the best leaders.”

“You’re fired!”

It’s just not going to happen, is it?

Selection bias is, to an extent, inbuilt in most management research because to study managers and their behaviour is to study a group of people already selected by previous definitions of merit. Focusing only on the most successful restricts the sample still further. Proper science demands that we set up control groups and test our theories to breaking point by trying to disprove them. That’s never going to happen in a corporate setting. It’s too much of a risk.

It’s all too easy to look for the attributes of the successful and claim that it is these attributes that made them successful. If they are characteristics we share, then we are even more inclined to draw such conclusions. We don’t know whether the failures share those attributes too. We don’t even know whether the failures had other attributes which might, over time, have made them successful, because we have long since dumped them by the wayside. This inevitable failure to study failure means that management can never be truly evidence based.


13 Responses to Selection bias – the failure to study failure


  2. Sukh Pabial says:

    There is a very interesting psychometric tool, developed by the people over at Psychological Consultancy Ltd, called the HDS – the Hogan Development Survey (or, by its other name, the Hogan Dark Side). Essentially, it helps to highlight which strengths a person possesses that can turn against them and ‘derail’ them. An example might be someone who has risen through the ranks because of their confidence and decision-making ability. They then reach a point where these qualities are no longer enough to keep them successful, but neither they nor anyone else is doing anything about it, so the problem just gets worse. The tool presents an interesting way to ‘examine’ these high potentials with some element of rigour that is not based on a performance framework.

  3. Rick,
    A very nice piece, and relevant too. Not sure if we can take consolation in it or despair ;) On a slightly more serious note, if people want to explore the subject further, there is an interesting book I recently read, “Brilliant Mistakes” by Paul J. H. Schoemaker, which attempts to make a case for making more mistakes.
    I would also suggest that there are a few businesses – albeit very, very few, at least in the I.T. sector that I have a view of – that are starting to understand the importance of the scientific method and apply it, if not to their management practices, then to their product development. I guess the Lean Startup movement would fit into that picture.
    Cheers!

    • Rick says:

      I think you are right Marcin, though Evidence Based Management is making some inroads too. I’m not against EBM, btw, I just think we need to see its limitations.

  4. It goes even further than that, of course, Rick – as I’m quite sure you would agree.

    Isn’t it strange how job descriptions tend to fit the ‘right’ sort of person to fit the gap between other similarly ‘right’ workers – who have often themselves drawn up the JD anyway?

    And how strange too that no-one ever seems to look to see what happens to those whom they don’t appoint!

    But in fact, I’m thinking this need no longer be the case. At more senior levels at least nearly all of us are on e.g. LinkedIn. So why not task someone every so often with checking out what happens to those who are NOT appointed to roles, to see if / how / why they (anonymously in terms of any reports) really are less ‘successful’ over the next few years than the person/s who are appointed?

    This exercise might also reveal things about the chosen JDs and their efficacy, as others’ strengths or the converse become apparent.

    It’s still not a proper random trial, or anything like it, but that study would be generally empirically based, if anyone has the resources / courage (?) to undertake it for their organisation.

    I do wonder though if this exercise, even if conducted quite properly, would actually change anything. My experience is that too often panels appoint to suit their own insecurities, needs or hunches, rather than those of the organisation.

    In fact, I sometimes wonder what the organisations actually exist to serve at all….

    Am I being much too cynical? Would proper post-hoc examination of ‘successes’ or otherwise really tell us anything that we’d want to learn from?

    • Rick says:

      Hilary, I’m planning a follow up post next week because I’ve got more to say on this. Firms often say they want spiky people and mould-breakers when really they can’t cope with such people. Often, they screen out what might have been useful mavericks.

      • Hilary says:

        Or even, especially at senior (i.e. threatening?) levels, possibly older women who have minds of their own? Risky, eh?
        And apols again for my cynicism…. wherever do I get these ideas from?

      • Indy Neogy says:

        Looking forward to that next post, Rick.

        I think there’s an interesting problem with British business. On the one hand, they tend to screen out useful mavericks – on the other, when they get desperate they’ll hire a high-flyer from outside “to shake things up” and then fail to support him/her. Naturally, said interloper can’t change the culture all by themselves (who could do it alone, especially in a new corporate politics environment?) and often leaves in frustration “by mutual agreement…”

        The other thing is that blue chip companies in particular are obsessed with credentialism. To the point that good applicants who (for good reasons) attended a high class university in another country are ranked with “lower red brick” candidates or below by HR depts. Seeing a couple of examples of that before the crash, when said companies were gearing up for “a war for talent” really confirms to me some of Hilary’s cynicism…

  5. PS I should have added that JDs and subsequent appointments can also be a way for current personnel to cover their own (real or perceived, by them or others) inadequacies, without having to acknowledge that this is what’s happening…. as per your initial headline.

  6. Thank you for a very interesting article.
    We live in a world that has been “contaminated” with selection choices made on biased data sets. Therefore, as you say, it is impossible to take that “contaminated” data and make an unbiased choice. We would all need to start again.
    On a different note, businesses will never conduct their activities with scientific rigour, as their aim is not to get to the bottom of things but to make a profit. The only exception I can think of is a business whose way of making money is actually getting to the bottom of things.
    Applying science is different: the majority of companies are keen to apply scientific developments provided they see business benefits.

