If more managers are looking for evidence before introducing something new to their organisations, this is an improvement. For too long, executives involved in the management and development of people tended to seize on every fad and quack remedy that came onto the market. Provided it had a set of impressive tools, lots of easy-to-follow lists, simple categories into which you could slot people, a flashy slide presentation and an aeroplane-read book to back it up, managers bought it. Scientific evidence rarely came into the mix. In fact, a bit of new-age mumbo-jumbo was more likely to impress in certain quarters than mere science. If we are moving away from that and starting to look at some hard facts before jumping onto the latest bandwagon, that can only be a Good Thing.
But can management ever really be evidence-based in the way that science-based disciplines such as medicine can?
Scientific research is governed by a rigorous set of requirements; for example, new research must be testable and repeatable. The problem for any research on management is that the conditions in a workplace are constantly changing, so repeating an experiment, even with the same people, on a different day might yield different results because other organisational factors could be contaminating the environment. It is therefore almost impossible to repeat any experiment in organisational behaviour in the way that an experiment could be repeated in a laboratory. As time moves on, so do people. They might be the same people but they will have been influenced by any number of events between one experiment and the next.
The same applies to organisations. Because each organisation is made up of different actors, an intervention in one might yield different results from a similar intervention in another. This is why management research is more difficult to generalise. For example, a bonus scheme that motivates people in one organisation might demotivate them in another.
And, of course, there is the good old Hawthorne Effect. Sometimes, people just change their behaviour because they are being watched. In such cases, a researcher can’t be sure whether it was his intervention that caused the change or just the fact that he was watching. As Adrian Furnham says, “to observe is to disturb.” A researcher changes an organisation just by being there. As soon as he walks through the door, the organisation is no longer the same place. Contamination of the research environment is almost inevitable in any behavioural experiment.
Rigorous scientific experiments also require control groups. This presents a further problem for management research. To assess the effectiveness of an intervention, an organisation would have to use it for some staff while excluding others, and do so repeatedly until the behavioural outcomes could be conclusively shown to come from the intervention. All fine in theory, but the excluded groups would almost certainly complain about being left out.
Furthermore, live management research suffers from restriction of range because firms only take on people they think are good. For example, to evaluate recruitment tools to scientific standards, managers would have to hire people who failed the process and measure their performance too. Only then could it be conclusively demonstrated that good scores predict good performance and poor scores predict poor performance. Again, few organisations are going to take that risk for the sake of scientific experimentation.
For this reason, a lot of research in organisational behaviour uses groups of students. This study on incentive payments quoted recently by Aditya Chakrabortty is a good example. It tells us how students from Chicago and MIT and a group of villagers in rural India responded to incentive payments. All very interesting and it sheds some light on how people behave when the pressure of high rewards is piled on. Does it tell us how people in Bank X or Government Department Y will respond to an incentive scheme? Not really.
Research in management, then, can never reach the same levels of scientific rigour as, say, medical and pharmaceutical research. The clean research environments, control groups and repeatable and testable results that would be demanded from scientific research are simply unachievable in the field of organisational behaviour.
Management research is not an exact science. Unlike a doctor, who can predict with near-certainty how a human body will respond to a certain treatment, a manager or occupational psychologist can never be entirely sure how people will respond to a particular organisational intervention. Much as managers would love to be able to pull lever X and be sure of result Y, people and organisations don’t work like that. If ever I create a management tool with repeatable and near-certain results, I will let you know – from my tax haven in the Cayman Islands.
None of this is to say that managers should not look for evidence before they develop new practices. The criticism that much of what is claimed to be leading-edge management has no evidence base at all is well founded. In fairness, advocates of evidence-based management (EBMgt) like Rob Briner don’t suggest that managers should look for conclusive evidence, just that they should pay a lot more attention to research before they introduce new techniques into their organisations.
Perhaps we should think of the difference between evidence-based management and science-based disciplines such as medicine in similar terms to the difference between criminal and civil law. Medicine requires proof ‘beyond all reasonable doubt’; for evidence-based management, the balance of probabilities will do.
If we don’t adopt such a view, evidence-based management could become yet another excuse for not taking action. The public sector, especially, suffers from analysis paralysis and the postponement of decisions until more data can be gathered. Anything that reinforces such tendencies should be resisted. The futile search for conclusive proof could be used by some people to put off important decisions indefinitely. Combine the requirement for evidence with the legalistic internal processes that bedevil some parts of the public sector and inertia would rule for ever.
A ‘What If’ piece in yesterday’s XpertHR from Paul Kearns shows what could happen if managers were to become too purist about EBMgt. His elegant ‘reductio ad absurdum’ describes a dystopia in which executives’ decisions and membership of professional management bodies are subject to the same disciplines as those which apply to the medical profession. In this world, the evidence behind all management techniques would have to “satisfy a similar standard to the process of new drug trials in the pharmaceutical industry”. Of course, no management technique would ever pass such a test, so nothing would ever get done and no-one would ever get accredited to Paul’s General HRM Council. Which is, I think, the point he is trying to make. Take something to an absurd conclusion and you get absurd results.
Management is driven by fashions and fads. More emphasis on gathering evidence before rushing to implement the latest technique would save organisations a lot of unnecessary expense. That said, certainty and predictability in management research, or indeed any research into human behaviour, are elusive. The best managers can hope for is to make educated and informed decisions on the balance of probabilities. Even though research into people management is becoming ever more sophisticated, it will never be an exact science. Management will, as it always has, require a combination of clever analysis, good relationships, sound judgement and a lot of good luck.