“How reliable are the social sciences?” asks the University of Notre Dame’s Gary Gutting.
The case for a negative answer lies in the predictive power of the core natural sciences compared with even the most highly developed social sciences. Social sciences may be surrounded by the “paraphernalia” of the natural sciences, such as technical terminology, mathematical equations, empirical data and even carefully designed experiments. But when it comes to generating reliable scientific knowledge, there is nothing more important than frequent and detailed predictions of future events. We may have a theory that explains all the known data, but that may be just the result of our having fitted the theory to that data. The strongest support for a theory comes from its ability to correctly predict data that it was not designed to explain.
While the physical sciences produce many detailed and precise predictions, the social sciences do not. The reason is that such predictions almost always require randomized controlled experiments, which are seldom possible when people are involved. For one thing, we are too complex: our behavior depends on an enormous number of tightly interconnected variables that are extraordinarily difficult to distinguish and study separately. Also, moral considerations forbid manipulating humans the way we do inanimate objects. As a result, most social science research falls far short of the natural sciences’ standard of controlled experiments.
This is not really surprising, given that much of what goes under the broad umbrella of business studies is simply social science (chiefly psychology and economics, with a bit of sociology and anthropology) applied to an organisational setting.
Like the social sciences, management research will never have the same reliability as ‘proper’ science. But we’d like it to, which is why we dress it up in scientific clothes, with lots of graphs, calculations and equations.
I’ve sat in assessment centres where people have gone to great lengths to add up all the carefully weighted scores from the various exercises, only to discover that the numbers give them the ‘wrong’ answer: the candidate that everyone likes is not the winner. Now clearly, if the preferred candidate is way adrift of the winner, you need to ask some questions about your collective judgement. But if there are only a couple of points in it, does it really matter?
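The weighted-scoring arithmetic described above can be sketched in a few lines. The exercise names, weights and scores here are all hypothetical, purely to illustrate why a small gap in the totals is meaningless: if each underlying score is a subjective judgement with, say, a point of rater uncertainty either way, that uncertainty carries straight through the weighted sum.

```python
# A minimal sketch of assessment-centre weighted scoring.
# All exercise names, weights and scores are hypothetical examples.

EXERCISE_WEIGHTS = {"group_exercise": 0.40, "presentation": 0.35, "interview": 0.25}

def weighted_total(scores):
    """Combine per-exercise scores (0-10 scale) using the fixed weights."""
    return sum(EXERCISE_WEIGHTS[ex] * s for ex, s in scores.items())

candidate_a = {"group_exercise": 7, "presentation": 8, "interview": 6}
candidate_b = {"group_exercise": 8, "presentation": 6, "interview": 8}

total_a = weighted_total(candidate_a)  # 0.4*7 + 0.35*8 + 0.25*6 = 7.1
total_b = weighted_total(candidate_b)  # 0.4*8 + 0.35*6 + 0.25*8 = 7.3

# If each subjective score carries roughly +/-1 point of rater uncertainty,
# the weighted total inherits roughly +/-1 point of uncertainty too (the
# weights sum to 1), so a 0.2-point gap between candidates is well inside
# the noise of the original judgements.
gap = abs(total_a - total_b)
print(f"A: {total_a:.2f}, B: {total_b:.2f}, gap: {gap:.2f}")
```

The point of the sketch is not the arithmetic, which is trivial, but that the precision of the output is entirely illusory: the decimal places come from the weights, not from any precision in the judgements being weighted.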
What people often forget is that they have taken subjective judgements, in this case scores for exercises, and turned them into numbers. What is more, even the assessment instruments they have used, while supported by considerable bodies of research, suffer from the limitations of social science described above. They too contain an element of subjectivity. It makes no sense, therefore, to treat such data as if it were objective. It starts off being subjective and, however many numbers and formulae we put around it, it stays subjective.
Where people are involved, there will never be accuracy. That isn’t to say we should ignore behavioural science and management research completely. The insights and tools that come out of them give us a wider range of data on which to base our decisions.
But it’s never going to be ‘right’ – not in the scientific sense. It’s rather like the difference between criminal and civil law. Science-based disciplines like medicine and pharmacy require ‘beyond all reasonable doubt’. For social sciences and management research, the balance of probability is about as good as it will ever get.