How reliable are the social sciences?

“How reliable are the social sciences?” asks Notre Dame University’s Gary Gutting.

Not very.

The case for a negative answer lies in the predictive power of the core natural sciences compared with even the most highly developed social sciences. Social sciences may be surrounded by the “paraphernalia” of the natural sciences, such as technical terminology, mathematical equations, empirical data and even carefully designed experiments. But when it comes to generating reliable scientific knowledge, there is nothing more important than frequent and detailed predictions of future events. We may have a theory that explains all the known data, but that may be just the result of our having fitted the theory to that data. The strongest support for a theory comes from its ability to correctly predict data that it was not designed to explain.

While the physical sciences produce many detailed and precise predictions, the social sciences do not. The reason is that such predictions almost always require randomized controlled experiments, which are seldom possible when people are involved. For one thing, we are too complex: our behavior depends on an enormous number of tightly interconnected variables that are extraordinarily difficult to distinguish and study separately. Also, moral considerations forbid manipulating humans the way we do inanimate objects. As a result, most social science research falls far short of the natural sciences’ standard of controlled experiments.

Which is pretty much what I was arguing here and here about the study of management.

This is not really surprising, given that a lot of what goes under the broad umbrella of business studies is simply social science, especially psychology and economics, with a bit of sociology and anthropology, applied to an organisational setting.

Like the social sciences, management research will never have the same reliability as ‘proper’ science. But we’d like it to, which is why we dress it up in scientific clothes, with lots of graphs, calculations and equations.

I’ve sat in assessment centres where people have gone to great lengths to add up all the carefully weighted scores from the various exercises, only to discover that the numbers give them the ‘wrong’ answer: the candidate that everyone likes is not the winner. Now clearly, if the preferred candidate is way adrift of the winner you need to ask some questions about your collective judgement, but if there are only a couple of points in it, does it really matter?

What people often forget is that they have taken subjective judgements, in this case scores for exercises, and turned them into numbers. What is more, even the assessment instruments they have used, while supported by considerable bodies of research, suffer from the limitations of social science described above. They too contain an element of subjectivity. It makes no sense, therefore, to treat such data as if it were objective measurement. It starts off being subjective and, however many numbers and formulae we put around it, it stays subjective.
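
A quick sketch in Python shows how fragile those weighted totals are; the exercises, weights and scores here are all invented for illustration:

```python
# A hypothetical assessment centre: four exercises, each scored
# subjectively on a 1-10 scale, combined with fixed weights.
WEIGHTS = {"interview": 0.4, "presentation": 0.2, "group": 0.2, "in_tray": 0.2}

def weighted_total(scores):
    """Combine the exercise scores into a single weighted number."""
    return sum(WEIGHTS[name] * score for name, score in scores.items())

candidate_a = {"interview": 7, "presentation": 8, "group": 6, "in_tray": 7}
candidate_b = {"interview": 8, "presentation": 6, "group": 7, "in_tray": 7}

print(round(weighted_total(candidate_a), 2))  # 7.0
print(round(weighted_total(candidate_b), 2))  # 7.2 -- B 'wins' by 0.2

# Nudge a single subjective rating by one point, well within the
# disagreement you would expect between two assessors...
candidate_a["group"] = 7
print(round(weighted_total(candidate_a), 2))  # 7.2 -- and it's a dead heat
```

The decimal places suggest a precision that the underlying judgements never had: one assessor scoring one exercise a point differently wipes out the whole margin.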

Where people are involved there will never be the precision of the natural sciences. That isn’t to say we should ignore behavioural science and management research completely. The insights and tools that come out of them give us a wider range of evidence on which to base our decisions.

But it’s never going to be ‘right’ – not in the scientific sense. It’s rather like the difference between criminal and civil law. Science-based disciplines like medicine and pharmacy require ‘beyond all reasonable doubt’. For social sciences and management research, the balance of probability is about as good as it will ever get.


9 Responses to How reliable are the social sciences?

  1. Pingback: How reliable are the social sciences? - Rick - Member Blogs - HR Blogs - HR Space from Personnel Today and Xpert HR

  2. B.O. Locks says:

    I think it may be possible to model social systems under various assumptions about human behaviour (parameters). The uncertainty and heterogeneity of behaviour, if and where it exists, can be modelled with statistical distributions. Once successful models have been formulated they can be tested using computer simulation software. Computer simulation allows repeated controlled experiments to be performed, thus avoiding the moral, ethical, and practical issues that arise from experiments with real people. The simulations may lead to conclusions that some behaviours can be predicted and are identical whatever the parameters and initial conditions of the model. So I am not sure it is correct to say that the application of scientific method to social science does not bear worthwhile fruit.

    I agree that recruitment decisions based on numbers attached to subjective judgements, in an attempt to produce objective outcomes, may produce sub-optimal results. However, such a methodology may be useful in other contexts, such as determining an individual decision maker’s attitude to risk.

  3. Simon Alford says:

    I think you may be exaggerating the accuracy of the natural sciences. Take engineering as an example: engineers often do their calculations, then whack on a 100% safety factor. The tendency of the social sciences to “mathematise” themselves may also generate spurious accuracy; maybe it detracts from understanding. Financial models use some of these techniques too, and we’ve seen what can happen with those, where the model is the market.

  4. DM says:

    Just a little note. The natural sciences rely on reproducibility, independently of time. A sample of lead behaves the same in the 19th century and in the 21st century. In contrast, a theory about 19th-century economies cannot be tested, simply because the world has evolved and it is impossible to recreate 19th-century society just for the sake of experiments. At the same time, it would seem overly ambitious to create a theory of economics that would work for any kind of society (agricultural, industrial, post-industrial), any political regime, etc.

    Furthermore, if a theory is true, publishing it can alter reality so that it is no longer true; whereas in the natural sciences, the universe does not change its laws according to what has been published. I heard of an example regarding the relationship between inflation and unemployment (maybe I’m wrong on this example): once the theory linking them had been published, trade unions began anticipating its predictions and thus requested (and obtained) salary hikes which broke the relationship.

    But, right, as an economist friend of mine said: with enough parameters in a model, you can fit anything.
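
    That quip is easy to demonstrate with a few lines of Python (numpy, invented data): give a model as many free parameters as there are observations and it will fit pure noise exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = rng.normal(size=10)  # pure noise: there is no relationship to find

# A degree-9 polynomial has 10 free parameters, one per observation,
# so it can "explain" this data essentially perfectly...
coeffs = np.polyfit(x, y, deg=9)
max_residual = np.max(np.abs(np.polyval(coeffs, x) - y))
print(max_residual)  # tiny: a near-perfect in-sample fit of pure noise

# ...but the fit has no predictive content: a fresh draw of noise from
# the very same process is not matched at all.
y_new = rng.normal(size=10)
print(np.max(np.abs(np.polyval(coeffs, x) - y_new)))
```

    Ten parameters, ten data points: the model explains everything and predicts nothing, which is exactly the fitted-to-the-data trap described in the post.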

    • B.O. Locks says:

      Yes, when a social scientist discovers a relation between things and the result is announced then the social actors may well change their behaviours, thus altering the discovered relationship.

      This is closely analogous to Heisenberg’s Uncertainty Principle, a principle taken from the natural science of physics, which says that the observer affects the experiment, just as happens in many social science scenarios. An example is an OFSTED inspection of a school. The presence of the inspectors produces a change in teacher performance for the duration of the inspection, and so it becomes difficult for the inspectors to know how good the teaching is ordinarily. Anyway, my point is that observer and observed are not always independent in the natural sciences either.

      Whether social phenomena are amenable to scientific methodology is moot. Governments enact laws and judges may hand down harsh sentences to deter transgression. Deterrent sentences would be pointless if they had no predictive value. Can it be ascertained whether deterrent sentences work? The answer must be yes: the incidence of the crime can be measured before and after the deterrent sentences were handed down. Of course, other factors must be held constant, and this is often a major impediment in ascertaining whether a social measure has worked or not. The deterrent effect of hanging is hard to ascertain one way or another due to other social changes over the period of its abolition.

      Some social phenomena do remain constant over time, just as in the natural sciences. These phenomena include the human will to live, to reproduce, etc. These constants, together with an assumption of human rationality, should enable social scientists and social psychologists to make fairly robust predictions of how the majority of human beings will react to changes in their social and economic environments. For example, what would be the effect of removing welfare benefit payments from all who currently receive them? Can the effects be predicted? I would say yes, through the use of careful modelling and computer simulation.

  5. Tim Newman says:

    “Take engineering as an example: engineers often do their calculations, then whack on a 100% safety factor.”

    I’m not sure about that. The factors of safety are usually stipulated in the design codes (ASME, for example). Any engineer adding on a factor of safety of 100% each time is going to be costing his company a lot of money.

  6. Tim Newman says:

    Incidentally, it’s also the reason why practitioners of the hard sciences and engineering can work abroad so easily: the first principles of engineering, and indeed much of the detail, are the same in Russia as they are in the UK and in France. It’s a lot harder to practise the soft sciences across borders.

  7. Dipper says:

    I was struck strongly by a comment in “A Bright Shining Lie”, which is about the Vietnam War. The USA had previously quelled communist rebellions in the Philippines. “[Gen] Lansdale was a victim in Vietnam of his success in the Philippines. Men who succeed at an enterprise of great moment often tie a snare for themselves by assuming that they have discovered some universal truth.”

    Which I think is another way of saying the above.

    Maybe we are misled by the word “science”. Perhaps the purpose of the social sciences is the same as Dillow argues for economics: not to predict the future, but to explain how we got here.

  8. Pingback: The evidence-itis epidemic — Writing for Leaders
