What’s the deal with ‘evidence-based practice’? Aren’t we all evidence based?

To an extent, yes - we all use evidence. But as we argue in our positioning paper, In search of the best available evidence, there are two big things we tend to get wrong when using evidence to inform decisions.

First, we’re often not great at gauging the quality of the evidence we’re looking at. There is a well-established hierarchy of scientific evidence on cause-and-effect relationships. The ‘gold standard’ is randomised controlled trials, which carry a good deal more weight than, say, simple before-and-after studies, and far more than surveys run at one point in time. If we take note of this hierarchy when looking at evidence, we are well on the way to making better, more reliable decisions.

Second, we tend to cherry-pick evidence that supports our pet theory. It feels great when you find a piece of research that confirms what you long suspected. But barring the ridiculous, the chances are you’ll be able to find research – even good-quality research – to back your opinion, whatever it is. To find out if a technique is really worth replicating, we should look at the wider body of evidence. So sitting above the hierarchy of single studies, we have a ‘platinum standard’ of systematic reviews and meta-analyses.

Evidence-based HR means being anti-fad and willing to face uncomfortable truths. But it’s hugely important. Relying on weak evidence can feel politically expedient (staying in line with what your boss expects to see) or compelling (in tune with the zeitgeist, intuitively right), yet at its worst it gives little more than a 50% chance of success, the equivalent of flipping a coin. If a decision is important, it’s worth testing the available evidence in this light: how much more scientific is it than a coin toss?

There are plenty of thorny questions in evidence-based HR, but the basic principles are pretty simple and, more importantly, worth striving for. Our hope is that this forum will help people put these principles to work and grapple with the challenges. Thoughts?

  • "Relying on weak evidence can feel politically expedient (staying in line with what your boss expects to see) or compelling (in tune with the zeitgeist, intuitively right), yet at its worst it gives little more than a 50% chance of success, the equivalent of flipping a coin." Out of curiosity, what is the evidence for this?
  • Great question! The table below is commonly used to assess how likely evidence from different research methods is to predict a cause-and-effect relationship. That's so long as it's good quality - if the method is followed poorly, the trustworthiness gets downgraded. So cross-sectional surveys with a few serious flaws, or qualitative case studies with a couple, are little better at predicting your future than tossing a coin. If you're interested, have a look at this practical research guide https://www.cebma.org/wp-content/uploads/CEBMa-REA-Guideline.pdf or this paper for a deeper read on the thinking & evidence behind this sort of thing http://www.cebma.org/wp-content/uploads/rousseau-et-al-evidence-in-management-and-org-science.pdf

    Rob B (and Eric Barends too if you're there) - what's the reference for this table? Is it the Petticrew or Shadish books? 

    Methodological Appropriateness For Cause-and-Effect Questions

    | Design | Appropriateness | Level | Trustworthiness |
    | --- | --- | --- | --- |
    | Systematic review or meta-analysis of randomized controlled studies | Very high | A+ | 95% |
    | Systematic review or meta-analysis of controlled and/or before-after studies | High | A | 90% |
    | Randomized controlled study | High | A | 90% |
    | Systematic review or meta-analysis of cross-sectional studies | Moderate | B | 80% |
    | Non-randomized controlled before-after study | Moderate | B | 80% |
    | Interrupted time series study | Moderate | B | 80% |
    | Controlled study without a pretest, or uncontrolled study with a pretest | Limited | C | 70% |
    | Cross-sectional study | Low | D | 60% |
    | Qualitative study | Very low | D- | 55% |
  • You've pushed the boundaries of 'formatting' with this post, Jonny! ;)
  • I just pasted it in, bing bang bosh. Surprised how smart it looked
  • Well, that hierarchy is quite common and is specifically for looking at intervention studies. In general I think it's much easier when thinking about cause and effect to consider how any study was conducted. So in addition to having good measures and a sensible sample and method, these, broadly speaking, are the conditions you want to look for that tell you whether in principle causality can be inferred (not proven, of course). So if you were interested in finding studies that told you something about the causal relationship between engagement and performance, say, you'd look for studies in which:

    1. The change (if any) in engagement happened before a change in performance.
    2. There is covariation between engagement and performance such that when engagement goes up, performance goes up - and when engagement goes down, performance goes down.
    3. There are no other plausible explanations of the relationship. In this case it could be, for example, that the 'quality' of line management, when 'good', increases both engagement and performance, while engagement is causally unrelated to performance.

    Some more analysis of this example here: http://engageforsuccess.org/wp-content/uploads/2015/09/Rob-Briner.pdf

    Of course, one study doesn't tell you much anyway - so you'd also take into account how many studies are around that meet these design criteria and what their results, collectively, show.
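As a toy illustration of the first two conditions above - temporal precedence and covariation - here is a minimal Python sketch. The data and variable names are entirely hypothetical, and condition 3 (ruling out confounds such as line-management quality) is a matter of study design rather than arithmetic, so it isn't checked here:

```python
# Hypothetical toy data: engagement measured at time 1, performance at time 2.
# Condition 1 (temporal precedence) is encoded in the measurement order.
engagement_t1 = [3.1, 3.5, 2.8, 4.0, 3.2]
performance_t2 = [55, 61, 48, 70, 57]

def covaries(x, y):
    """Condition 2: do x and y tend to move in the same direction across cases?"""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov > 0

print(covaries(engagement_t1, performance_t2))  # True for this toy data
```

A positive covariance on its own satisfies only condition 2; as the comment above notes, even all three conditions in a single study suggest rather than prove causality.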

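The design hierarchy in the table above, together with the downgrading rule mentioned in the comments (trustworthiness drops when the method is followed poorly), can be sketched as a simple lookup. This is a minimal sketch assuming a one-level downgrade per serious flaw; the function name, dictionary keys, and downgrade rule are illustrative, not taken from any real tool:

```python
# Grades ordered from least to most trustworthy, per the table above.
LEVELS = ["D-", "D", "C", "B", "A", "A+"]

# Baseline grade by study design (hypothetical keys, values from the table).
BASELINE = {
    "systematic_review_rct": "A+",
    "rct": "A",
    "controlled_before_after": "B",
    "interrupted_time_series": "B",
    "uncontrolled_pretest": "C",
    "cross_sectional": "D",
    "qualitative": "D-",
}

def trust_level(design: str, serious_flaws: int = 0) -> str:
    """Downgrade the baseline grade one level per serious methodological flaw."""
    idx = LEVELS.index(BASELINE[design]) - serious_flaws
    return LEVELS[max(idx, 0)]

print(trust_level("cross_sectional", serious_flaws=2))  # D-
```

On this sketch, a cross-sectional survey with a couple of serious flaws bottoms out at D- - echoing the point above that such evidence is little better than a coin toss.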