What’s the deal with ‘evidence-based practice’? Aren’t we all evidence based?

To an extent, yes - we all use evidence. But as we argue in our positioning paper, In search of the best available evidence, there are two big things we tend to get wrong when using evidence to inform decisions.

First, we’re often not great at gauging the quality of evidence we’re looking at. There is a well-established hierarchy of scientific evidence on cause-and-effect relationships. The ‘gold standard’ is randomised controlled trials, which carry a good deal more weight than, say, simple before-and-after studies, and far more than surveys run at one point in time. If we can take note of this hierarchy in looking at evidence, we are well on the way to making more reliable, better decisions.

Second, we tend to cherry pick evidence that supports our pet theory. It feels great when you find a piece of research that confirms what you long suspected. But barring the ridiculous, the chances are you’ll be able to find research – even good quality research – to back your opinion whatever it is. To find out if a technique is really worth replicating, we should look at the wider body of evidence. So sitting above the hierarchy of single studies, we have a ‘platinum standard’ of systematic reviews and meta-analyses.

Evidence-based HR means being anti-fad and willing to face uncomfortable truths. But it’s hugely important. Relying on weak evidence can feel politically expedient (staying in line with what your boss expects to see) or compelling (in tune with the zeitgeist, intuitively right), yet at its worst it gives little more than a 50% chance of success, the equivalent of flipping a coin. If a decision is important, it’s worth testing the available evidence in this light: how much more scientific is it than a coin toss?

There are plenty of thorny questions in evidence-based HR, but the basic principles are pretty simple and, more importantly, worth striving for. Our hope is that this forum will help people put these principles to work and grapple with the challenges. Thoughts?

  • Thanks Jonny, I haven't seen the TED Talk - one for the commute home. I had seen Kahneman's mea culpa (#mancrush). What a guy ;)

    I reckon you're spot on - appealing and do-able are great words (as are desirable and feasible from the Rubicon Model). I also reckon the tendency of evidence-based peeps is to lean towards the latter (i.e. tools and resources), and the tendency of less credible solutions is to lean towards the former (for obvious reasons).

    So, 'appealing'... how far do you go? Likening 'test, learn and adapt' methodology to the agility of startup cultures that big corporates so admire? Tie up with publications like HBR? Go full on with branding academic research like this from Francesca Gino... hbr.org/.../let-your-workers-rebel.

    I know this will make many academics squirm, but I also suspect that practitioners subscribing to the continuum approach may sympathise. I saw a criticism of an IOPsych approach as 'old wine in new bottles' recently, and thought, 'if the old wine is good and the new bottles sell more of it, what's the problem?'

    So maybe the 10 questions will work, but maybe that offers an alternative answer. Which solutions is EBHR most confident about? Which will be easiest to land in a big organisation? How can we repackage those in the sweet spot to be attractive to a practitioner audience?
  • Another Kahneman fanboi, here. Great to see James raise (and Jonny acknowledge) his important contribution to this field, especially in commercial HR.

    I recently sat down with Jacques Quinio of Manpower Group, who's an evangelist for EBHR and has some great insights on how organizations can make practical use of big data. But I challenged him a bit on how much harder it is for SMEs to acquire people data on the scales necessary to make really evidence-based decisions. Both in terms of the quantity of data available and the time within which decisions have to be taken, we don't have that luxury. HR practitioners in SMEs, with small teams, most of whose time is occupied with transactional practice, have to straddle the line between data-led and instinct-led decision-making, and Kahneman is our guru for this.

    Our instincts are fallible, but time spent considering the available evidence will inform and improve our instinctive decision-making.

    I'd take issue with Jonny's assertion that "it gives little more than a 50% chance of success". Not on the 50% figure (I'll take that), but on the idea that HR decisions are fail/success binary options. We aren't financial market traders for whom the buy/sell decision is a straightforward win/lose binary state. Our decisions are far more nuanced. Do we hire X or hire Y? If we make the "wrong" decision, we still end up with a qualified, capable employee (most of the time), just one who might not have been as good as the alternative.

    That's not to say that HR is incapable of making business-breaking decisions, but they are usually at the tail-end of a series of failures made at executive level (q.v. BHS).

    As a parting thought, if we assume that EBHR is the pathway to the most effective decision making, to what extent will it therefore be possible for even the most nuanced and sophisticated HR decisions to be automated, given sufficient data of the appropriate quality?
  • Robey said:

    "...if we assume that EBHR is the pathway to the most effective decision making, to what extent will it therefore be possible for even the most nuanced and sophisticated HR decisions to be automated, given sufficient data of the appropriate quality?"

    That is a question, Robey.

    I recall having this exchange about 'empathy' with Peter Cheese and (deleted blog).

  • As we often argue when training and teaching, EBP is definitely not about certainty but rather about trying to reduce uncertainty. The more we find out about something, the more we (usually) realize that we know less than we thought.

    I don't think there's any need for a compromise between progress and rigor - EBP is about using the best available evidence and being clear about the quality of that evidence. So you can still do stuff without much, if any, good-quality evidence - but the point is that you know that's what you're doing.

    And more generally, in relation to 'progress', clearly the better the quality of evidence you have, the better informed your analysis and action are likely to be, and the more progress you'll make. What feels like progress may sometimes turn out to be going backwards.
  • I think academics should do more - the problem is we are completely disincentivized to do this. All that counts is publishing new papers and new research.

    And as for knowledge transfer partnerships, these are often about conducting new research and new data collection - which is not what EBP is about. Rather, in relation to the scientific-evidence part of EBP, we always start with systematic reviews of the existing evidence base, not new research.
  • "Relying on weak evidence can feel politically expedient (staying in line with what your boss expects to see) or compelling (in tune with the zeitgeist, intuitively right), yet at its worst it gives little more than a 50% chance of success, the equivalent of flipping a coin." Out of curiosity, what is the evidence for this?
  • Thanks for clarifying that Rob, that's interesting.

    Feels like a gap does exist there though, pre-EBP, to be the catalyst for establishing more and good quality evidence.
  • Great question! The below is commonly used to assess how likely evidence from different research methods is to predict a cause-and-effect relationship. So long as it's good quality - if the method is followed poorly, the trustworthiness gets downgraded. So, cross-sectional surveys with a few serious flaws, or qualitative case studies with a couple, are little better at predicting your future than tossing a coin. If you're interested, have a look at this practical research guide https://www.cebma.org/wp-content/uploads/CEBMa-REA-Guideline.pdf or this paper for a deeper read on the thinking & evidence behind this sort of thing http://www.cebma.org/wp-content/uploads/rousseau-et-al-evidence-in-management-and-org-science.pdf . 

    Rob B (and Eric Barends too if you're there) - what's the reference for this table? Is it the Petticrew or Shadish books? 

    Methodological Appropriateness For Cause-and-Effect Questions

    | Design | Appropriateness | Level | Trustworthiness |
    | --- | --- | --- | --- |
    | Systematic review or meta-analysis of randomized controlled studies | Very high | A+ | 95% |
    | Systematic review or meta-analysis of controlled and/or before-after studies | High | A | 90% |
    | Randomized controlled study | High | A | 90% |
    | Systematic review or meta-analysis of cross-sectional studies | Moderate | B | 80% |
    | Non-randomized controlled before-after study | Moderate | B | 80% |
    | Interrupted time series study | Moderate | B | 80% |
    | Controlled study without a pretest, or uncontrolled study with a pretest | Limited | C | 70% |
    | Cross-sectional study | Low | D | 60% |
    | Qualitative study | Very low | D- | 55% |
  • You've pushed the boundaries of 'formatting' with this post, Jonny! ;)
  • Totally agree with all.

    The better the quality of evidence you have, the better informed your analysis and action are likely to be, and the more progress you'll make.

    But how do you encourage practitioners to adopt that approach? By changing their minds? By giving them tools and resources? It's very 'System 2'...

    Should that great work be bolstered with the sorts of techniques management consultancies or national governments are using to sell us their ideas or change our behaviours? When we know these are more likely to have an effect?

    That's the compromise I'm referring to, and it's the bit I think will make academics nervous. Maybe rightly so, maybe not?