
What’s the deal with ‘evidence-based practice’? Aren’t we all evidence based?

Jonny

CIPD Staff

25 May, 2017 14:55

To an extent, yes - we all use evidence. But as we argue in our positioning paper, 'In search of the best available evidence', there are two big things we tend to get wrong when using evidence to inform decisions.

First, we’re often not great at gauging the quality of evidence we’re looking at. There is a well-established hierarchy of scientific evidence on cause-and-effect relationships. The ‘gold standard’ is randomised controlled trials, which carry a good deal more weight than, say, simple before-and-after studies, and far more than surveys run at one point in time. If we can take note of this hierarchy in looking at evidence, we are well on the way to making more reliable, better decisions.
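To make that concrete, here's a toy Python sketch using entirely made-up numbers (a hypothetical intervention with a true effect of +2 points, against a background drift of +3 points that happens anyway). The weaker before-and-after design can't separate the two, while a randomised comparison against a control group can:

```python
import random
random.seed(1)

TRUE_EFFECT = 2.0        # what the (hypothetical) intervention actually adds
BACKGROUND_DRIFT = 3.0   # improvement that happens with or without it

def score_after(before, treated):
    # Simulated score a year later: baseline + drift (+ effect if treated) + noise
    return before + BACKGROUND_DRIFT + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 1)

staff = [random.gauss(50, 5) for _ in range(2000)]

# Before-and-after study: everyone is treated, no comparison group.
after_all = [score_after(b, True) for b in staff]
naive_estimate = sum(after_all) / len(after_all) - sum(staff) / len(staff)

# Randomised controlled trial: half treated at random, half not.
treated, control = staff[::2], staff[1::2]
rct_estimate = (sum(score_after(b, True) for b in treated) / len(treated)
                - sum(score_after(b, False) for b in control) / len(control))

print(f"Before-and-after estimate: {naive_estimate:+.1f}")   # roughly +5: drift inflates it
print(f"RCT estimate:              {rct_estimate:+.1f}")      # roughly +2: close to the true effect
```

The numbers aren't the point; the point is that the study design determines whether you can attribute the change to the intervention at all.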

Second, we tend to cherry pick evidence that supports our pet theory. It feels great when you find a piece of research that confirms what you long suspected. But barring the ridiculous, the chances are you’ll be able to find research – even good quality research – to back your opinion whatever it is. To find out if a technique is really worth replicating, we should look at the wider body of evidence. So sitting above the hierarchy of single studies, we have a ‘platinum standard’ of systematic reviews and meta-analyses.

Evidence-based HR means being anti-fad and willing to face uncomfortable truths. But it’s hugely important. Relying on weak evidence can feel politically expedient (staying in line with what your boss expects to see) or compelling (in tune with the zeitgeist, intuitively right), yet at its worst it gives little more than a 50% chance of success, the equivalent of flipping a coin. If a decision is important, it’s worth testing the available evidence in this light: how much more scientific is it than a coin toss?
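As a rough, invented illustration of that coin-toss comparison (the 0.1 correlation below is a number picked purely for the sketch, not a finding): if the only evidence behind a choice between two options is a signal that correlates weakly with the outcome, picking on that signal is right only a little over half the time.

```python
import random
random.seed(7)

R = 0.10   # assumed signal-outcome correlation for some weak piece of evidence

def candidate():
    # Return a (signal, outcome) pair with correlation R (standard bivariate normal)
    outcome = random.gauss(0, 1)
    signal = R * outcome + (1 - R**2) ** 0.5 * random.gauss(0, 1)
    return signal, outcome

trials, correct = 200_000, 0
for _ in range(trials):
    (s1, y1), (s2, y2) = candidate(), candidate()
    correct += (s1 > s2) == (y1 > y2)   # did the higher signal pick the higher outcome?

print(f"Weak-signal decisions: {correct / trials:.1%} correct")   # roughly 53%
print("Coin toss:             50.0% correct")
```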

There are plenty of thorny questions in evidence-based HR, but the basic principles are pretty simple and, more importantly, worth striving for. Our hope is that this forum will help people put these principles to work and grapple with the challenges. Thoughts?

  • Jonny

    CIPD Staff

    31 May, 2017 17:34

    In reply to Robey:

    Nice comments. Making better decisions is the key thing. And not every problem needs a meta-analysis to solve it, let alone big data. But there are other things we can do on a smaller scale.
    So, the 50% argument is basically descriptive: technically speaking, some evidence is no better at predicting outcomes than the toss of a coin. It recognises that we all like to latch on to evidence that supports our pet theory, especially if it tells an intuitive story - but we shouldn't kid ourselves about being evidence based if that evidence isn't trustworthy.
    It's a very different point from marginal gains and which decisions are important. Gary Klein has done some really interesting work on this - he argues that some decisions probably ARE best made by the toss of a coin, if you've eliminated the no-nos and are quibbling over minutiae. I enjoyed this book of his: wordery.com/the-power-of-intuition-how-to-use-your-gut-feelings-to-make-better-decisions-at-work-gary-klein-9780385502894 . Incidentally, Klein co-wrote a paper with Kahneman and gets name-checked in Thinking, Fast and Slow. He's good, even if his book has a corny title.
    Don't think Kahneman is a champion of instinct-led decision-making, tho. If anything, the opposite!
    One other thing I'd flag is the different sources of evidence - in particular, there's published scientific research as well as organisational data and expertise. Someone will be posting more on that here soon, I think...
  • Jonny

    CIPD Staff

    31 May, 2017 17:36

    In reply to Steve Bridger:

    I just pasted it in, bing bang bosh. Surprised how smart it looked
  • In reply to Rob:

    I have a working example of 'reducing uncertainty'. I was recruited in part to progress our organisation's use of technology to support learning. The results of my initial research into its wider use by other organisations were highly inconclusive. There appears to be little reliable evidence to support the use of technology to enhance learning within organisations so I found myself drawing on reports from consultancies and literature from research on its use in academic contexts.
    This didn't stop us moving forward; it just meant my recommendation was to move forward with caution - namely, to take small steps, gather data as we went, and definitely not spend money on a bright, shiny new LMS!
    After a year I've collected small amounts of data from a handful of experiments and other sources to identify what has worked well, and crucially what we could have reasonable success in trying to measure this year. From that I've made a proposal for how to progress over the next 12 months, which I'll do with a little (only a little) more certainty.
  • In reply to Jonny:

    Exactly. Evidence-based practice IS ABOUT PRACTICE - not about evidence or research. In other fields I get the impression that researchers tried to push or impose 'their' research onto practitioners. That doesn't work. We need to start with practice, practitioners and real problems.
  • In reply to James:

    Hi James - yes - great point. Nobody 'disagrees' with evidence-based practice, so, as you say, why is the alternative so compelling? I think in part it's about the way practitioners get rewarded, and the marketing ability of HR consultancies and others who push products and services onto HR.
  • In reply to Jonny:

    Well, that hierarchy is quite common and is specifically for looking at intervention studies. In general, I think it's much easier when thinking about cause and effect to consider how any study was conducted. So, in addition to having good measures and a sensible sample and method, these, broadly speaking, are the conditions you want to look for that tell you whether causality can in principle be inferred (not proven, of course). If you were interested in finding studies that told you something about the causal relationship between engagement and performance, say, you'd look for studies in which:

    1. The change (if any) in engagement happened before a change in performance.
    2. There is covariation between engagement and performance such that when engagement goes up, performance goes up - and when engagement goes down, performance goes down.
    3. There are no other plausible explanations of the relationship. In this case it could be, for example, the 'quality' of line management: when it is 'good' it increases both engagement and performance, while engagement itself is causally unrelated to performance (see the sketch at the end of this post).

    Some more analysis of this example here: http://engageforsuccess.org/wp-content/uploads/2015/09/Rob-Briner.pdf

    Of course, one study doesn't tell you anything much anyway - so what you'd also take into account is how many studies are around that meet these design criteria and what their results, collectively, show.
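    Here's the sketch mentioned under point 3 - a toy Python simulation with invented numbers in which line-management 'quality' drives both engagement and performance, engagement has no causal effect on performance at all, and yet the two still correlate clearly:

```python
import random
random.seed(42)

def correlation(xs, ys):
    # Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

engagement, performance = [], []
for _ in range(5000):
    manager_quality = random.gauss(0, 1)                       # the hidden common cause
    engagement.append(manager_quality + random.gauss(0, 1))    # driven by the manager
    performance.append(manager_quality + random.gauss(0, 1))   # also driven by the manager, NOT by engagement

print(f"Engagement-performance correlation: {correlation(engagement, performance):.2f}")   # around 0.5
```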

  • In reply to Robey:

    Hi Robey.  I'd love to know if Jacques Quinio is an evangelist for evidence-based HR in the way evidence-based practice is generally understood.  Does he just mean big data (which is not what evidence-based HR is about) or does he actually mean this:

  • In reply to Rob:

    Sorry that was way too small!

  • In reply to Rob:

    I'd hate to speak for Jacques on the basis of a 20-minute seminar and 15-minute face-to-face. Looking over my notes, though, his point on Big Data seems to have been less that it was the answer to all our woes and more that HR practitioners are ignoring its potential, seeing it as a sales/marketing tool rather than a people tool. If I look at the other topics we covered, and particularly at the approach to employee-led performance appraisal, I'd say that the overall push was in the direction of, but not identical to, the EBP infographic.

    For example, he was quite a fan of psychometrics as a tool for employee development, but my understanding of the literature on that subject from practising academics in neurology and psychology (as opposed to former academics in those fields who've developed a product to sell) is that psychometrics are basically pseudoscience.

    However, when dealing with any consultant - especially one at the top of his game - you always have to remember that they too have a product to sell.

    Your point about the emphasis on practice may be semantic, but it's very important and not a perspective I'd thought of before.
  • In reply to Liam:

    What a great example. We often think a big question - like 'How do we improve learning in our organisation?' - needs a big answer - like 'a £1m LMS system'. There's a great book out by Owain Service from the BIT that aims to debunk just that...

    www.behaviouralinsights.co.uk/.../

    I bet there is plenty of 'small steps' work out there that never gets captured or shared, especially when competing with the glossy campaigns of big corporate HR and their consultancies.

    Perhaps you could outline one of your experiments, Liam? What approach did you take to measuring your interventions and their impact on learning? What were your results?

    Jonny/Rob - the salesman in me thinks that retelling and heroising some of these stories/examples alongside tools and resources might be impactful.
  • In reply to James:

    James, I find your arguments compelling - here and above ('How should we compete? How can we be heard?'). I think part of the challenge is being evidence-based ourselves.

    Most of the CEBMa resources, tools and training available are based on the hard science of how people learn and build skills: https://www.researchgate.net/publication/260178378_The_Science_of_Training_and_Development_in_Organizations_What_Matters_in_Practice

    So we may not sell evidence itself with overstatements, but can we sell the path to understanding and using better-quality evidence, with confidence about specific outcomes? I think we could begin by defining what outcomes we aim to achieve, and at what level of analysis - maybe beginning at the individual level?

  • In reply to Liam:

    Exactly. In the absence of strong or reliable evidence you can still proceed but, as you say, more cautiously than if that wasn't the case.
  • In reply to Pietro:

    I agree with most but not the bit about defining outcomes. First you need evidence to define and identify problems (or opportunities)! In other words, I think that starting with 'outcomes' in mind (e.g., let's improve performance) can be very unhelpful and misleading. Clearly, those interested in evidence-based practice do not yet have compelling evidence that there is a 'problem' which EBP can help fix.
  • In reply to Robey:

    Thanks, interesting points. I'm not sure I agree that psychometrics = pseudoscience, as there are so many different types of psychometric with varying validity and reliability.

    Yes, it's ALL about practice and not about evidence as such. But it's also about doing stuff first to discover the evidence about what the real problems/opportunities may be, and only then thinking about what you might do. Things like 'big data' are sold as a sort of non-solution to non-problems, though of course data can help.