
What’s the deal with ‘evidence-based practice’? Aren’t we all evidence based?

Jonny

| 0 Posts

CIPD Staff

25 May, 2017 14:55

To an extent, yes - we all use evidence. But as we argue in our positioning paper, In search of the best available evidence, there are two big things we tend to get wrong when using evidence to inform decisions.

First, we’re often not great at gauging the quality of evidence we’re looking at. There is a well-established hierarchy of scientific evidence on cause-and-effect relationships. The ‘gold standard’ is randomised controlled trials, which carry a good deal more weight than, say, simple before-and-after studies, and far more than surveys run at one point in time. If we take note of this hierarchy when weighing evidence, we are well on the way to making better, more reliable decisions.

Second, we tend to cherry-pick evidence that supports our pet theory. It feels great when you find a piece of research that confirms what you long suspected. But barring the ridiculous, the chances are you’ll be able to find research – even good-quality research – to back your opinion, whatever it is. To find out if a technique is really worth replicating, we should look at the wider body of evidence. So, sitting above the hierarchy of single studies, we have a ‘platinum standard’ of systematic reviews and meta-analyses.

Evidence-based HR means being anti-fad and willing to face uncomfortable truths. But it’s hugely important. Relying on weak evidence can feel politically expedient (staying in line with what your boss expects to see) or compelling (in tune with the zeitgeist, intuitively right), yet at its worst it gives little more than a 50% chance of success, the equivalent of flipping a coin. If a decision is important, it’s worth testing the available evidence in this light: how much more scientific is it than a coin toss?

There are plenty of thorny questions in evidence-based HR, but the basic principles are pretty simple and, more importantly, worth striving for. Our hope is that this forum will help people put these principles to work and grapple with the challenges. Thoughts?

  • Steve Bridger

    | 0 Posts

    Community Manager

    30 May, 2017 08:15

    In reply to Jonny:

    What aspects of people management are most problematic for your organisation?
    On what basis are these identified as issues?
    What evidence would help you progress on these issues?

    Excellent questions, Jonny... and a very helpful lens through which I can view many of the questions and challenges posed elsewhere on this Community by practitioners. Also a very useful mindset for HR professionals to have when considering some of their own work projects and professional objectives - e.g. see this thread.

    A key challenge for me (and others) is to try to 'unlock' and curate the collective experience of why a particular action/set of actions/route was 'successful' (or less successful)... and explore how this might translate to the many different work contexts that are out there.

  • Steve Bridger

    | 0 Posts

    Community Manager

    30 May, 2017 08:18

    In reply to Charles:

    Perhaps we should ask the HR community what 10 pieces of evidence would be most useful for them and then pass this on to the research community?

    Now you've got me thinking...

  • In reply to Jonny:

    There's a great quote from Daniel Kahneman that I think goes some way to answering that question. He suggests evidence-based solutions will always struggle to land within the complexity of big business:

    "Experts who acknowledge the full extent of their ignorance may expect to be replaced by more confident competitors, who are better able to gain the trust of clients. An unbiased appreciation of uncertainty is a cornerstone of rationality--but it is not what people and organisations want."

    There's a difficult reality that follows for me - how much compromise (if any) should there be between progress and rigour/credibility? We can say none: we stick to the gold standard of RCTs and communicate with plainly presented objectivity. But we should recognise that the competition from non-evidence-based solutions is highly sophisticated and has evolved to be 'sellable':

    Take the '10 questions' approach - the strength of non-EBHR is to answer them with everything we know people lean towards: certainty (this is the one answer that works), norms and loss aversion (everyone else is doing it, you're falling behind), messenger effects (this guru says it's right), and so on. Understanding why it isn't easy to get traction with EBHR is partly about understanding why the alternative is so compelling...

    The question then evolves to 'How should we compete? How can we be heard?' I imagine the debate that follows would be similar to the one around a respected academic writing a best-selling psychology book... where's the line?
  • Jonny

    | 0 Posts

    CIPD Staff

    30 May, 2017 13:04

    In reply to James:

    Thanks James - insightful points, and I agree: it's not enough to trash woolly thinking; we need to understand why people look to quick and overconfident solutions. This is a great TED Talk from Stuart Firestein on 'the pursuit of ignorance', which speaks to one way we might seek to change our mindsets: www.ted.com/.../stuart_firestein_the_pursuit_of_ignorance

    At a more practical level, I think it will be important to understand how EBHR can be made appealing and do-able (e.g. what are the tools or resources needed?). This is something for which the views of practitioners will be invaluable.

    I like your point on popular science books. In my mind, it's pretty clear that, while there are defined methods one can follow in evidence-based practice, it's also a continuum. We can stretch ourselves to be MORE discerning of evidence, especially when it comes to major decisions or conclusions. Don't know if you saw it, but Kahneman himself gave a mea culpa on parts of his bestseller Thinking, Fast and Slow, admitting “I placed too much faith in underpowered studies”. Fantastic lesson in honesty & humility. retractionwatch.com/.../
  • In reply to Rob:

    Feels like this is on a trajectory already, Rob.
    I like the idea of seeking out the 10 most difficult problems or questions - maybe CIPD could survey members to determine these.

    Another idea for helping EBP support HR would be for academia to lend its support and services directly to HR practitioners 'in the field' through knowledge transfer partnerships.
  • In reply to Jonny:

    Thanks Jonny, I haven't seen the TED Talk - one for the commute home. I had seen Kahneman's mea culpa (#mancrush). What a guy ;)

    I reckon you're spot on - appealing and do-able are great words (as are desirable and feasible from the Rubicon Model). I also reckon the tendency of evidence-based peeps is to lean towards the latter (i.e. tools and resources), and the tendency of less credible solutions is to lean towards the former (for obvious reasons).

    So, 'appealing'... how far do you go? Likening 'test, learn and adapt' methodology to the agility of startup cultures that big corporates so admire? Tie up with publications like HBR? Go full on with branding academic research, like this from Francesca Gino... hbr.org/.../let-your-workers-rebel.

    I know this will make many academics squirm, but I also suspect that practitioners subscribing to the continuum approach may sympathise. I saw a criticism of an IOPsych approach as 'old wine in new bottles' recently, and thought, 'if the old wine is good and the new bottles sell more of it, what's the problem?'

    So maybe the 10 questions will work, but maybe that offers an alternative answer. What are the solutions EBHR is most confident about? Which will be easiest to land in a big organisation? How can we repackage those in the sweet spot to be attractive to a practitioner audience?
  • In reply to James:

    Another Kahneman fanboi here. Great to see James raise (and Jonny acknowledge) his important contribution to this field, especially in commercial HR.

    I recently sat down with Jacques Quinio of Manpower Group, who's an evangelist for EBHR and has some great insights on how organizations can make practical use of big data, but I challenged him a bit on how much harder it is for SMEs to acquire people data on the scale necessary to make really evidence-based decisions. Both in terms of the quantity of data we have available and the time within which decisions have to be taken, we don't have that luxury. HR practitioners in SMEs, with small teams and most of their time occupied with transactional practice, have to straddle the line between data-led and instinct-led decision-making, and Kahneman is our guru for this.

    Our instincts are fallible, but time spent considering the available evidence will inform and improve our instinctive decision-making.

    I'd take issue with Jonny's assertion that "it gives little more than a 50% chance of success". Not on the 50% figure (I'll take that), but on the idea that HR decisions are fail/success binary options. We aren't financial market traders for whom the buy/sell decision is a straightforward win/lose binary state. Our decisions are far more nuanced. Do we hire X or hire Y? If we make the "wrong" decision, we still end up with a qualified, capable employee (most of the time), just one who might not have been as good as the alternative.

    That's not to say that HR is incapable of making business-breaking decisions, but they are usually at the tail-end of a series of failures made at executive level (q.v. BHS).

    As a parting thought, if we assume that EBHR is the pathway to the most effective decision making, to what extent will it therefore be possible for even the most nuanced and sophisticated HR decisions to be automated, given sufficient data of the appropriate quality?
  • Steve Bridger

    | 0 Posts

    Community Manager

    31 May, 2017 15:24

    In reply to Robey:

    Robey said:

    "...if we assume that EBHR is the pathway to the most effective decision making, to what extent will it therefore be possible for even the most nuanced and sophisticated HR decisions to be automated, given sufficient data of the appropriate quality?"

    That is a question, Robey.

    I recall having this exchange about 'empathy' with Peter Cheese and (deleted blog).

  • In reply to James:

    As we often argue when training and teaching, EBP is definitely not about certainty but rather about trying to reduce uncertainty. The more we find out about something, the more we (usually) realize that we know less than we thought.

    I don't think there's any need for a compromise between progress and rigor - EBP is about using the best available evidence and being clear about the quality of that evidence. So you can still do stuff without much, if any, good-quality evidence - but the point is you know that's what you're doing.

    And more generally, in relation to 'progress', clearly the better the quality of evidence you have, the better informed your analysis and action are likely to be, and the more progress you'll make. What feels like progress may sometimes turn out to be going backwards.
  • In reply to Mark:

    I think academics should do more - the problem is we are completely disincentivized to do this. All that counts is publishing new papers and new research.

    And as for knowledge transfer partnerships, these are often about conducting new research and new data collection - which is not what EBP is about. Rather, in relation to the scientific evidence part of EBP, we always start with systematic reviews of the existing evidence base, not new research.
  • "Relying on weak evidence can feel politically expedient (staying in line with what your boss expects to see) or compelling (in tune with the zeitgeist, intuitively right), yet at its worst it gives little more than a 50% chance of success, the equivalent of flipping a coin." Out of curiosity, what is the evidence for this?
  • In reply to Rob:

    Thanks for clarifying that, Rob - that's interesting.

    Feels like a gap does exist there though, pre-EBP, to be the catalyst for establishing more good-quality evidence.
  • Jonny

    | 0 Posts

    CIPD Staff

    31 May, 2017 17:03

    In reply to Steven :

    Great question! The table below is commonly used to assess how likely evidence from different research methods is to predict a cause-and-effect relationship. That's so long as it's good quality - if the method is followed poorly, the trustworthiness gets downgraded. So, cross-sectional surveys with a few serious flaws, or qualitative case studies with a couple, are little better at predicting your future than tossing a coin. If you're interested, have a look at this practical research guide https://www.cebma.org/wp-content/uploads/CEBMa-REA-Guideline.pdf or this paper for a deeper read on the thinking & evidence behind this sort of thing: http://www.cebma.org/wp-content/uploads/rousseau-et-al-evidence-in-management-and-org-science.pdf

    Rob B (and Eric Barends too if you're there) - what's the reference for this table? Is it the Petticrew or Shadish books? 

    Methodological Appropriateness for Cause-and-Effect Questions
    Design | Appropriateness | Level | Trustworthiness
    Systematic review or meta-analysis of randomized controlled studies | Very high | A+ | 95%
    Systematic review or meta-analysis of controlled and/or before-after studies | High | A | 90%
    Randomized controlled study | High | A | 90%
    Systematic review or meta-analysis of cross-sectional studies | Moderate | B | 80%
    Non-randomized controlled before-after study | Moderate | B | 80%
    Interrupted time series study | Moderate | B | 80%
    Controlled study without a pretest, or uncontrolled study with a pretest | Limited | C | 70%
    Cross-sectional study | Low | D | 60%
    Qualitative study | Very low | D- | 55%
  • Steve Bridger

    | 0 Posts

    Community Manager

    31 May, 2017 17:26

    In reply to Jonny:

    You've pushed the boundaries of 'formatting' with this post, Jonny! ;)
  • In reply to Rob:

    Totally agree with all.

    The better quality evidence you have, the better informed your analysis and action are likely to be, and the more progress you'll make.

    But how do you encourage practitioners to adopt that approach? By changing their minds? By giving them tools and resources? It's very 'System 2'...

    Should that great work be bolstered with the sorts of techniques management consultancies or national governments are using to sell us their ideas or change our behaviours? When we know these are more likely to have an effect?

    That's the compromise I'm referring to, and it's the bit I think will make academics nervous. Maybe rightly so, maybe not?