Could it be that occupational psychologists (OPs) draw on research evidence that is less valid and reliable than a movie review site found online? Could consultants be promoting psychological interventions that sound appealing and look good on paper but have little or no evidence supporting their validity, reliability or business impact? Professor Rob Briner from the School of Management, University of Bath, and Vice-chair of the Center for Evidence-Based Management, posed these and other questions during a Glow at Work master class session on evidence-based organisational psychology.
The term evidence-based emerged in the 1990s within medicine, but the principle has since extended to other disciplines such as education, public policy and business management. Within business psychology, an evidence-based approach means basing management decisions on a combination of critical thinking and the most valid and relevant evidence – information rooted not only in scientific research but also in practitioner expertise, internal business information and even personal experience.
Seems obvious, no?
In reality, not quite.
Four main problem areas were identified.
Clients may not be interested: multiple factors sway HR practices away from an evidence-based approach. In addition to cognitive biases and decision-making errors, senior managers are prone to ‘faddism’ – adopting business practices without a solid intellectual foundation (Donaldson & Hilmer, 1998). Internal politics also plays a role. Do senior managers actually achieve their status within an organisation because of accurate analysis and sound findings, or because of action and speed? Is political clout valued more than a rigorous scientific approach? In many cases the answer is most likely yes.
Don’t trust the academics: they have extensive vested interests, are inevitably biased and in reality hold limited knowledge. A survey of US-based business school academics asked respondents whether they knew of faculty engaging in different types of academic misconduct – affirmative answers ranged from 50% to 90% depending on the type of misconduct cited (Bedeian et al., 2010). And if that wasn’t enough, many academics engage in dubious scientific practices (Kepes & McDaniel, 2013).
Some other interesting highlights include:
- Academics can’t really agree on much: a 24-item questionnaire survey of 75 OP professors asked participants to indicate whether they saw good evidence supporting fundamental findings in organisational psychology – only seven of the 24 items reached 75%+ participant agreement (Guest & Zijlstra, 2012)
- Publishing only positive findings: negative research results tend not to get published. In addition, the percentage of hypothesis-supporting articles published in journals rose from approximately 70% to 85% between 1990 and 2007 (Fanelli, 2012)
- HARKing, or Hypothesising After the Results are Known: doing science backwards by developing hypotheses after some preliminary analysis in order to ensure alignment with the results
- Null Hypothesis Significance Testing: Does failure to reject the null hypothesis really mean the null hypothesis is supported? And does a so-called statistically significant finding support the alternative hypothesis? Why are .01 and .05 drawn as arbitrary lines of significance and therefore of interest? And what about the issue of a large enough sample size delivering an inevitably significant result?
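The sample-size point in the last bullet can be illustrated numerically. The sketch below (a minimal illustration, not from the talk – the effect size and sample sizes are assumed for demonstration) computes the two-sided p-value of a two-sample z-test for a fixed, practically negligible standardised effect. The effect is identical in every case; only the sample size changes, yet the result flips from "not significant" to overwhelmingly "significant":

```python
import math

def two_sample_z_p(d, n):
    """Two-sided p-value for a two-sample z-test of standardised
    effect size d, with n observations per group (unit variance assumed)."""
    z = d * math.sqrt(n / 2)           # test statistic grows with sqrt(n)
    return math.erfc(z / math.sqrt(2))  # two-sided normal tail probability

d = 0.02  # a tiny, practically meaningless effect
for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9}: p = {two_sample_z_p(d, n):.4f}")
```

With n = 100 per group the p-value is around 0.89; with n = 1,000,000 it is effectively zero. Nothing about the effect's practical importance has changed – which is exactly why a "significant" result on its own tells us little.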
A lack of systematic reviews: the aim of a systematic review is to identify all relevant studies on a specific topic and to select appropriate studies based on explicit criteria. These studies are then assessed to ascertain their internal validity. Unfortunately, we just don’t have many of these reviews in management science or organisational psychology.
Teaching practices are not evidence based (Goodman & O’Brien, 2012): indeed the focus of teaching in most business and OP settings is on student satisfaction.
Rob challenged us to consider what impact on people and organisations our professional practice actually has, and what impact we want it to have. Rob emphasised that evidence-based practice was a long-term professional commitment that implied being prepared to give up on cherished beliefs. It would also mean finding clients who would be prepared to accept such an approach and in practice this might mean turning away assignments with organisations not prepared to do so.
In adopting evidence-based practice, Rob suggested making decisions that were conscientious, explicit, judicious and based on different information sources. Four types of information were suggested for consideration:
- Evaluated external evidence: what does systematically reviewed evidence suggest? What effective interventions feature in the research? Are they relevant?
- Practitioner experiences and judgements: have you seen the problem before? What are your hunches? What has worked in the past and how is this situation different?
- Local context evidence: what is happening and how do managers perceive it? What do they think about it? How do managers view the costs/benefits of a potential intervention?
- Stakeholders: the perspectives of those who may be affected by the intervention. How do they feel? What upside or downside do they see with the proposed plan of action?
Overall, a very entertaining and informative talk that highlighted why nothing should be taken for granted and that a rigorous evidence-based approach can help support better decision-making and management practice.
Michael Webster – email@example.com
A good but depressing book about consumerism and higher education – http://ukcatalogue.oup.com/product/9780199660940.do
Center for Evidence-Based Management website http://www.cebma.org/