I can’t use any old piece of evidence which has an abstract and a few relevant keywords to base my clinical actions on.
It could be biased (for example, funded by someone with an interest in the study's outcome), not statistically significant, methodologically flawed, or irrelevant to the precise patient problem I am investigating. I could also be challenged by a well-informed 'expert patient' who has free and easy access to information via the internet, so I need to make sure I can back my decisions up.
Instead I’ll need to identify the patient’s problem, find relevant studies, critically evaluate them, and then apply them to the problem, taking into account the patient’s individual needs.
Research + Clinical Expertise + Patient Preference = EBP
The process of EBP has five steps, although Melnyk et al (2010) later added two more (steps 0 and 6 below). It should also be noted that ‘Health Service Restrictions’ are sometimes included in the above formula (DiCenso et al 1998), meaning that limitations due to resource cost or access are taken into account as part of pragmatic reasoning.
0. CULTIVATE A SPIRIT OF INQUIRY – the essential starting point
1. ASK – questions that are answerable!
2. ACQUIRE – search for the best evidence from the available research
3. APPRAISE – critically appraise/evaluate the evidence: is it relevant, valid, reliable and applicable to your clinical question?
4. APPLY – integrate the evidence with clinical expertise and the patient’s preferences and values, then implement it
5. ASSESS – evaluate and reflect on the outcomes of your decision
6. DISSEMINATE EBP RESULTS – share good practice and support other healthcare professionals
Step 1 is to create a good research question. Use the PICOT tool for quantitative biomedical/diagnosis questions, SPICE for qualitative intervention/evidence questions, or ECLIPSE for health service questions. Examples of questions produced from these tools can be found here.
Step 2 is acquiring evidence. Some good places to acquire evidence are:
http://www.medicine.ox.ac.uk/bandolier/index.html Oxford scientists review the available evidence for certain problems, then provide summary points on what was effective
http://www.nice.org.uk/about/what-we-do/evidence-services/journals-and-databases NICE (National Institute for Health and Care Excellence) approved database of journals
http://www.cebm.net/ Centre for Evidence Based Medicine is based at Oxford and carries out high quality research, and has teaching guidance on carrying out EBP itself
http://www.cochrane.org/evidence Groups of healthcare professionals who gather and summarise research evidence, free from conflicts of interest and commercial sponsorship
https://www.ebscohost.com/nursing/products/cinahl-databases Databases such as CINAHL, PubMed or OTseeker allow you to search through thousands of studies and articles using search terms.
When acquiring your evidence, it can be useful to refer to a ‘hierarchy of evidence’ such as Sackett’s (1997) model, which grades different forms of research at different levels according to their relative robustness or authority as research evidence (i.e. how much you should trust the findings). Not sure where a piece of evidence falls in the hierarchy? See here for a flowchart to help you decide.
Sackett’s is not the only hierarchy, however, and different versions have been produced that specialise in grading research for specific areas (such as prevention, prognosis, harm or economics). Both Walker & Sofaer (2003) and Evans (2003) expressed concern that Sackett’s hierarchy was biased towards biomedical research, and towards randomised controlled trials (RCTs) in particular: RCTs were touted as the ‘gold standard’ of all evidence, yet they are in fact poorly suited to evaluating some forms of evidence, such as therapeutic interventions.
Walker & Sofaer (2003) went on to propose that while RCTs may be well and good for biomedical hypotheses (e.g. obtaining results on the efficacy of a certain drug), they are not the best form of research for obtaining evidence on more therapeutic or psycho-social interventions. This is because, for example, therapeutic interventions cannot be carried out ‘blind’ to remove bias; randomisation (which cannot take a subject’s preference for a therapist into account) may lead to increased subject attrition; and outcome measurement should include not only clinical outcomes but also functional outcomes, patient satisfaction, costs and so on.
Psycho-social evidence is much more likely to be of use to an OT practitioner. Therefore, as an OT it may be helpful to consider additional models such as the hierarchies by Evans (2003) or Jonas (2001), which grade research for reliability on more psycho-social outcomes (such as patient experience or appropriateness) or economic ones (feasibility: the practicality of the resources and costs needed to provide the treatment). RCTs may be the gold standard of research if all you are interested in is treatment effectiveness, but if you are looking at the bigger picture in the real world, other forms are equally important to refer to. In addition, regardless of the type of research, if the study itself is carried out poorly then the results will never be helpful, so you should always quality-assess any research article you are reviewing.
Step 3 is appraising evidence. Aveyard et al (2011) suggested six key questions to ask yourself when structuring your critical thinking about a piece of evidence:
- Who is telling me this? An organisation or an individual? An expert? Do they have a bias, and how do you know?
- Where did this information come from? Where was it published or found? Did you come across it by chance or through a systematic search?
- When was this written? Older key information may still be valid, but check whether more recent work exists.
- Why has this been written? Who is the information aimed at? What is its aim?
- What is being said? What is the key message? Is it a research study, professional opinion, discussion, website etc?
- How did they write this? Is the line of reasoning understandable? How did the author come to their conclusion? If it is research, how well was it carried out, and do the conclusions reflect the results?
There are also tools such as CASP, CRAAP test (for websites) or AGREE for guidelines & protocols (Brouwers et al 2010) that you can use to help structure your critical appraisal. For an example of a website review performed using the CRAAP test principles, see here.
Research that gets published (or even just submitted for approval) is often biased towards studies with positive results (Dickersin 1990, Easterbrook et al 1991). Therefore, when conducting a literature search it is important to try to identify any negative research evidence to provide balance to your argument (or to record your strategy for trying to identify it, even if none is found).
Aveyard, H., Sharp, P. & Wooliams, M. (2011). A Beginner’s Guide to Critical Thinking and Writing in Health and Social Care. Maidenhead: McGraw Hill Open University Press.
Booth, A (2004) Formulating answerable questions. In Booth A & Brice A (Editors) Evidence based practice for information professionals: A handbook (pp61-70) London: Facet Publishing [SPICE]
Brouwers, M., Kho, M.E., Browman, G.P., Cluzeau, F., Feder, G., Fervers, B., Hanna, S., Makarski, J. on behalf of Next Steps Consortium (2010) AGREE II: Advancing guideline development, reporting and evaluation in healthcare. Canadian Medical Association Journal, Dec 2010, 182:E839-842 [AGREE]
DiCenso A, Cullum N & Ciliska D (1998) Implementing evidence-based nursing: some misconceptions. Evidence-Based Nursing 1 (1), 38-40
Dickersin K (1990) The existence of publication bias and the risk factors for its occurrence. Journal of the American Medical Association 263, 1385-89
Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR (1991) Publication bias in clinical research. Lancet 337, 867-72
Evans, D. (2003). Hierarchy of evidence: A framework for ranking evidence evaluating healthcare interventions. Journal of Clinical Nursing, 12, 77-84
Jonas, W. B. (2001). The evidence house: How to build an inclusive base for complementary medicine. Western Journal of Medicine, 175(2), 79–80.
Melnyk, B. M., Fineout-Overholt, E., Stillwell, S. B., & Williamson, K. M. (2010). Evidence-based practice: Step by step: The seven steps of evidence-based practice. The American Journal of Nursing, 110(1), 51-53. Available at: http://www.nursingcenter.com/nursingcenter_redesign/media/EBP/AJNseries/SevenSteps.pdf [accessed 11.11.15]
Phillips B, Ball C, Sackett D, Badenoch D, Straus S, Haynes B, Dawes M (2001) Oxford Centre for Evidence-Based Medicine levels of evidence (May 2001). Oxford: Centre for Evidence-Based Medicine. Available at: http://www.cebm.net/oxford-centre-evidence-based-medicine-levels-evidence-march-2009/ [accessed 12.11.15]
Riva JJ, Malik KMP, Burnie SJ, Endicott AR, Busse JW. What is your research question? An introduction to the PICOT format for clinicians. The Journal of the Canadian Chiropractic Association. 2012;56(3):167-171. [PICOT]
Sackett DL, Richardson WS, Rosenberg WM, Haynes RB (1997) Evidence Based Medicine: How to practice and teach EBM New York: Churchill Livingstone
Walker, J. & Sofaer, B. (2003). Randomised controlled trials in the evaluation of non-biomedical therapeutic interventions for pain: The gold standard? Nursing Times Research 8(5): 317 – 329
Wildridge, V., & Bell, L. (2002). How CLIP became ECLIPSE: A mnemonic to assist in searching for health policy/management information. Health Information and Libraries Journal, 19(2), 113-115 [ECLIPSE]