Development of and Preliminary Validity Evidence for the EFeCT Feedback Scoring Tool

Authors:

Shelley Ross1, Deena Hamza2, Rosslynn Zulla3, Samantha Stasiuk4, Darren Nichols5

Affiliation:

1. Shelley Ross, PhD, is Professor, Department of Family Medicine, University of Alberta, Edmonton, AB, Canada

2. Deena Hamza, PhD, is Competency-Based Medical Education Evaluation Lead for Postgraduate Medical Education, University of Alberta, Edmonton, AB, Canada

3. Rosslynn Zulla, PhD, is a Specialist/Advisor, Faculty of Social Work, University of Calgary, AB, Canada

4. Samantha Stasiuk, MD, MHPE, is Clinical Assistant Professor, Department of Family Practice, University of British Columbia, BC, Canada

5. Darren Nichols, MD, is Associate Professor, Department of Family Medicine, University of Alberta, Edmonton, AB, Canada

Abstract

Background: Narrative feedback, like verbal feedback, is essential to learning. Regardless of form, all feedback should be of high quality, and this is becoming even more important as programs incorporate narrative feedback into the constellation of evidence used for summative decision-making. Continuous improvement of narrative feedback quality requires both tools for evaluating feedback and time to score it. A tool that does not require clinical educator expertise is needed so that scoring can be delegated to others.

Objective: To develop an evidence-based tool for evaluating the quality of documented feedback that can be used reliably by both clinical educators and non-experts.

Methods: Following a literature review to identify the elements of high-quality feedback, an expert consensus panel developed the scoring tool. Messick's unified concept of construct validity guided the collection of validity evidence throughout development and piloting (2013–2020).

Results: The Evaluation of Feedback Captured Tool (EFeCT) contains 5 categories considered essential elements of high-quality feedback. Preliminary validity evidence supports the content, substantive, and consequential facets of validity. Generalizability evidence shows that EFeCT scores assigned to feedback samples had consistent interrater reliability across 5 scoring sessions, regardless of raters' level of medical education or clinical expertise (Session 1: n=3, ICC=0.94; Session 2: n=6, ICC=0.90; Session 3: n=5, ICC=0.91; Session 4: n=6, ICC=0.89; Session 5: n=6, ICC=0.92).

Conclusions: There is preliminary validity evidence for the EFeCT as a useful tool for scoring the quality of documented feedback captured on assessment forms. Generalizability evidence indicates that raters assigned comparable EFeCT scores regardless of their level of expertise.
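
The abstract reports interrater reliability as intraclass correlation coefficients (ICCs) but does not state how they were computed or which ICC form was used. As a minimal sketch only, assuming Python with the pingouin library and entirely hypothetical scores (none of the names or numbers below come from the study), an ICC for a set of raters scoring the same feedback samples could be obtained as follows:

import pandas as pd
import pingouin as pg

# Hypothetical example: 3 raters ("A", "B", "C") each assign an EFeCT
# score to the same 4 feedback samples (long format: one row per rating).
data = pd.DataFrame({
    "sample": [1, 2, 3, 4] * 3,
    "rater": ["A"] * 4 + ["B"] * 4 + ["C"] * 4,
    "score": [4, 2, 5, 3, 4, 2, 4, 3, 5, 2, 5, 3],
})

# intraclass_corr returns the six common ICC forms (ICC1..ICC3k);
# which form to report depends on the study design, which the
# abstract does not specify.
icc = pg.intraclass_corr(data=data, targets="sample",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "CI95%"]])

Each row of the output corresponds to one ICC form; a two-way model treating raters as a random sample (ICC2) is a common choice when results are meant to generalize to other raters, as the abstract's claim about rater expertise suggests.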

Publisher

Journal of Graduate Medical Education

Subject

General Medicine
