Authors:
Bakken S., Wrenn J.O., Siegler E.L., Stetson P.D.
Abstract
Summary
Objective: To refine the Physician Documentation Quality Instrument (PDQI) and test the validity and reliability of the 9-item version (PDQI-9).
Methods: Three sets each of admission notes, progress notes, and discharge summaries were evaluated by two groups of physicians using the PDQI-9 and an overall general assessment: one gold-standard group consisting of program or assistant program directors (n = 7), and the other of attending physicians or chief residents (n = 24). The main measures were criterion-related validity (correlation coefficients between Total PDQI-9 scores and 1-item General Impression scores for each note), discriminant validity (comparison of PDQI-9 scores on notes rated as best and worst using the 1-item General Impression score), internal consistency reliability (Cronbach's alpha), and inter-rater reliability (intraclass correlation coefficient, ICC).
Results: The results were criterion-related validity (r = 0.678–0.856), discriminant validity (best versus worst note, t = 9.3, p = 0.003), internal consistency reliability (Cronbach's alphas = 0.87–0.94), and inter-rater reliability (ICC = 0.83, CI = 0.72–0.91).
Conclusion: The results support the criterion-related and discriminant validity, internal consistency reliability, and inter-rater reliability of the PDQI-9 for rating the quality of electronic physician notes. Tools for assessing note redundancy are required to complement use of the PDQI-9. Trials of the PDQI-9 at other institutions of different sizes, using different EHRs, and incorporating additional physician specialties and notes of other healthcare providers are needed to confirm its generalizability.
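The internal consistency statistic reported above (Cronbach's alpha) can be computed directly from a ratings matrix. The sketch below is illustrative only: the rating values and the three-item layout are invented, not data from the study, and a real PDQI-9 analysis would use all nine items.

```python
# Hypothetical sketch: Cronbach's alpha for internal consistency.
# The ratings below are invented for illustration; they are not study data.

def cronbach_alpha(ratings):
    """ratings: list of rows, one row per rated note; columns are the k items."""
    k = len(ratings[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in ratings]) for i in range(k)]
    total_var = var([sum(row) for row in ratings])
    # alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Five notes scored on three hypothetical 5-point items:
scores = [
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
]
print(round(cronbach_alpha(scores), 3))  # → 0.918
```

Values near or above 0.9, as in the invented matrix here, indicate that the items move together across notes, which is what the reported range of 0.87–0.94 conveys for the PDQI-9 items.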
Subject
Health Information Management, Computer Science Applications, Health Informatics
Cited by
85 articles.