Comparative Effectiveness of Tumor Response Assessment Methods: Standard of Care Versus Computer-Assisted Response Evaluation

Authors:

Allen Brian C., Florez Edward, Sirous Reza, Lirette Seth T., Griswold Michael, Remer Erick M., Wang Zhen J., Bieszczad Jacob E., Cox Kelly L., Goenka Ajit H., Howard-Claudio Candace M., Kang Hyunseon C., Nandwana Sadhna B., Sanyal Rupan, Shinagare Atul B., Henegan J. Clark, Storrs Judd, Davenport Matthew S., Ganeshan Balaji, Vasanji Amit, Rini Brian, Smith Andrew D.

Affiliation:

1. Brian C. Allen, Duke University Medical Center, Durham, NC; Edward Florez, Reza Sirous, Seth T. Lirette, Michael Griswold, Candace M. Howard-Claudio, J. Clark Henegan, Judd Storrs, and Andrew D. Smith, University of Mississippi Medical Center, Jackson, MS; Erick M. Remer and Brian Rini, The Cleveland Clinic; Amit Vasanji, ImageIQ, Cleveland; Jacob E. Bieszczad, University of Toledo Medical Center, Toledo, OH; Zhen J. Wang, University of California at San Francisco Medical Center, San Francisco, CA; Kelly...

Abstract

Purpose: To compare the effectiveness of metastatic tumor response evaluation with computed tomography using computer-assisted versus manual methods.

Materials and Methods: In this institutional review board–approved, Health Insurance Portability and Accountability Act–compliant retrospective study, 11 readers from 10 different institutions independently categorized tumor response according to three different therapeutic response criteria by using paired baseline and initial post-therapy computed tomography studies from 20 randomly selected patients with metastatic renal cell carcinoma who were treated with sunitinib as part of a completed phase III multi-institutional study. Images were evaluated with a manual tumor response evaluation method (standard of care) and with computer-assisted response evaluation (CARE) that included stepwise guidance, interactive error identification and correction methods, automated tumor metric extraction, calculations, response categorization, and data and image archiving. A crossover design, patient randomization, and a 2-week washout period were used to reduce recall bias. Comparative effectiveness metrics included error rate and mean patient evaluation time.

Results: The standard-of-care method, on average, was associated with one or more errors in 30.5% (6.1 of 20) of patients, whereas CARE had a 0.0% (0.0 of 20) error rate (P < .001). The most common errors were related to data transfer and arithmetic calculation. In patients with errors, the median number of error types was 1 (range, 1 to 3). Evaluation with CARE was approximately twice as fast as the standard-of-care method (mean patient evaluation time, 6.4 v 13.1 minutes; P < .001).

Conclusion: CARE reduced errors and evaluation time, indicating better overall effectiveness than the manual tumor response evaluation methods that are the current standard of care.
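The abstract attributes most manual errors to data transfer and arithmetic calculation, precisely the steps a CARE tool automates (tumor metric extraction, calculations, and response categorization). As a minimal sketch of that arithmetic, the Python snippet below sums target-lesion diameters and assigns a response category using RECIST 1.1-style thresholds. The abstract does not enumerate the three response criteria actually used in the study, so the thresholds, function name, and measurements here are illustrative assumptions, not the study's implementation.

```python
# Illustrative sketch only: RECIST 1.1-style categorization from paired
# baseline and follow-up target-lesion longest diameters (mm). The study's
# actual criteria and software are not specified in the abstract.

def categorize_response(baseline_diams_mm, followup_diams_mm):
    """Return (category, percent change) for sums of target-lesion diameters."""
    baseline_sum = sum(baseline_diams_mm)
    followup_sum = sum(followup_diams_mm)
    if baseline_sum <= 0:
        raise ValueError("Baseline sum of diameters must be positive.")

    change_pct = 100.0 * (followup_sum - baseline_sum) / baseline_sum
    absolute_increase = followup_sum - baseline_sum

    if followup_sum == 0:
        return "CR", change_pct   # complete response: all target lesions resolved
    if change_pct >= 20.0 and absolute_increase >= 5.0:
        return "PD", change_pct   # progressive disease
    if change_pct <= -30.0:
        return "PR", change_pct   # partial response
    return "SD", change_pct       # stable disease


if __name__ == "__main__":
    # Hypothetical measurements for one patient with two target lesions (mm).
    category, pct = categorize_response([42.0, 18.0], [30.0, 12.0])
    print(f"Response: {category} ({pct:+.1f}% change in sum of diameters)")
```

The point of automating this step is not that the formula is hard, but that copying measurements between a PACS viewer and a worksheet and doing the percentage arithmetic by hand is where the transcription and calculation errors reported in the Results tend to arise.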

Publisher

American Society of Clinical Oncology (ASCO)

Subject

General Medicine

Cited by 2 articles.