A survey of experts to identify methods to detect problematic studies: Stage 1 of the INSPECT-SR Project
Author:
Wilkinson Jack, Heal Calvin, Antoniou George A, Flemyng Ella, Avenell Alison, Barbour Virginia, Bordewijk Esmee M, Brown Nicholas J L, Clarke Mike, Dumville Jo, Grohmann Steph, Gurrin Lyle C, Hayden Jill A, Hunter Kylie E, Lam Emily, Lasserson Toby, Li Tianjing, Lensen Sarah, Liu Jianping, Lundh Andreas, Meyerowitz-Katz Gideon, Mol Ben W, O'Connell Neil E, Parker Lisa, Redman Barbara, Seidler Anna Lene, Sheldrick Kyle, Sydenham Emma, Dahly Darren L, van Wely Madelon, Bero Lisa, Kirkham Jamie J
Abstract
Background: Randomised controlled trials (RCTs) inform healthcare decisions. Unfortunately, some published RCTs contain false data, and some appear to have been entirely fabricated. Systematic reviews are performed to identify and synthesise all RCTs which have been conducted on a given topic. This means that any of these 'problematic studies' are likely to be included, but there are no agreed methods for identifying them. The INSPECT-SR project is developing a tool to identify problematic RCTs in systematic reviews of healthcare-related interventions. The tool will guide the user through a series of 'checks' to determine a study's authenticity. The first objective in the development process is to assemble a comprehensive list of checks to consider for inclusion.

Methods: We assembled an initial list of checks for assessing the authenticity of research studies, with no restriction to RCTs, and categorised these into five domains: inspecting results in the paper; inspecting the research team; inspecting conduct, governance, and transparency; inspecting text and publication details; and inspecting the individual participant data. We implemented this list as an online survey, and invited people with expertise and experience of assessing potentially problematic studies to participate through professional networks and online forums. Participants were invited to provide feedback on the checks on the list, and were asked to describe any additional checks they knew of which were not featured in the list.

Results: Extensive feedback on an initial list of 102 checks was provided by 71 participants based in 16 countries across five continents. Fourteen new checks were proposed across the five domains, and suggestions were made to reword checks on the initial list. An updated list of 116 checks was constructed. Many participants expressed a lack of familiarity with statistical checks, and emphasised the importance of the tool's feasibility.

Conclusions: A comprehensive list of trustworthiness checks has been produced. The checks will be evaluated to determine which should be included in the INSPECT-SR tool.
Publisher
Cold Spring Harbor Laboratory