FAIRshake: toolkit to evaluate the findability, accessibility, interoperability, and reusability of research digital resources
Authors:
Clarke Daniel J. B., Wang Lily, Jones Alex, Wojciechowicz Megan L., Torre Denis, Jagodnik Kathleen M., Jenkins Sherry L., McQuilton Peter, Flamholz Zachary, Silverstein Moshe C., Schilder Brian M., Robasky Kimberly, Castillo Claris, Idaszak Ray, Ahalt Stanley C., Williams Jason, Schurer Stephan, Cooper Daniel J., de Miranda Azevedo Ricardo, Klenk Juergen A., Haendel Melissa A., Nedzel Jared, Avillach Paul, Shimoyama Mary E., Harris Rayna M., Gamble Meredith, Poten Rudy, Charbonneau Amanda L., Larkin Jennie, Brown C. Titus, Bonazzi Vivien R., Dumontier Michel J., Sansone Susanna-Assunta, Ma’ayan Avi
Abstract
As more datasets, tools, workflows, APIs, and other digital resources are produced by the research community, it is becoming increasingly difficult to harmonize and organize these efforts for maximal synergistic integrated utilization. The Findable, Accessible, Interoperable, and Reusable (FAIR) guiding principles have prompted many stakeholders to consider strategies for tackling this challenge by making these digital resources follow common standards and best practices so that they can become more integrated and organized. Faced with the question of how to make digital resources more FAIR, it has become imperative to measure what it means to be FAIR. Assessing FAIRness is particularly challenging because diverse resources, communities, and stakeholders have different goals and use cases. To begin resolving this challenge, the FAIRshake toolkit was developed to enable the establishment of community-driven FAIR metrics and rubrics paired with manual, semi-automated, and fully automated FAIR assessment capabilities. The FAIRshake toolkit contains a database that lists registered digital resources, with their associated metrics, rubrics, and assessments. The FAIRshake toolkit also has a browser extension and a bookmarklet that enable viewing and submitting assessments from any website. The FAIR assessment results are visualized as an insignia that can be viewed on the FAIRshake website or embedded within hosting websites. Using FAIRshake, a variety of bioinformatics tools, datasets listed on dbGaP, APIs registered in SmartAPI, workflows in Dockstore, and other biomedical digital resources were manually and automatically assessed for FAIRness. In each case, the assessments revealed room for improvement, which prompted enhancements that significantly upgraded the FAIRness scores of several digital resources.
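The rubric-based assessment the abstract describes can be sketched in miniature: a rubric groups metrics under the F, A, I, and R categories, an assessment answers each metric, and per-category averages yield the kind of summary an insignia visualizes. The metric names, answer scale, and aggregation below are illustrative assumptions, not FAIRshake's actual data model or API.

```python
# Hypothetical sketch of scoring a FAIR rubric assessment.
# Metric names, the [0, 1] answer scale, and per-category averaging
# are assumptions for illustration only.
from statistics import mean

# Rubric: each metric is assigned to one FAIR category.
rubric = {
    "persistent identifier": "F",
    "machine-readable metadata": "F",
    "open access protocol": "A",
    "standard data format": "I",
    "explicit license": "R",
}

# Assessment: each metric answered with a value in [0, 1]
# (0 = not satisfied, 1 = fully satisfied, fractions allowed).
assessment = {
    "persistent identifier": 1.0,
    "machine-readable metadata": 0.5,
    "open access protocol": 1.0,
    "standard data format": 0.0,
    "explicit license": 1.0,
}

def category_scores(rubric, assessment):
    """Average the answered metric values within each FAIR category."""
    by_category = {}
    for metric, category in rubric.items():
        by_category.setdefault(category, []).append(assessment[metric])
    return {category: mean(values) for category, values in by_category.items()}

scores = category_scores(rubric, assessment)
print(scores)  # {'F': 0.75, 'A': 1.0, 'I': 0.0, 'R': 1.0}
```

A summary of this shape makes it easy to see, as the paper reports for dbGaP, SmartAPI, and Dockstore resources, where a resource falls short (here, interoperability) and to re-score it after improvements.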
Publisher
Cold Spring Harbor Laboratory
Cited by 5 articles.