Abstract
The scientific community has increased its efforts to apply and assess the FAIR principles on Digital Objects (DOs) such as publications, datasets, and research software. Consequently, openly available automated FAIR assessment services, such as FAIR Enough, the FAIR Evaluator, and FAIRsFAIR's F-UJI, have been working towards standardization. Digital Competence Centers such as university libraries have been paramount in this process, facilitating a range of activities such as awareness campaigns, trainings, and systematic support. In practice, however, using the FAIR assessment tools is still an intricate process for the average researcher: it involves a steep learning curve and a series of manual steps that require specific knowledge of the frameworks, disengaging some researchers along the way.
We aim to use technology to close this gap and make the process more accessible by bringing the FAIR assessment to the researcher's profile. We will develop "The FAIR extension", an open-source, user-friendly web browser extension that allows researchers to run FAIR assessments directly at the web source. Web browser extensions have long been an accessible digital tool for libraries supporting scholarship (De Sarkar 2015); a notable example is the lightweight version of reference managers deployed as a browser service (Ferguson 2019). Moreover, they have been demonstrated to be a vehicle for open access, as with the Lean Library browser extension.
The FAIR extension is a service that builds on top of community-accepted FAIR evaluator APIs, i.e. it does not intend to create yet another FAIR assessment framework from scratch. The objective of the FAIR Digital Objects Framework (FDOF) is for objects published in a digital environment to comply with a set of requirements, such as identifiability and the use of a rich metadata record (Santos 2021, Schultes and Wittenburg 2019). The FAIR extension will connect via REST-like operations to individual FAIR metrics test endpoints, following Wilkinson et al. (2018) and Wilkinson et al. (2019), and display the FAIR metrics on the client side (Fig. 1). Ultimately, the user will get FAIR scores of articles, datasets, and other DOs in real time on a web source, such as a scholarly platform or DO repository, with the possibility of creating simple reports of the assessment.
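To illustrate the intended client-side flow, the following is a minimal TypeScript sketch of how the extension might call an individual FAIR metric test endpoint for a given DO. The endpoint URL, request payload, and response shape are illustrative assumptions for this sketch, not the documented API of any specific evaluator service.

```typescript
// Sketch: invoking one FAIR metric test from the extension's client side.
// Endpoint, payload, and response shape are assumptions for illustration.

interface MetricResult {
  metricId: string;   // e.g. an identifier for the metric being tested (assumed field)
  score: number;      // assumed: 0 (fail) or 1 (pass) for a single metric test
  comment: string;    // human-readable explanation returned by the test (assumed field)
}

async function runMetricTest(
  endpoint: string,    // URL of an individual FAIR metric test endpoint (assumed)
  subjectGuid: string  // GUID (e.g. a DOI) of the Digital Object under assessment
): Promise<MetricResult> {
  const response = await fetch(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // The payload shape is an assumption; each test service defines its own.
    body: JSON.stringify({ subject: subjectGuid }),
  });
  if (!response.ok) {
    throw new Error(`Metric test failed: HTTP ${response.status}`);
  }
  return response.json() as Promise<MetricResult>;
}
```

In the extension, a content script could detect DOIs on the page, fan out such requests to the registered test endpoints, and render the returned metrics next to each DO.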
It is acknowledged that the development of web-based tools carries some constraints regarding platform version releases, e.g. the Chromium Development Calendar. Nevertheless, we are optimistic about the potential use cases, for example:
A student who wants to use a DO (e.g. a software package) but does not know which one to choose; the FAIR extension will indicate which one is more FAIR and aid the decision-making process.
A data steward recommending sources.
A researcher who wants to display all FAIR metrics of her DOs on a research profile.
A PI who wants to compute an aggregated metric for a project (see the sketch after this list).
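For the PI use case above, a short TypeScript sketch of how per-DO scores might be rolled up into one project-level figure follows. A simple mean over normalized scores is assumed here; the actual aggregation strategy is an open design choice for the extension.

```typescript
// Sketch: aggregating per-DO FAIR scores into a project-level metric.
// A plain mean is an assumption; other weightings are possible.

function aggregateProjectScore(scores: number[]): number {
  if (scores.length === 0) return 0;
  const total = scores.reduce((sum, s) => sum + s, 0);
  return total / scores.length;
}

// Example: three DOs in a project, each with a normalized FAIR score in [0, 1].
const projectScore = aggregateProjectScore([0.9, 0.75, 0.6]);
console.log(`Project FAIR score: ${projectScore.toFixed(2)}`); // 0.75
```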
These use cases can be a means of bringing the open-source community and FAIR DO interest groups together.