Evaluation tool of FAIR criteria literacy and compliance to foster research data sharing
Romain David - INRA, Montpellier SupAgro, Université de Montpellier, Laurence Mabile - INSERM-Université Paul Sabatier Toulouse III, Mohamed Yahia - INIST-CNRS, Mogens Thomsen and Anne Cambon-Thomsen - INSERM-Université Paul Sabatier Toulouse III
The RDA-SHARC (SHAring Reward & Credit) interest group is an interdisciplinary group endorsed by the Research Data Alliance. It aims to improve crediting and rewarding mechanisms for scientists who work towards sharing their data for potential reuse. From this perspective, assessing compliance with FAIR practices and increasing the understanding of FAIRness criteria are critical steps.
To that end, a FAIR criteria assessment survey has been launched to seek feedback from the scientific community on intelligible, realistic and human-readable assessment criteria that could help guide scientists towards following FAIR practices as closely as possible. It should also help evaluators carry out their task objectively.
The criteria to be assessed by the survey respondents are organised into 5 groups. In addition to the ‘Findable’, ‘Accessible’, ‘Interoperable’ and ‘Reusable’ criteria, we have included ‘Motivations for Sharing’. For each criterion, 4 choices are proposed (‘Never / Not Assessable’; ‘If mandatory only’; ‘Sometimes’; ‘Always’). One choice, and only one, must be ticked for each criterion. The final score consists of the count of each ticked degree compared to the total number of criteria in each group; the ‘sharing motivations’ are assessed qualitatively in the final interpretation.
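The per-group scoring described above could be sketched as follows. This is a minimal illustration, not the actual survey instrument: the abstract does not specify an exact formula, so the group names, example answers, and the choice to report each degree as a fraction of the group's criteria are assumptions.

```python
from collections import Counter

# The four response degrees proposed for each criterion (from the survey).
DEGREES = ["Never / Not Assessable", "If mandatory only", "Sometimes", "Always"]

def group_scores(ticks):
    """For each group, report the share of criteria ticked at each degree.

    ticks: dict mapping a group name (e.g. 'Findable') to the list of
    ticked degrees, one entry per criterion in that group.
    """
    scores = {}
    for group, answers in ticks.items():
        counts = Counter(answers)
        total = len(answers)
        # Each degree's score is its tick count over the group's criteria count.
        scores[group] = {d: counts.get(d, 0) / total for d in DEGREES}
    return scores

# Hypothetical responses for two groups (illustrative only).
example = {
    "Findable": ["Always", "Always", "Sometimes"],
    "Accessible": ["If mandatory only", "Always"],
}
print(group_scores(example)["Findable"]["Always"])  # 2 of 3 criteria ticked 'Always'
```

Reporting fractions per degree (rather than a single weighted number) keeps the profile of each group visible, which matches the abstract's emphasis on interpretation over a single score; the ‘Motivations for Sharing’ group would be read qualitatively rather than through this calculation.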
This survey aims to gather comments in order to build realistic and usable tools for scientists and evaluators. It will help construct an evaluation tool for FAIR practices that will make it possible to consider data sharing activities as a valuable research output, and will foster this activity by promoting FAIRness literacy and dedicated support for data management.
The poster will present the first results of the survey. Special emphasis will be placed on highlighting the differences in understanding of the human-readable FAIR compliance criteria between different users (researchers and evaluators, young and senior researchers, IT people and others…).