GRAIL: Codesigning responsible uses of AI in research funding and evaluation (RoRI Working Paper No. 13)
The rapid pace of change in artificial intelligence (AI) technologies, and their proliferation throughout daily life, seems poised to profoundly transform research, with some asking whether AI advances signal the ‘end of science.’ AI and machine learning (ML) have been used successfully for decades as powerful tools in research, but recent advances in the accessibility of AI tools and the availability of vast amounts of scientific data have accelerated the pace of change.
For research funders, as stewards of the research ecosystem, AI and ML present unique pressures. AI is regarded by many as a “general purpose technology” with the capacity to boost productivity and transform working practices across entire economies, with particular opportunities in knowledge-focused sectors such as research. AI and ML offer significant opportunities to enhance the knowledge work of research practice and research funding, while posing equally significant dilemmas and uncertainties: the changing nature of scientific knowledge, risks to the reliability and validity of research, and the shape of good scientific practice in an “AI everywhere” world.
In 2023, RoRI launched the Getting Responsible about AI and machine Learning in research funding and evaluation (GRAIL) project in partnership with an international consortium of research funders. GRAIL addresses the need for new research evidence on effective strategies for the responsible and successful use of AI/ML in application contexts such as research funding, and for practical guidelines and resources to help funders adopt best practices for designing, using, and evaluating AI/ML tools in their own contexts.