Presenters: Gareth O’Neill, Andrea Mannocci, Clifford Tatum, and Elizabeth Gadd
Chairs: Clara Calero and Ludo Waltman
Technical support: Zeynep Anli
Introduction
OPUS is an EU-funded project that aims to help reform research assessment toward a system that incentivises researchers to practise open science. GraspOS is a closely related EU-funded project that aims to pave the way toward open-science-aware responsible research assessment and to deliver an open infrastructure for metrics and indicators supporting modern research evaluation practices. The OPUS and GraspOS teams are joining forces to jointly organise a special session at the STI 2023 conference.
Topic and aims of the special session
The topic of this special session is research(er) evaluation at the interface of open science and responsible research assessment. This topic is at the heart of the OPUS and GraspOS projects. It also relates directly to two of the core topics of the STI 2023 conference: open science practices and the reform of research evaluation practices. These topics represent key areas of attention in ongoing policy discussions and emerging implementation initiatives, both in Europe and worldwide. We expect many of the conference participants to be interested in these discussions and in the ongoing work of the OPUS and GraspOS projects. The proposed special session aims to inform conference participants about this work, solicit their feedback on it, and invite them to get more closely involved by joining the projects’ communities of practice. Special attention will be given to the relationship between open science on the one hand and responsible research assessment on the other. Both represent powerful reform movements and, to a significant extent, their agendas appear to align and mutually reinforce each other. Nevertheless, early observations of the teams working in both projects suggest that careful coordination between the two movements is needed to avoid the risk of tensions, since in some cases their objectives may not be fully aligned (e.g., an excessive focus on openness might draw attention away from other important issues in research assessment).
Format of the special session
This special session will feature speakers from the OPUS and GraspOS projects. In addition, we will invite a member of the steering board of the Coalition for Advancing Research Assessment (CoARA) to discuss how the projects can help members of CoARA to implement the commitments in the Agreement on Reforming Research Assessment. The special session will have an interactive format that offers ample opportunity for discussion with the speakers. Online participation will also be facilitated. We plan to use real-time polls (e.g., using Mentimeter) to maximise engagement with the audience.
Presenters: Angelika Tsivinskaya, Kaja Wendt, and Alessandro Avenali
Chair: Carey Ming-Li Chen
Academic careers in sociology and peer recognition
Temporary employment in the Norwegian higher education sector
Presenters: Puck van der Wouden and Nuria Bautista-Puig
Chair: Ed Noyons
Does research intensity reflect dental care demand?
The participation of the public in knowledge production: a citizen science projects overview
Presenters: Mazen El Makkouk, Valentin Mulder, and Wenceslao Arroyo-Machado
Chair: Angelo Salatino
Presenters: Theresa Burscher and Nico Pintar
Chair: Thomas Scherngell
Potentials for reducing spatial inequalities in innovation: A spatial econometric perspective
The impact of knowledge complexity on total factor productivity in European metropolitan regions
Presenters: Maria-Theresa Norn, Thed van Leeuwen, and Chantal Ripp
Chair: Marc Luwel
Gold Open Access output and expenditures in the United States in the past decade
Presenters: Pengfei Jia, Yifei Yu, and Yiwen Tang
Chair: Alfredo Yegros
Landscape of intellectual property protection in plant varieties: From a network view
Non-unilateral patents: A novel indicator for assessing innovation performance
Presenters: Lorena Delgado Quirós, Thomas Scheidsteger, Jose de Melo Maricato, and Angel Borrego
Chair: Rodrigo Costas
Comparing bibliographic descriptions in seven free-access databases
Presenters: Abhiru Nandy, Marc Luwel, Ioanna Grypari, and Rik Iping
Chair: Robin Haunschild
Analyzing the use of email addresses in scholarly publications
Project IntelComp: AI-assisted Research and Innovation Policy-Making
Presenters: Sotaro Shibayama, Henrique Pinheiro, Gege Lin, and Ross Potter
Chair: Vincent Traag
Quantifying citation inflation: measurement methods and their implications for research evaluation
Presenters: Jochen Gläser, Peter van den Besselaar, Kevin Boyack, and Rüdiger Mutz
Chair: Jesper Schneider
A local cohesion-maximising algorithm for the exploration of publication networks
Field Effects in Predicting Exceptional Growth in Research Communities
Presenters: Angelo Salatino, Kamila Lewandowska, and Natalija Todorovic
Chair: Sybille Hinze
From STEM to STEAM? Exploring the connections between Arts and Sciences
Presenters: Kathrine Bjerg Bennike, Poul Melchiorsen, Eleonora Dagiene, Juergen Wastl, and Dmitry Kochetkov
Chair: James Wilsdon
An institutional implementation of the new European reform of research assessment
Why shouldn’t university rankings be used to evaluate research and researchers?
Presenters: Xin Liu, Mayline Strouk, and Julia Melkers
Chair: Renate Reitsma
Distance matters: The causal effect of coauthor mobility on scientific collaboration
Social innovation for resilience in international collaborative research
Presenters: Ligia Ernesto, Ekaterina Dyachenko, and Nikki Vermeulen
Chair: Robert Tijssen
How networked are Medical Schools? Evidence from Portugal
Evaluating international strategic partnerships between universities
Presenters: Philippe Gorry, Jiang Li, Rong Ni, and Hendrik Karlstrom
Chair: Zohreh Zahedi
Analysis of the PubMed Commons Post-Publication Peer Review Platform
Is the acceptance time shorter for submissions with preprints?
To Preprint or Not to Preprint: Experience and Attitudes of Researchers Worldwide
Presenters: Jochem Zuijderwijk, Carter Bloch, and Benedetto Lepori
Chair: Ronald Rousseau
Presenters: Jian Wang and Chun Li Liu
Chair: Eric James Iversen
Comparing patent front-page and in-text references to science
Measuring the science-technology-innovation linkage and its evolution based on citation and text features of FDA-approved drugs-patents-papers
Presenters: Annika Just, Juan Carlos Castillo, Juil Kim, and Nicolau Duran-Silva
Chair: Kathleen Gregory
Towards building a monitoring platform for a challenge-oriented smart specialisation with RIS3-MCAT
Synergies across Innovations Obstacles and the Role of Government Aid: Evidence from Chile
Presenters: André Brasil, Taekho You, and Raf Guns
Chair: Daniel Hook
Between Bibliometrics and Peer Review: The Evolution and Challenges of Brazil’s Qualis System
Presenters: Otto Auranen, Carlos Vílchez-Román, and Mark Simard
Chair: Benedetto Lepori
Presenters: Andrey Guskov, Kuniko Matsumoto, and Maria Henkel
Chair: Soohong Eum
Country shifts in the authorship of conference papers
Geographical distribution of high-novelty research
Who studies whom? An Analysis of Geo-Contextualized Sustainable Development Goal Research
Presenters: Mathias Wullum Nielsen, Luyu Du, Richard Woolley, and Huilin Ge
Chair: Jonathan Dudek
International mobility and career progression of European academics
Presenters: Dimity Stephen, Leslie McIntosh, and Tang Li
Chair: Carole de Bordes
Research Integrity Indicators in the Age of Artificial Intelligence
Research misconduct investigations in China’s science funding system
Presenters: Malik Salami, Er-Te Zheng, and Xiang Zheng
Chair: Zohreh Zahedi
Do men commit more scientific misconduct than women? Evidence from retracted articles
The effectiveness of peer review in identifying issues leading to retractions
Presenters: Shuying Chen, Marion Maisonobe, and Frederique Bordignon
Chair: Zohreh Zahedi
What does it mean to correct the scientific record? A case study of the JACS (2000-2023)
Retraction Practices and Effects: A Characterization and Quantification Study of Retraction Notices
Presenters: Hans Jonker, Biegzat Murat, and Robin Haunschild
Chair: Stephen Pinfield
A first snapshot of academics’ media mentions and policy citations in Flanders, Belgium
Exploratory analysis of policy document sources in Altmetric.com and Overton
Presenters: Cibele Aguiar, Cornelia Lawson, and Dangqiang Ye
Chair: Andrew Plume
LinkedIn use by academics: an indicator for science policy and research?
WeChat Presence of Chinese Scholarly Journals: An Analysis of CSCD-indexed Journals
Presenters: Autumn Toney, Remi Toupin, and Nicolas Robinson-Garcia
Chair: Philippe Mongeon
Impacts of Social Media Sentiments on Retractions of Scholarly Literature
Presenters: Ashraf Maleki, Madelaine Hare, and Daniel Torres-Salinas
Chair: Natalija Todorovic
Do You Cite What You Tweet? Contextualizing the Tweet-Citation Relationship
Presenters: Thomas Klebel, Kathleen Gregory, and Theresa Velden
Chair: Clara Calero Medina
Modelling the effect of funding selectivity on the uptake of data sharing in the academic community
Rewarding data sharing and reuse: Initial results of an interview study
Presenters: Andrew Herman, Jing Wang, and Jens-Peter Andersen
Chair: André Brasil
Editorial gatekeeping up and down the journal hierarchy
The journal attention economy in China
Through the Secret Gate: A Study of Member-Contributed Submissions in PNAS
Presenters: Victoria Pham, Daniel Torres-Salinas, Ozgur Ozer, and Alexander Schniedermann
Chair: Inge van der Weijden
Academic Elitism: Parental Education and the Career Experiences of Faculty in U.S. Institutions
The relationship between academic seniority and scientific production at the organisational level
Who writes what? The academic age patterns of review genres in biomedicine
Presenters: Thamyres Choji, Soohong Eum, and Natalija Matveeva
Chair: Laurens Hessels
Who Funds Whom? An Exploratory Survey of Top Journal Papers of the Small Post-Soviet Countries
Presenters: Yohanna Juk, Torger Möller, and Elena Chechik
Chair: Inge van der Weijden
Diversity, equity and inclusion: how funding agencies are addressing inequalities in research
Do female academics submit fewer grant applications than men?
Gender, Parenthood, and Academic Performance: Work-life and Work-work Balance in Russian Academia
Henry Small
Eugene Garfield Memorial Lecture: Exploring the Past and Charting the Future of Co-citation. After the lecture, there will be a cocktail reception.
Tung Tung Chan, Elizabeth Gadd, William Bramwell, and Ludo Waltman
The global university rankings have become an established, if highly contested, part of the higher education landscape. There have been significant critiques of their use of substandard data sources, poorly constructed reputation surveys and other invalid indicators, all arbitrarily weighted into a composite measure of university ‘quality’ arrayed in a single ranking, often without any attempt at error measurement. Despite these limitations, the rankings are heavily relied upon by students, faculty and policy-makers alike for decision-making, and both governments and institutions make considerable investments in seeking to improve their ranking performance. However, in recent months there have been a number of policy developments that are likely to affect future engagement with global rankings.
In July 2022, an international coalition of stakeholders led by the European University Association (EUA), Science Europe and the European Commission published the Agreement on Reforming Research Assessment. One of the four core commitments of the Agreement is to avoid the use of university rankings in the assessment of individual researchers. The Coalition for Advancing Research Assessment (CoARA) now has over 400 signatories, all of whom will be seeking new ways to enact this and other commitments.
In October 2022, the International Network of Research Management Societies (INORMS) Research Evaluation Group launched a new initiative called More Than Our Rank. The initiative provides higher education institutions with an opportunity to describe in a narrative way how much they have to offer society that is not captured by the global university rankings. A small number of signatories have already adopted the initiative, which is supported by a wide range of international responsible research assessment organisations.
In December 2022, the Harnessing the Metric Tide review of indicators, infrastructures & priorities for UK responsible research assessment recommended that institutions rethink their use of university rankings, including considering signing up to More Than Our Rank.
2022 saw a number of universities begin to publicly withdraw from the rankings, including three Chinese institutions, one of which, Nanjing University, was in the top 100. In November 2022, Yale University publicly withdrew from the US News & World Report Law Rankings due to equity-related concerns with the methodology. Six other research-intensive universities quickly followed suit, and in January 2023 a further slew of institutions withdrew from the US News & World Report Medical School Rankings.
In January 2023, the United Nations University Institute for Global Public Health published a white paper on Bias and Colonialism in Global University Rankings and has established an international working group that seeks to make recommendations to mitigate these negative impacts.
A Dutch Expert Group on University Rankings will soon share the findings and recommendations of its work, and the EUA Annual Conference on 20 April is dedicating its plenary session to a discussion about the role of rankings in universities’ quest for ‘excellence’.
Description of the format
A panel session is proposed, including contributions from the Coalition for Advancing Research Assessment (CoARA), the INORMS Research Evaluation Group, the CWTS Leiden Ranking and the Dutch Expert Group on University Rankings. After the presentations, the panel will discuss the issue, informed by audience questions, followed by an audience vote on whether we should accept, amend or avoid the university rankings.
Ana Persic (UNESCO), Ismael Rafols and Vincent Traag (Leiden University), Ameet Doshi (Princeton University), Diana Hicks (Georgia Tech) and Leslie Chan (University of Toronto)
This session aims to reflect on the challenges of monitoring Open Science (OS), both in terms of the development of OS practices and policies and in terms of their effects, building on ongoing work by UNESCO and the EC PathOS project. There are increasing policy demands for monitoring OS, in the face of many initiatives to promote OS by a variety of organisations, from universities to supranational institutions. However, given the large diversity of activities and policies associated with OS, how can these and their effects be monitored? How can we gather evidence of OS developments associated with engagement and dialogue? How can we capture the outcomes and impacts of OS initiatives, e.g. in terms of pluralising the users of scientific knowledge? How can these monitoring efforts be contextualised to account for highly disparate settings?
Alex Rushforth and Janne Pölönen
The publication of the European Agreement on Reforming Research Assessment (ARRA) in July 2022 is the latest, and perhaps most high-profile, attempt to date to steer research systems towards reform of how individuals, proposals and organizations are recognized and rewarded.
While calls for reform are international in scope, not all countries, regions and organizations have moved with equal speed or enthusiasm to enact this agenda, and there are uncertainties about what such change processes entail. This special session focuses on the implementation of reforms within four ‘early adopter’ institutions from the Netherlands and Finland – where national reform programmes exist – as well as Denmark and Italy.
Speakers in the session have been involved in implementing reforms locally within their own research performing organisations (RPOs). We will hear about the process they have gone through, the opportunities and frictions they encountered, and the lessons learnt about the resources and infrastructure needed to support such efforts. As some of the first movers in this space, they will provide valuable points for discussion, debate and learning for institutions and members of the STI community engaging with ARRA.
Session
The hybrid session will be chaired by Janne Pölönen (Federation of Finnish Learned Societies) and Alex Rushforth (Leiden University), with a short introduction and a Q&A discussion with the audience. Speakers (all confirmed) include Sarah de Rijcke (Leiden University), Laura Niemi (University of Turku), Birger Larsen (Aalborg University) and Francesca di Donato (National Research Council of Italy).
Denis Newman-Griffis, Anne Jorstad, Raquel Roses, Michael Thelwall, and James Wilsdon
This special session aims to explore key cross-cutting considerations in the use of AI approaches in research funding and evaluation. As best practice in this area is still emerging, the panel will not take a didactic approach; instead, it is oriented towards creating a space of mutual learning and knowledge exchange among the wide range of stakeholders in AI and research funding represented in the STI audience. The panel membership draws on partners involved in the Research on Research Institute’s GRAIL project on Getting Responsible about AI and Machine Learning in Research Funding and Evaluation, and the panel aims to foster the community-driven learning and exchange that underpins the GRAIL project.
Tjitske Holtrop, Thed van Leeuwen, Sarah de Rijcke, Marta Sienkiewicz, and Ludo Waltman
In the past decade, movements to reform research evaluation have been growing internationally, concerned about misapplications of overly narrow performance criteria at the expense of other qualities or policy priorities, such as open science, team science, diversity and inclusion, societal relevance, mission-oriented and transdisciplinary research, or citizen science. These debates have brought to the fore what Science Studies has long understood to be the case: scientific knowledge is a collective achievement of many kinds of actors who engage with one another in academic activities in many different ways and for many different reasons. Yet our structures and cultures of recognition and reward have not followed suit, with quantitative performance criteria (such as bibliometrics and grant income) still guiding evaluation and strategic decision making in hiring, career development, and funding. We will host a conversation that explores the struggles and solutions around recognition and reward in academia at large. What do we understand recognition and reward to mean? What are we and are we not recognizing and/or rewarding when it comes to academic ambitions, activities, actors, and accomplishments? What concrete shifts in recognition and reward have been and are being proposed in Science Studies and beyond? What kinds of practices, collectives and ideas of science, accountability and value do these make possible? And what kinds of methodologies and infrastructures are we building to facilitate making visible, and valuable, what has traditionally been invisible? We will moderate a conversation about empirical examples of, reflections on, and proposals for recognition and reward movements in any academic context.
We would like to invite four actors who represent diverse perspectives on the topic for a round-table discussion, followed by an interactive conversation with the audience. We aim to write a summary report for one of the journals in our field.
Laura Rovelli, Moumita Koley, Natalia Gras, Robert McLean, Erika Kraemer-Mbula, and Ismael Rafols
In contexts of wide and growing economic, social and environmental inequality, there are increasing demands on scientific and academic ecosystems to contribute relevant knowledge to address various “grand challenges” and “critical problems”. In the so-called “Global South”, the impact of Science, Technology and Innovation (STI) policies on the well-being of citizens, and in particular of their most vulnerable groups, is diminished by several problems. Among the most significant are distortions in research assessment systems, which undermine the inclusiveness of STI by privileging the quantitative and the hegemonic-universal over contextual elements.
Based on a set of global and common principles of research assessment reform, the session invites reflection on their implementation in different situated settings and regions of the “Global South”. There is an interest in exploring innovations and experiments in reform that promote inclusiveness in evaluation, targeting regional, institutional and career equity and gender perspectives, as well as social demands for knowledge, expressed through the organized participation of interested actors and through processes of co-creation of open research agendas. An important challenge in tackling these issues is to address regional and local specificities in the directionality and institutional structure of STI, the different approaches for assessing research quality in complex environments and under different assessment procedures, and the need for further harmonization and coordination.
In a round-table format and in a hybrid (face-to-face and virtual) space of exchange, the session invites participants and the interested audience to share and explore a set of analytical approaches, participatory methodologies and case studies involving grassroots movements, in order to enhance the co-design and implementation of situated and substantive transformations in the evaluation systems of the Global South, in dialogue with international efforts.
Euan Adie, Rodrigo Costas, Hans Jonker, Mike Mugabushaka, Biegzat Murat, Ed Noyons, and Catherine Williams
The past few years have witnessed multiple discussions and developments regarding the measurement and study of the societal impact of science. In this debate, altmetric approaches have been extensively explored as a way to capture different forms of interaction between science and broader audiences.
One of the most interesting altmetric sources is policy documents, and particularly the mentions and use that policy documents make of scholarly objects. These include citations of scientific papers, but also mentions of universities, research organisations and even individual researchers. As a result, the collection and indexing of policy documents can be seen as a powerful source of scientometric data for studying and understanding the relationships and interactions that policy makers establish with science and scholarly actors.
In recent years, several initiatives have been launched to collect and develop databases of policy documents and their citations. One of the first was Altmetric.com, which has been indexing policy documents that explicitly cite academic publications. More recently, Overton.io has been developing an extensive database of policy documents and their citations, including not only citations of scientific publications but also citations between policy documents.
These initiatives and databases have demonstrated the interest and value of indexing policy documents and their interactions with science. They have the potential to bring a new understanding of science-society interactions, and more specifically of the role that science and its actors play in the development of new policies. However, the development and availability of these initiatives and databases also open critical questions around the issues, challenges and potential future developments that these databases may need to address. Some of these questions include:
In addition, with the rise of open bibliometric data platforms, another dimension that should be brought into the discussion is the potential for including citation links of policy documents in open bibliometric datasets such as OpenAlex or the OpenAIRE Research Graph. Related questions include:
Aims and contribution of the special session
This special session at the STI 2023 conference aims to create a space for reflection on these questions, enabling the exchange of views and ideas among the different stakeholders interested in policy-science interactions, including policy makers, data providers, researchers and, more generally, all interested participants attending the STI conference.
The main expectation of the special session is to come up with recommendations and a roadmap for the future development of more inclusive and transparent studies of policy-science interactions.
James Wilsdon and Alex Rushforth
First half of session = hybrid; second half = onsite only
In the past decade, there have been a series of high-profile efforts to tackle problems of overreliance on quantitative performance indicators and incentives, including the Declaration on Research Assessment (DORA), the Leiden Manifesto, the Metric Tide, and ongoing work by the Global Research Council. Most recently, the Coalition for Advancing Research Assessment (CoARA) has emerged as a significant initiative, and now has more than 500 organisations committed to sharing ideas and evidence in support of responsible and inclusive assessment systems.
The rapid momentum of CoARA reflects a turn in these debates towards implementing, sharing and scaling solutions, within an expanding agenda of responsible research assessment (RRA), which includes more emphasis on healthy work cultures; research integrity; open scholarship; and principles of equality, diversity and inclusion.
At the same time, there is a need to underpin more normative, values-based agendas for reform of research assessment with robust and open evidence and debate as to what works, and which types of interventions are most effective, or may have unintended consequences. The task of aligning international capacity for meta-research, which has also grown over this period, with the priorities and agendas of RRA is still at a relatively early stage.
As a contribution to this task, this special session at STI 2023 will mark the launch of a new initiative – A Global Observatory of Responsible Research Assessment (AGORRA) – convened by the Research on Research Institute (RoRI).
AGORRA will aim to support and strengthen comparative analysis and progressive reform of national assessment frameworks, with a view to better aligning the needs of funders and other key actors in assessment systems with emerging meta-research capabilities. Through an international network of collaborators, AGORRA will run for an initial period of 5 years (2023-2028). Its specific aims are:
To succeed, this initiative will require dialogue, engagement, and critical feedback from researcher, funder and broader communities. With this learning and improvement ethos in mind, we invite the STI audience to participate in this interactive session to discuss the promises, objectives, and potential pitfalls of the initiative.
The meeting will kick off with a short overview of AGORRA by James Wilsdon and Alex Rushforth, followed by three flash pitches from AGORRA partners on the anticipated relevance of such an observatory to their respective national contexts: Steven Hill (Research England/UKRI) on the UK and the future of the REF, Lin Zhang (Wuhan University) on China, and Marta Wróblewska (University of Social Sciences and Humanities, Warsaw) on Poland.
The second half of the session will have an interactive format, with the audience invited into breakout groups, where they will explore the opportunities, challenges and priorities for a global observatory. Questions for discussion will include: Which interventions merit sustained attention by AGORRA? What would constitute appropriate evidence that a given intervention works? How can we improve learning and exchange across assessment systems? What might be the unintended consequences of scaling-up interventions across a range of organisational, regional and national funding contexts? These insights will then be fed back into the main group discussion.
The meeting will end by outlining future plans and identifying opportunities for the STI community to engage and contribute to AGORRA.
Susanne Koch, Dorothy Ngila, Ismael Rafols, Leandro Rodriguez Medina, Nelius Boshoff, Rodrigo Costas, Jonathan Dudek, Similo Ngwenya, Olena Strelnyk, Shizuku Sunagawa, Camilla Tetley, and Amani Uisso
According to the 2030 Agenda for Sustainable Development, reducing inequality (SDG 10) is an essential prerequisite for a world of justice and non-discrimination, and of equal opportunity permitting the full realization of human potential. Science is expected to play a key part in achieving this vision, although it is itself structured by multiple forms of inequality and divides within its own system. Epistemic hierarchies not only put certain disciplines before others, but also marginalize knowledge that does not conform to dominant paradigms and/or is produced outside scientific centers. Aside from researchers’ position in the global research landscape, socially constructed categories such as gender, ethnicity and race affect the degree of scientific credibility ascribed to them.
The aim of this Special Session is to provide a space for exchange on scholarship that maps and examines diversity and (in)equity in science at local, regional and global levels. In short input talks, scholars and research groups will share how they investigate these topics in specific projects, focusing specifically on the relations between theory, methodology and the evidence generated.
The presentations will be followed by an extended discussion between an invited discussant, presenters and the audience on the following questions:
The contributions of the Special Session relate to the broad themes of equity, diversity and inclusion as well as scholarly communication and societal impact of research.
Stefanie Haustein, Heather Woods, Maddie Hare, Isabelle Dorsch, and Carey Ming-Li Chen
The session’s objective is to bring attention to and empower the bibliometric community to take ownership of metrics education. Improving metrics literacies with the goal of reducing the misuse of bibliometric indicators is in line with current transitions towards healthier academic culture, including the Coalition for Advancing Research Assessment (CoARA) initiative. The proposed session can work as an incubator of ideas on how to effectively and efficiently communicate the knowledge of bibliometric experts to the wider audience of users of scholarly metrics. We also hope it can facilitate collaborations between various stakeholders, including bibliometric researchers and analysts, data providers and librarians.
Sabina Leonelli is Professor of Philosophy and History of Science and Director of the Centre for the Study of the Life Sciences (Egenis) at the University of Exeter, UK. Her research concerns the epistemology, history and governance of data-intensive science; modelling and data integration across the biological, environmental and health sciences; and open science and related evaluation systems in the global – and highly unequal – research landscape. She currently leads the ERC project “A Philosophy of Open Science for Diverse Research Environments” (2021-2026, www.opensciencestudies.eu), and her most recent book, Philosophy of Open Science, is now available from Cambridge University Press in Open Access format.
Trust and Truth in Research Assessment
The drive towards more responsible modes of research assessment was partly fuelled by over-confidence in poorly conceived indicators as a source of truth, which led to a lack of trust from the research community in existing evaluative approaches. However, this lack of trust extends beyond unhelpful quantitative methods: towards biased forms of qualitative assessment; towards (and between) the professional and academic actors making the assessments; and towards the role assessment plays in the neoliberalisation of higher education as a whole. The responsible research assessment movement provides an opportunity to rebuild trust in, and to debunk myths around the ‘truth’ of, evaluative approaches. This talk will explore how we might achieve this, with a particular focus on the INORMS SCOPE Framework for responsible research evaluation.
Dr Elizabeth (Lizzie) Gadd
Dr Elizabeth (Lizzie) Gadd chairs the INORMS Research Evaluation Group and is Vice Chair of the Coalition for Advancing Research Assessment (CoARA). In 2022, she co-authored ‘Harnessing the Metric Tide: Indicators, Infrastructures & Priorities for UK Research Assessment’. Lizzie is the research quality and culture lead at Loughborough University, UK, and champions the ARMA Research Evaluation SIG. She previously founded the LIS-Bibliometrics Forum and The Bibliomagician Blog, and was the recipient of the 2020 INORMS Award for Excellence in Research Management and Leadership.
Adrian Barnett, Jennifer Byrne, Jason M. Chin, Alex Holcombe, Wolfgang Kaltenbrunner, Stephen Pinfield, Simine Vazire, Ludo Waltman, and James Wilsdon
RoRI and AIMOS will introduce MetaROR to the STI community. The proposed special session has the following objectives:
Elizabeth Pollitzer, Rachel Palmen, Karsten Gareis, Lucio Pisacane, Susanne Bührer, Helene Schiffbänker, and Martina Schraudner
The purpose of this session is to initiate a multistakeholder dialogue leading to a consensus on how best to monitor the inclusion of (intersectional, e.g., age, geography, language, etc.) gender perspectives in science knowledge-making, in order to support improvements in the quality of outcomes for women, men, and other significant but so far underrepresented groups. This discussion is needed to:
Lunch will be served in the Catharina foyer. If the weather is good, you can enjoy your lunch in the garden outside the foyer.
The ENID board meeting takes place in the Jan-Willem Schaap foyer during lunchtime.
The ENID Assembly takes place in the Jan-Willem Schaap foyer during lunchtime.