
Published: 14 December 2022

Advancing ethics review practices in AI research

  • Madhulika Srikumar (ORCID: orcid.org/0000-0002-6776-4684)1,
  • Rebecca Finlay1,
  • Grace Abuhamad2,
  • Carolyn Ashurst3,
  • Rosie Campbell4,
  • Emily Campbell-Ratcliffe5,
  • Hudson Hongo1,
  • Sara R. Jordan6,
  • Joseph Lindley (ORCID: orcid.org/0000-0002-5527-3028)7,
  • Aviv Ovadya (ORCID: orcid.org/0000-0002-8766-0137)8 &
  • Joelle Pineau (ORCID: orcid.org/0000-0003-0747-7250)9,10

Nature Machine Intelligence volume 4, pages 1061–1064 (2022)

9234 Accesses

10 Citations

46 Altmetric


A Publisher Correction to this article was published on 11 January 2023


The implementation of ethics review processes is an important first step for anticipating and mitigating the potential harms of AI research. Its long-term success, however, requires a coordinated community effort to support experimentation with different ethics review processes, to study their effects, and to provide opportunities for diverse voices from the community to share insights and foster norms.


As artificial intelligence (AI) and machine learning (ML) technologies continue to advance, awareness of the potential negative consequences of AI and ML research on society has grown. Anticipating and mitigating these consequences can only be accomplished with the help of the leading experts on this work: researchers themselves.

Several leading AI and ML organizations, conferences and journals have therefore started to implement governance mechanisms that require researchers to directly confront risks related to their work, ranging from malicious use to unintended harms. Some have initiated new ethics review processes, integrated within peer review, which primarily facilitate reflection on the potential risks and effects on society after the research is conducted (Box 1). This is distinct from other responsibilities that researchers undertake earlier in the research process, such as the protection of the welfare of human participants, which are governed by bodies such as institutional review boards (IRBs).

Box 1 Current ethics review practices

Current ethics review practices can be thought of as a sliding scale that varies in how extensively submitting authors must conduct an ethical analysis and document it in their contributions. Most conferences and journals have yet to initiate ethics review.

Key examples of different types of ethics review process are outlined below.

Impact statement

NeurIPS 2020 broader impact statements - all authors were required to include a statement of the potential broader impact of their work, including its ethical aspects and future societal consequences, both positive and negative. Organizers also specified additional evaluation criteria for paper reviewers to flag submissions with potential ethical issues.

Other examples include the NAACL 2021 and EMNLP 2021 ethical considerations sections, which encourage authors and reviewers to consider ethical questions in their submitted papers.

Nature Machine Intelligence asks authors for ethical and societal impact statements in papers that involve the identification or detection of humans or groups of humans, including behavioural and socio-economic data.

NeurIPS 2021 paper checklist - a checklist to prompt authors to reflect on potential negative societal effects of their work during the paper writing process (as well as other criteria). Authors of accepted papers were encouraged to include the checklist as an appendix. Reviewers could flag papers that required additional ethics review by the appointed ethics committee.

Other examples include the ACL Rolling Review (ARR) Responsible NLP Research checklist, which is designed to encourage best practices for responsible research.

Code of ethics or guidelines

International Conference on Learning Representations (ICLR) code of ethics - ICLR required authors to review and acknowledge the conference’s code of ethics during the submission process. Authors were not expected to include discussion on ethical aspects in their submissions unless necessary. Reviewers were encouraged to flag papers that may violate the code of ethics.

Other examples include the ACM Code of Ethics and Professional Conduct, which considers ethical principles but through the wider lens of professional conduct.

Although these initiatives are commendable, they have yet to be widely adopted, and they are being pursued largely without the benefit of community alignment. As researchers and practitioners from academia, industry and non-profit organizations in the field of AI and its governance, we believe that community coordination is needed to ensure that critical reflection is meaningfully integrated within AI research to mitigate its harmful downstream consequences. The pace of AI and ML research and its growing potential for misuse necessitate that this coordination happen today.

Writing in Nature Machine Intelligence, Prunkl et al.1 argue that the AI research community needs to encourage public deliberation on the merits and future of impact statements and other self-governance mechanisms in conference submissions. We agree. Here, we build on this suggestion and provide three recommendations to enable effective community coordination as more ethics review approaches begin to emerge across conferences and journals. We believe that a coordinated community effort will require: (1) more research on the effects of ethics review processes; (2) more experimentation with such processes themselves; and (3) the creation of venues in which diverse voices both within and beyond the AI or ML community can share insights and foster norms. Although many of the challenges we address have been previously highlighted1,2,3,4,5,6, this Comment takes a wider view, calling for collaboration between different conferences and journals by contextualizing this conversation against more recent studies7,8,9,10,11 and developments.

Developments in AI research ethics

In the past, many applied scientific communities have contended with the potential harmful societal effects of their research. The infamous anthrax attacks in 2001, for example, catalysed the creation of the National Science Advisory Board for Biosecurity to prevent the misuse of biomedical research. Virology, in particular, has had long-running debates about the responsibility of individual researchers conducting gain-of-function research.

Today, the field of AI research finds itself at a similar juncture12. Algorithmic systems are now being deployed for high-stakes applications such as law enforcement and automated decision-making, in which the tools have the potential to increase bias, injustice, misuse and other harms at scale. The recent adoption of ethics and impact statements and checklists at some AI conferences and journals signals a much-needed willingness to deal with these issues. However, these ethics review practices are still evolving and experimental in nature. These developments acknowledge gaps in existing, well-established governance mechanisms, such as IRBs, which focus on risks to human participants rather than risks to society as a whole. This limited focus leaves ethical issues, such as the welfare of data workers and non-participants and the implications of data generated by or about people, outside of their scope6. We acknowledge that such ethical reflection, beyond IRB mechanisms, may also be relevant to other academic disciplines, particularly those for which large datasets created by or about people are increasingly common, but such a discussion is beyond the scope of this piece. The need to reflect on ethical concerns seems particularly pertinent within AI because of its relative infancy as a field, the rapid development of its capabilities and outputs, and its increasing effects on society.

In 2020, the NeurIPS ML conference required all papers to carry a ‘broader impact’ statement examining the ethical and societal effects of the research. The conference updated its approach in 2021, asking authors to complete a checklist and to document potential downstream consequences of their work. In the same year, the Partnership on AI released a white paper calling for the field to expand peer review criteria to consider the potential effects of AI research on society, including accidents, unintended consequences, inappropriate applications and malicious uses3. In an editorial citing the white paper, Nature Machine Intelligence announced that it would ask submissions to carry an ethical statement when the research involves the identification of individuals and related sensitive data13, recognizing that mitigating downstream consequences of AI research cannot be completely disentangled from how the research itself is conducted. In another recent development, Stanford University’s Ethics and Society Review (ESR) requires AI researchers who apply for funding to identify whether their research poses any risks to society and to explain how those risks will be mitigated through research design14.

Other developments include the rising popularity of interdisciplinary conferences examining the effects of AI, such as the ACM Conference on Fairness, Accountability, and Transparency (FAccT), and the emergence of ethical codes of conduct for professional associations in computer science, such as the Association for Computing Machinery (ACM). Other actors have focused on upstream initiatives such as the integration of ethics reflection into all levels of the computer science curriculum.

Reactions from the AI research community to the introduction of ethics review practices include fears that these processes could restrict open scientific inquiry3. Scholars also note the inherent difficulty of anticipating the consequences of research1, with some AI researchers expressing concern that they do not have the expertise to perform such evaluations7. Other challenges include concerns about the lack of transparency in review practices at corporate research labs (which increasingly contribute the most highly cited papers at premier AI conferences such as NeurIPS and ICML9), as well as an academic research culture and incentives supporting the ‘publish or perish’ mentality that may not allow time for ethical reflection.

With the emergence of these new attempts to acknowledge and articulate unique ethical considerations in AI research and the resulting concerns from some researchers, the need for the AI research community to come together to experiment, share knowledge and establish shared best practices is all the more urgent. We recommend the following three steps.

Study community behaviour and share learnings

So far, few studies have explored the responses of ML researchers to the launch of experimental ethics review practices. To understand how behaviour is changing and how to align practice with intended effect, we need to study what is happening and share learnings iteratively. For example, in response to the NeurIPS 2020 requirement for broader impact statements, one study found that most researchers surveyed spent fewer than two hours on the statement7, perhaps retroactively towards the end of their research, making it difficult to know whether this reflection influenced or shifted research directions. Surveyed researchers also expressed scepticism about the mandated reflection on societal impacts7. An analysis of preprints found that researchers assessed impact through the narrow lens of technical contributions (that is, describing their work in the context of how it contributes to the research space and not how it may affect society), thereby overlooking potential effects on vulnerable stakeholders8. A qualitative analysis of a larger sample10 and a quantitative analysis of all submitted papers11 found that engagement was highly variable, and that researchers tended to favour the discussion of positive effects over negative effects.

We need to understand what works. These findings, all drawn from studies examining the implementation of ethics review at NeurIPS 2020, point to a pressing need to compare actual with intended community behaviour more thoroughly and consistently to evaluate the effectiveness of ethics review practices. We recognize that other fields have considered research ethics in different ways. To get started, we propose the following approach, building on and expanding the analysis of Prunkl et al.1.

First, clear articulation of the purposes behind impact statements and other ethics review requirements is needed to evaluate efficacy and motivate future iterations by the community. Publication venues that organize ethics review must communicate expectations of this process comprehensively both at the level of individual contribution and for the community at large. At the individual level, goals could include encouraging researchers to reflect on the anticipated effects on society. At the community level, goals could include creating a culture of shared responsibility among researchers and (in the longer run) identifying and mitigating harms.

Second, because the exercise of anticipating downstream effects can be abstract and risks being reduced to a box-ticking endeavour, we need more data to ascertain whether impact statements and checklists effectively promote reflection. As in the studies above, conference organizers and journal editors must monitor community behaviour through surveys of researchers and reviewers, partner with information scientists to analyse the responses15, and share their findings with the larger community. Reviewing community attitudes more systematically can provide data on both the process and effect of reflecting on harms for individual researchers and the quality of exploration encountered by reviewers, and can uncover systemic challenges to practising thoughtful ethical reflection. Work to better understand how AI researchers view their responsibility for the effects of their work in light of changing social contexts is also crucial.

Evaluating whether AI or ML researchers are more explicit about the downsides of their research in their papers is a preliminary metric for measuring change in community behaviour at large2. An analysis of the potential negative consequences of AI research can consider the types of application the research can make possible, the potential uses of those applications, and the societal effects they can cause4.

Building on the efforts at NeurIPS16 and NAACL17, we can openly share our learnings as conference organizers and ethics committee members to gain a better understanding of what does and does not work.

Community behaviour in response to ethics review at the publication stage must also be studied to evaluate how structural and cultural forces throughout the research process can be reshaped towards more responsible research. The inclusion of diverse researchers and ethics reviewers, as well as people who face existing and potential harm, is a prerequisite to conduct research responsibly and improve our ability to anticipate harms.

Expand experimentation with ethics review

The low uptake of ethics review practices, and the lack of experimentation with such processes, limit our ability to evaluate the effectiveness of different approaches. Experimentation cannot be limited to a few conferences that focus on some subdomains of ML and computing research — especially for subdomains that envision real-world applications such as in employment, policing and healthcare settings. For instance, NeurIPS, which is largely considered a methods and theoretical conference, began an ethics review process in 2020, whereas conferences closer to applications, such as top-tier conferences in computer vision, have yet to implement such practices.

Sustained experimentation across subfields of AI can help us to study actual community behaviour, including differences in researcher attitudes and the unique opportunities and challenges that come with each domain. In the absence of accepted best practices, implementing ethics review processes will require conference organizers and journal editors to act under uncertainty. For that reason, we recognize that it may be easier for publication venues to begin their ethics review process by making it voluntary for authors. This can provide researchers and reviewers with the opportunity to become familiar with ethical and societal reflection, remove incentives for researchers to ‘game’ the process, and help the organizers and wider community to get closer to identifying how they can best facilitate the reflection process.

Create venues for debate, alignment and collective action

This work requires considerable cultural and institutional change that goes beyond the submission of ethical statements or checklists at conferences.

Ethical codes in scientific research have proven insufficient in the absence of community-wide norms and discussion1. Venues for open exchange can provide opportunities for researchers to share their experiences of and challenges with ethical reflection. Such venues are also conducive to reflection on values as they evolve in AI or ML research, such as which topics are chosen for research, how research is conducted, and which values best reflect societal needs.

The establishment of venues for dialogue where conference organizers and journal editors can regularly share experiences, monitor trends in attitudes, and exchange insights on actual community behaviour across domains, while considering the evolving research landscape and range of opinions, is crucial. These venues would bring together an international group of actors involved throughout the research process, from funders, research leaders, and publishers to interdisciplinary experts adopting a critical lens on AI impact, including social scientists, legal scholars, public interest advocates, and policymakers.

In addition, reflection and dialogue can have a powerful role in influencing the future trajectory of a technology. Historically, gatherings convened by scientists have had far-reaching effects — setting the norms that guide research, and also creating practices and institutions to anticipate risks and inform downstream innovation. The Asilomar Conference on Recombinant DNA in 1975 and the Bermuda Meetings on genomic data sharing in the 1990s are instructive examples of scientists and funders, respectively, creating spaces for consensus-building18,19.

Proposing a global forum for gene editing, scholars Jasanoff and Hurlbut argued that such a venue should promote reflection on “what questions should be asked, whose views must be heard, what imbalances of power should be made visible, and what diversity of views exist globally”20. A forum for global deliberation on ethical approaches to AI or ML research will also need to do this.

By focusing on building the AI research field’s capacity to measure behavioural change, exchange insights and act together, we can amplify emerging ethics review and oversight efforts. Doing this will require coordination across the entire research community and, accordingly, will come with challenges that need to be considered by conference organizers and others in their funding strategies. That said, we believe that there are important incremental steps that can be taken today towards realizing this change: for example, hosting an annual workshop on ethics review at pre-eminent AI conferences, holding public panels on this subject21, hosting a workshop to review ethics statements22, and bringing conference organizers together23. Recent initiatives undertaken by AI research teams at companies to implement ethics review processes24, better understand societal impacts25 and share learnings26,27 also show how industry practitioners can have a positive effect. The AI community recognizes that more needs to be done to mitigate this technology’s potential harms. Recent developments in ethics review in AI research demonstrate that we must take action together.

Change history

11 January 2023

A Correction to this paper has been published: https://doi.org/10.1038/s42256-023-00608-6

References

1. Prunkl, C. E. A. et al. Nat. Mach. Intell. 3, 104–110 (2021).

2. Hecht, B. et al. Preprint at https://doi.org/10.48550/arXiv.2112.09544 (2021).

3. Partnership on AI. https://go.nature.com/3UUX0p3 (2021).

4. Ashurst, C. et al. https://go.nature.com/3gsQfvp (2020).

5. Hecht, B. https://go.nature.com/3AASZhf (2020).

6. Ashurst, C., Barocas, S., Campbell, R. & Raji, D. in FAccT '22: 2022 ACM Conf. on Fairness, Accountability, and Transparency 2057–2068 (2022).

7. Abuhamad, G. et al. Preprint at https://arxiv.org/abs/2011.13032 (2020).

8. Boyarskaya, M. et al. Preprint at https://arxiv.org/abs/2011.13416 (2020).

9. Birhane, A. et al. in FAccT '22: 2022 ACM Conf. on Fairness, Accountability, and Transparency 173–184 (2022).

10. Nanayakkara, P. et al. in AIES '21: Proc. 2021 AAAI/ACM Conf. on AI, Ethics, and Society 795–806 (2021).

11. Ashurst, C., Hine, E., Sedille, P. & Carlier, A. in FAccT '22: 2022 ACM Conf. on Fairness, Accountability, and Transparency 2047–2056 (2022).

12. National Academies of Sciences, Engineering, and Medicine. https://go.nature.com/3UTKOEJ (accessed 16 September 2022).

13. Nat. Mach. Intell. 3, 367 (2021).

14. Bernstein, M. S. et al. Proc. Natl Acad. Sci. USA 118, e2117261118 (2021).

15. Pineau, J. et al. J. Mach. Learn. Res. 22, 7459–7478 (2021).

16. Bengio, S. et al. Neural Information Processing Systems. https://go.nature.com/3tQxGEO (2021).

17. Bender, E. M. & Fort, K. https://go.nature.com/3TWnbua (2021).

18. Gregorowius, D., Biller-Andorno, N. & Deplazes-Zemp, A. EMBO Rep. 18, 355–358 (2017).

19. Jones, K. M., Ankeny, R. A. & Cook-Deegan, R. J. Hist. Biol. 51, 693–805 (2018).

20. Jasanoff, S. & Hurlbut, J. B. Nature 555, 435–437 (2018).

21. Partnership on AI. https://go.nature.com/3EpQwY4 (2021).

22. Sturdee, M. et al. in CHI '21: CHI Conf. on Human Factors in Computing Systems Extended Abstracts; https://doi.org/10.1145/3411763.3441330 (2021).

23. Partnership on AI. https://go.nature.com/3AzdNFW (2022).

24. DeepMind. https://go.nature.com/3EQyUWT (2022).

25. Meta AI. https://go.nature.com/3i3PBVX (2022).

26. Munoz Ferrandis, C. OpenRAIL; https://huggingface.co/blog/open_rail (2022).

27. OpenAI. https://go.nature.com/3GyZPYk (2022).


Author information

Authors and affiliations

Partnership on AI, San Francisco, CA, USA

Madhulika Srikumar, Rebecca Finlay & Hudson Hongo

ServiceNow, Santa Clara, CA, USA

Grace Abuhamad

The Alan Turing Institute, London, UK

Carolyn Ashurst

OpenAI, San Francisco, CA, USA

Rosie Campbell

Centre for Data Ethics and Innovation, London, UK

Emily Campbell-Ratcliffe

Future of Privacy Forum, Washington, DC, USA

Sara R. Jordan

Design Research Works, Lancaster University, Lancaster, UK

Joseph Lindley

Belfer Center for Science and International Affairs, Harvard Kennedy School, Cambridge, MA, USA

Aviv Ovadya

Meta AI, Menlo Park, CA, USA

Joelle Pineau

McGill University, Montreal, Canada


Corresponding author

Correspondence to Madhulika Srikumar .

Ethics declarations

Competing interests

The authors declare no competing interests.

Peer review

Peer review information

Nature Machine Intelligence thanks Carina Prunkl and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.


About this article

Cite this article

Srikumar, M., Finlay, R., Abuhamad, G. et al. Advancing ethics review practices in AI research. Nat. Mach. Intell. 4, 1061–1064 (2022). https://doi.org/10.1038/s42256-022-00585-2

Issue Date: December 2022




Open Access | Peer-reviewed | Research Article

Research ethics review during the COVID-19 pandemic: An international study

  • Fabio Salamanca-Buentello (Lunenfeld-Tanenbaum Research Institute, Bridgepoint Collaboratory for Research and Innovation, Sinai Health, Toronto, Canada; current address: Dalla Lana School of Public Health, University of Toronto, Toronto, Canada). Roles: Conceptualization, Formal analysis, Investigation, Methodology, Visualization, Writing – original draft.
  • Rachel Katz (Institute for the History and Philosophy of Science and Technology, University of Toronto, Toronto, Canada). Roles: Conceptualization, Writing – review & editing.
  • Diego S. Silva (School of Public Health, The University of Sydney, Sydney, Australia). Roles: Conceptualization, Funding acquisition, Methodology, Validation, Writing – review & editing.
  • Ross E. G. Upshur (Lunenfeld-Tanenbaum Research Institute, Bridgepoint Collaboratory for Research and Innovation, Sinai Health, Toronto, Canada; Dalla Lana School of Public Health, University of Toronto, Toronto, Canada). Roles: Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Supervision, Writing – review & editing.
  • Maxwell J. Smith (Faculty of Health Sciences, Western University, London, Canada; corresponding author, e-mail: [email protected]). Roles: Conceptualization, Funding acquisition, Investigation, Methodology, Project administration, Resources, Supervision, Validation, Writing – review & editing.

Published: April 16, 2024
https://doi.org/10.1371/journal.pone.0292512

Abstract

Research ethics review committees (ERCs) worldwide faced daunting challenges during the COVID-19 pandemic. There was a need to balance rapid turnaround with rigorous evaluation of high-risk research protocols in the context of considerable uncertainty. This study explored the experiences and performance of ERCs during the pandemic. We conducted an anonymous, cross-sectional, global online survey of chairs (or their delegates) of ERCs who were involved in the review of COVID-19-related research protocols after March 2020. The survey ran from October 2022 to February 2023 and consisted of 50 items, with opportunities for descriptive responses to open-ended questions. Two hundred and three participants [130 from high-income countries (HICs) and 73 from low- and middle-income countries (LMICs)] completed our survey. Respondents came from diverse entities and organizations from 48 countries (19 HICs and 29 LMICs) in all World Health Organization regions. Responses show that little of the increased global funding for COVID-19 research was allotted to the operation of ERCs. Few ERCs had pre-existing internal policies to address operation during public health emergencies, but almost half used existing guidelines. Most ERCs modified existing procedures or designed and implemented new ones, but had not evaluated the success of these changes. Participants overwhelmingly endorsed permanently implementing several of them. Few ERCs added new members, but non-member experts were consulted; quorum was generally achieved. Collaboration among ERCs was infrequent, but reviews conducted by external ERCs were recognized and validated. Review volume increased during the pandemic, with COVID-19-related studies being prioritized. Most protocol reviews were reported as taking less than three weeks. One-third of respondents reported external pressure on their ERCs from different stakeholders to approve or reject specific COVID-19-related protocols.

ERC members faced significant challenges to keep their committees functioning during the pandemic. Our findings can inform ERC approaches towards future public health emergencies. To our knowledge, this is the first international, COVID-19-related study of its kind.

Citation: Salamanca-Buentello F, Katz R, Silva DS, Upshur REG, Smith MJ (2024) Research ethics review during the COVID-19 pandemic: An international study. PLoS ONE 19(4): e0292512. https://doi.org/10.1371/journal.pone.0292512

Editor: Collins Atta Poku, Kwame Nkrumah University of Science and Technology, GHANA

Received: September 5, 2023; Accepted: March 23, 2024; Published: April 16, 2024

Copyright: © 2024 Salamanca-Buentello et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: All relevant data for this study are within the paper and its Supporting Information files. Additionally, the raw survey data are available from the figshare database (https://doi.org/10.6084/m9.figshare.24076704).

Funding: This study was funded by Canadian Institutes of Health Research grant #C150-2019-11 (https://cihr-irsc.gc.ca/e/193.html) awarded to MJS. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

The ethical review of research protocols during public health emergencies (PHEs) such as the COVID-19 pandemic is a daunting endeavour. Committees tasked with assessing the ethical acceptability of research projects, which we refer to as ethics review committees (ERCs) but which are also variably called research ethics boards, research ethics committees, ethics review boards, and institutional review boards, face the challenge of reviewing research protocols swiftly while maintaining a high degree of rigour, all under suboptimal conditions and uncertainty. ERCs must balance the need for rapid turnaround and flexibility with the requirement for intense scrutiny, given that new projects often propose innovative but high-risk diagnostic, therapeutic, or preventive approaches to address the PHE. This is especially challenging for countries with fragile health systems, poor infrastructure, and little experience conducting medical research, as well as for countries experiencing protracted emergencies [1–5].

Failure to ensure rigour and depth during rapid ethics reviews in public health emergencies may place research participants at risk [ 6 ]. In such challenging circumstances, ERCs must consider how interventions, study design, eligibility criteria, community engagement, and approaches to vulnerable populations impact scientific validity, participant autonomy, respect for persons, welfare, justice, and social value [ 2 , 7 – 9 ]. Additional demands on ERCs may include the ability to incorporate and respond swiftly to newly available knowledge, to provide monitoring and oversight of research, and to grapple with the impact of the PHE on those involved in the research process, such as research participants, investigators, and ERC members and staff [ 7 ].

Public health emergencies force ERCs to make reasonable adjustments and design innovative strategies to address the various components of research ethics review while still adhering to ethical principles [ 3 , 6 , 7 , 10 ]. Moreover, after a PHE, changes implemented to secure continued operations of ERCs must be evaluated to determine their success and whether they should be permanently put in place to improve the everyday functioning of the committees.

Given the challenges that ERCs worldwide faced during the COVID-19 pandemic, we aimed in this exploratory study to identify their experiences in attempting to adapt to this PHE. We were particularly interested in: the availability of pandemic-specific support; the promptness of protocol review; the volume of protocols received; the modifications to and innovations in operational procedures and policies, and the evaluation of their outcomes; the anticipated permanence of such changes beyond the pandemic; the presence of pressure from different stakeholders on ERCs; the efforts to ensure quorum; the changes to the composition of ERCs; and the approaches to strengthening inter-ERC collaboration. To our knowledge, this is the first international, COVID-19-related study of its kind.

Materials and methods

This international, cross-sectional, exploratory online survey was conducted by researchers from Western University, the University of Toronto, and the Lunenfeld–Tanenbaum Research Institute in Canada, and the University of Sydney in Australia, in collaboration with the World Health Organization’s COVID-19 Ethics and Governance Working Group.

Inclusion criteria

We used targeted purposive and criterion sampling to invite Chairs and members of ERCs who were actively involved in the ethics review of COVID-19 research protocols to participate in this study. To ensure eligibility of participants, the first question of the survey asked respondents to confirm whether they had reviewed COVID-19-related research protocols during the pandemic. Responding to our survey was entirely voluntary. For the purposes of this study, we considered March 2020 as the beginning of this PHE. We specifically targeted individuals from all WHO regions. Participants were assigned to either of two categories: high-income countries (HICs) or low- and middle-income countries (LMICs), according to their reported country of residence. To do this, we used the World Bank classification of countries ( https://datahelpdesk.worldbank.org/knowledgebase/articles/906519-world-bank-country-and-lending-groups ), which is based on gross national income per capita. We adopted this widely used categorization notwithstanding its limitations in terms of hiding power imbalances and reducing important differences among countries to questions of economics [ 11 ].
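The assignment of respondents to income categories described above amounts to a simple lookup against the World Bank classification. The sketch below illustrates the idea; the country-to-group mapping is an assumed, abbreviated subset for illustration only, not the official World Bank table:

```python
# Illustrative sketch of assigning respondents to income groups.
# The mapping below is an assumed subset of the World Bank country
# classification, not the official table.
WORLD_BANK_GROUPS = {
    "Canada": "HIC",
    "Australia": "HIC",
    "United Kingdom": "HIC",
    "Kenya": "LMIC",
    "India": "LMIC",
    "Argentina": "LMIC",  # upper-middle income, grouped under LMIC here
}

def income_group(country: str) -> str:
    """Return 'HIC' or 'LMIC' for a reported country of residence."""
    return WORLD_BANK_GROUPS.get(country, "Unknown")

# Hypothetical respondent countries, categorized for stratified analysis.
respondents = ["Canada", "Kenya", "India", "United Kingdom"]
groups = [income_group(c) for c in respondents]
# groups == ['HIC', 'LMIC', 'LMIC', 'HIC']
```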

Survey questionnaire

The complete questionnaire is available as S1 Appendix . The overall structure and flow of the survey questionnaire, which consisted of a main “trunk” of 37 items organized into 11 thematic categories, is shown in Fig 1 . As the figure shows, eight of these items branched into different survey flow elements based on respondents’ answers; seven of these eight items branched into elements containing questions (six of them contained two questions each). Thus, in total, the questionnaire, written in English, included 50 questions. We favoured closed-ended over open-ended questions, but allowed respondents to provide additional comments for some items. We pilot-tested the online questionnaire with a small group of experts who fulfilled the inclusion criteria, which helped us polish the wording of the questions and assess and improve the logistics of administering the survey.

[Fig 1. Overall structure and flow of the survey questionnaire. https://doi.org/10.1371/journal.pone.0292512.g001]
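The question count reported above is internally consistent: 37 trunk items, plus six branched elements with two questions each and one with a single question, total 50. A quick arithmetic check:

```python
# Question count of the survey described above.
trunk_items = 37              # main "trunk" of the questionnaire
branching_with_questions = 7  # of the 8 branching items, 7 led to questions
two_question_branches = 6     # six of those contained two questions
one_question_branches = branching_with_questions - two_question_branches

total_questions = trunk_items + 2 * two_question_branches + one_question_branches
assert total_questions == 50  # matches the reported total
```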

Data collection

The invitation to participate in the survey explained the nature and purpose of our study and the inclusion criteria used to select participants, summarized the procedures involved, and provided the URL for the survey. Invitations were initially distributed by email by the WHO’s COVID-19 Ethics and Governance Working Group through the email listserv of the 13th Global Summit of National Ethics Committees (an event that took place in September 2022). The Working Group identified additional potential participants among its extensive contact networks. We also circulated the invitation to experts identified by the research team. Invitations could also be forwarded to individuals designated by ERCs.

Our survey was active from October 11, 2022, to February 28, 2023. We used the Qualtrics Experience Management (XM) online platform to administer the questionnaire, which was open only to individuals who received the invitation with the link to the survey.

Data analysis

The analysis of the findings of this exploratory study employed descriptive statistics, stratifying comparisons between the responses of participants from HICs and those of participants from LMICs. To facilitate examination of the results, we prepared tables showing the number and percentage of respondents from HICs and LMICs who answered each question in the survey. Qualitative data (descriptive responses to open-ended questions) were evaluated using thematic analysis and the constant comparative method.
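The stratified tabulation described above (counts and percentages per answer, split by income group) can be sketched as follows; the respondent records here are invented for illustration, not actual survey data:

```python
from collections import Counter

# Hypothetical per-respondent records: (income_group, answer) pairs for
# one survey item. These example data are invented for illustration only.
responses = [
    ("HIC", "Yes"), ("HIC", "No"), ("HIC", "Yes"),
    ("LMIC", "Yes"), ("LMIC", "No"),
]

def stratified_table(rows):
    """Count and percentage of each answer, stratified by income group."""
    table = {}
    for group in {g for g, _ in rows}:
        answers = [a for g2, a in rows if g2 == group]
        counts = Counter(answers)
        n = len(answers)
        table[group] = {a: (c, round(100 * c / n, 1)) for a, c in counts.items()}
    return table

table = stratified_table(responses)
# table["HIC"]["Yes"] == (2, 66.7): 2 of 3 HIC respondents answered "Yes"
```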

Research ethics approval

Our study received approval from Western University’s Non-Medical Research Ethics Board (Protocol ID 120455). Additionally, it was evaluated by the World Health Organization Research Ethics Review Committee (Protocol ID CERC.0181) and was exempted from further review. The use of the Qualtrics platform facilitated data collection and management while respecting the privacy and confidentiality of participants. Respondents indicated their consent to participate in our survey by selecting a button labelled “I consent” at the end of the letter of information and consent, which appeared on the first page of the questionnaire. Responses were anonymous to protect participants’ privacy and confidentiality and encourage the open sharing of experiences.

Reporting survey results

Although there are no universally agreed-upon reporting standards for surveys comparable to those for clinical trials and meta-analyses, the EQUATOR (Enhancing the QUAlity and Transparency Of health Research) Network proposed a checklist for the reporting of survey studies in 2021 [ 12 ]. EQUATOR has been responsible for the development of several such standards, including the Consolidated Standards of Reporting Trials (CONSORT) for randomized controlled trials; Strengthening the Reporting of Observational studies in Epidemiology (STROBE) for observational studies; and Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) for systematic reviews and meta-analyses. Although the checklist is not yet a globally recognized official standard, it is useful, and we have ensured that our manuscript fulfills all of its reporting requirements.

Results

Characterization of survey respondents

Two hundred and eighty-one individuals opened our survey. Of these, 250 answered the first question, which confirmed whether respondents fulfilled our inclusion criteria, and with which we could confirm their eligibility. Forty-three individuals explicitly indicated that they did not meet our criteria. Thus, the initial number of suitable respondents was 207. As expected in surveys such as ours in which participants are allowed to skip questions, the number of respondents per question varied slightly, from a maximum of 207 to a minimum of 147.
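The respondent funnel reported above can be verified with simple arithmetic:

```python
# Respondent funnel, as reported in the text.
opened_survey = 281
answered_screening_question = 250  # confirmed eligibility status
explicitly_ineligible = 43

initial_suitable_respondents = answered_screening_question - explicitly_ineligible
assert initial_suitable_respondents == 207  # matches the reported figure
```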

Of the 204 participants who indicated their sex / gender, 120 (58.8%) were female, 82 (40.2%) were male, one (0.5%) preferred to self-describe, and one (0.5%) preferred not to disclose this information ( Box 1 , Table a ). The proportion of females was higher in HICs (64.9%) than in LMICs (47.9%); thus, the distribution of respondents by sex / gender was more balanced in LMICs than in HICs. As shown in Box 1 , Table b , more than three quarters of respondents (77.9%) were 45 years old or older. This was true for both HICs and LMICs. Most respondents provided ethics review for national bodies such as national ethics committees or national public health organizations; more than a quarter participated in ERCs linked to academic or research institutions ( Box 1 , Table c ). However, while almost half of respondents from HICs were members of ERCs affiliated with national bodies, only one quarter of participants from LMICs provided ethics review for such organizations. In contrast, in LMICs, 40% of respondents were members of ERCs associated with academic or research institutions. Furthermore, only 20% of participants from LMICs and 13.9% of participants from HICs provided ethics review for health care facilities.

Box 1. Characterization of survey participants

[Box 1 tables. https://doi.org/10.1371/journal.pone.0292512.t001]

In terms of the WHO region for which ethics review was provided, all regions were represented in our survey ( S1 Table in S2 Appendix ). More than one third of respondents reviewed research protocols from Europe, almost one fifth from the Americas, one tenth from Africa, and less than one tenth each from the other WHO regions.

Table 1 shows the number of respondents by country of residence. Participants from 48 countries (19 HICs and 29 LMICs) responded to our survey. Of the 203 individuals who indicated their country of residence, 130 (64%) were from HICs and 73 (36%) from LMICs. There was a large contingent of respondents from the UK (93).

[Table 1. https://doi.org/10.1371/journal.pone.0292512.t002]

Two thirds of respondents had six or more years of experience as ERC members. This is true for participants from both HICs and LMICs ( S2 Table in S2 Appendix ).

As shown in S3 Table in S2 Appendix , about one half of respondents (52%) were involved in only one ERC. This pattern was common for participants from HICs and LMICs. However, more than one third of respondents from HICs participated in three or more ERCs during the COVID-19 pandemic. Of those who indicated involvement with multiple ERCs, close to one half specified that such participation was simultaneous ( S4 Table in S2 Appendix ).

Support for the operation of ERCs during the pandemic

As shown in S5 Table in S2 Appendix , an overwhelming majority (78.4%) of respondents indicated that their ERCs received no additional support for the operation of their committees during the pandemic. This lack of support was more pronounced in the case of ERCs in LMICs. For the minority of ERCs that did receive support, this consisted mainly of administrative and human resources, with one quarter of respondents from LMICs stating that their ERCs also received financial support, in contrast to only 12.5% of those from HICs ( S6 Table in S2 Appendix ). In terms of specific areas supported, participants from both HICs and LMICs mentioned teleconferencing and virtual meeting capabilities, information technology, support staff, assistance for ERC reviewers, and training of ERC members ( Table 2 ). Interestingly, while 20% of respondents from HICs chose ERC support staff as one of the areas that received assistance, only 7.5% of those from LMICs did.

[Table 2. https://doi.org/10.1371/journal.pone.0292512.t003]

In their descriptive responses, participants alluded to support for covering the costs of using online platforms for meetings and protocol review, and for acquiring or upgrading hardware such as laptops and webcams. In one ERC, members were able to claim the costs of setting up teleconferencing and of telephone calls when dialling into a meeting. In other ERCs, information technology training was offered, along with technical support for the use of online platforms. It is important to note that almost half of respondents from HICs, but nearly all (91.4%) of those from LMICs, indicated that their ERCs lacked any pre-pandemic financial planning that included provisions for supporting the committees during a public health emergency ( S7 Table in S2 Appendix ).

Modification of existing procedures or policies

Respondents from both HICs and LMICs overwhelmingly (more than 75% of participants in both cases) reported that their ERCs modified existing procedures or policies to operate during the pandemic ( S8 Table in S2 Appendix ). The most frequently modified domain was meeting logistics, followed by meeting frequency and procedures for protocol review and approval ( S9 Table in S2 Appendix ).

In terms of modifications to review procedures, several participants pointed out in their descriptive responses that their ERCs fast-tracked the review of pandemic-related studies, shortening the timeline to review and approve protocols. ERC members were expected to complete the review of these protocols within a few days or, in some cases, within 24 hours. To facilitate such a quick turnaround, some ERCs created special sub-committees that would conduct very fast protocol reviews. Moreover, participants emphasized the importance of simplifying and increasing the flexibility of administrative processes. For example, several respondents indicated that their ERCs switched entirely to the use of online platforms for protocol review, eliminating the need for paper documents.

Numerous participants stated that all ERC meetings were conducted virtually (as opposed to face-to-face) during the pandemic, which, in their view, enabled ERC members and researchers to participate regardless of geographical location, prevented contagion, and allowed rapid turnaround of reviews. Even in the case of virtual sessions, all other full meeting requirements such as quorum had to be met. Some ERCs modified their meetings to open a permanent slot in their agendas for COVID-19-related research or added urgent full meetings to discuss top-priority pandemic-related trial protocols. In other cases, members were permanently available to review COVID-19-related protocols, with those pertaining to other topics addressed less frequently.

While most respondents acknowledged the advantages of using online platforms during the pandemic to organize ERC meetings and review research protocols, several highlighted the challenges that these technologies entailed, particularly for new and more senior ERC members who felt uncomfortable using them. Some individuals lamented the loss of quality in the dynamics among ERC members (stilted conversations, fewer informal interactions) relative to face-to-face meetings. For some, resistance to working online was compounded by difficulties accessing the internet and the lack of adequate electronic devices.

Regarding the modification of protocol requirements, respondents mentioned adding safety procedures for study participants and members of the research teams, facilitating remote documentation of consent, and changing the policies regarding the use of non-anonymized data from health service and public health records for the duration of the pandemic to allow more unrestrained use of data. Some ERCs transitioned from requiring the physical signature of conflict-of-interest declaration forms to an email declaration.

As shown in Table 3 , only a minority of respondents indicated that their ERCs conducted a formal evaluation of the success or failure of modifying existing procedures or policies (28% of participants from HICs and 17% of those from LMICs). More than one quarter of respondents did not know whether such modifications had been assessed.

[Table 3. https://doi.org/10.1371/journal.pone.0292512.t004]

Design and implementation of new procedures and policies

Almost two-thirds of respondents from both HICs and LMICs reported that their ERCs had designed and implemented new procedures and policies to address the challenges brought about by the pandemic ( S10 Table in S2 Appendix ). As in the case of modifications to ERC processes, innovations occurred mainly in the areas of meeting logistics and frequency, and of procedures for protocol review and approval ( S11 Table in S2 Appendix ). This was the case for ERCs in both HICs and LMICs.

In their descriptive responses, participants mentioned the development and implementation of new standard operating procedures (SOPs) and the formation of ad hoc committees, some including specialists, for urgent, accelerated protocol review. Such fast-track ERCs could review studies in one or two days, considerably shortening the time to complete reviews. One respondent considered the most successful innovation to be the formation of a “pool” of committee members ready to be convened at very short notice to quickly review COVID-19-related protocols. Such an ad hoc committee enabled applications to be reviewed and turned around very quickly. Of note, survey participants did not explicitly specify in their descriptive responses whether these ad hoc committees were composed exclusively of existing ERC members or whether external experts and specialists were invited to take part in them. Similarly, respondents did not comment on whether existing SOPs contemplated the creation of ad hoc committees, on how these entities were governed, or on the modifications made to SOPs to allow their formation.

The proportion of ERCs that formally evaluated the success or failure of new procedures and policies was analogous to that described for modifications to SOPs. Table 4 shows that just 37% of respondents from HICs and 21% of those from LMICs reported that their ERCs conducted such an evaluation.

[Table 4. https://doi.org/10.1371/journal.pone.0292512.t005]

Permanently putting into effect modifications and innovations

A substantial majority of respondents (almost three quarters of those from HICs and more than four-fifths of those from LMICs) stated that many of the modifications and innovations to operating procedures implemented during the pandemic should be permanently put into effect ( S12 Table in S2 Appendix ), particularly in the areas of meeting logistics and frequency, procedures for protocol review and approval, and training of ethics review committee members in new or modified procedures ( S13 Table in S2 Appendix ). Several participants argued in their descriptive responses that virtual online meetings should be a permanent feature of ERC operations, as they increase efficiency and avoid many of the disadvantages of face-to-face meetings. Another recommendation was to allow ad hoc committees to be convened during times of increased demand. Similarly, respondents emphasized the relevance of facilitating the incorporation of new expert members into ERCs as required. However, 20% of participants from HICs and 50% of those from LMICs indicated that their ERCs had no support to permanently implement modifications or innovations established during the COVID-19 pandemic ( S14 Table in S2 Appendix ).

Policies, procedures, and guidelines for public health emergencies

It is noteworthy that almost half of respondents from HICs and three-quarters of participants from LMICs indicated that their ERCs did not have internal policies, procedures, or guidelines before the pandemic that could orient members regarding the functioning of the committees during PHEs ( S15 Table in S2 Appendix ). Regarding the use of internal guidelines, some ERCs adapted existing documents, while others developed entirely new procedures. In the absence of specific internal guidelines, some SOPs explicitly privileged expedited review during health crises.

In contrast to the widespread absence of internal guidelines, the ERCs of one quarter of respondents from HICs and of almost half of those from LMICs used external guidelines not developed by their committees to govern their operation during the pandemic ( S16 Table in S2 Appendix ). Members of several committees referred to publicly available national and international guidelines. A selection of the most consulted documents appears in Box 2 .

Box 2. National and international external guidelines* that survey respondents reported were used by their ERCs to manage operations during the COVID-19 pandemic

International health organizations.

• Council for International Organizations of Medical Sciences, & World Health Organization (2016). International Ethical Guidelines for Health-related Research Involving Humans (Fourth Ed.). Council for International Organizations of Medical Sciences. https://doi.org/10.56759/rgxl7405

• Pan-American Health Organization (2020). Guidance for ethics oversight of COVID-19 research in response to emerging evidence. https://iris.paho.org/handle/10665.2/53021

• Pan-American Health Organization (2020). Guidance and strategies to streamline ethics review and oversight of COVID-19-related research. https://iris.paho.org/handle/10665.2/52089

• Pan-American Health Organization (2020). Template and operational guidance for the ethics review and oversight of COVID-19-related research. https://iris.paho.org/handle/10665.2/52086

• Pan-American Health Organization (2022). Catalyzing ethical research in emergencies. Ethics guidance, lessons learned from the COVID-19 pandemic, and pending agenda. https://iris.paho.org/handle/10665.2/56139

• Red de América Latina y el Caribe de Comités Nacionales de Bioética—United Nations Educational, Scientific and Cultural Organization (UNESCO) (2020). Ante las investigaciones biomédicas por la pandemia de enfermedad infecciosa por coronavirus Covid-19. https://redbioetica.com.ar/wp-content/uploads/2020/03/Declaracion-RED-ALAC-CNBS-Investigaciones-Covid-19.pdf

• World Health Organization (2016). Guidance for managing ethical issues in infectious disease outbreaks. World Health Organization. https://apps.who.int/iris/handle/10665/250580

• World Health Organization (2020). Key criteria for the ethical acceptability of COVID-19 human challenge studies. https://apps.who.int/iris/handle/10665/331976

• World Health Organization (2020). Guidance for research ethics committees for rapid review of research during public health emergencies. https://apps.who.int/iris/handle/10665/332206

• World Health Organization (2020). Ethical standards for research during public health emergencies: distilling existing guidance to support COVID-19 R&D. https://apps.who.int/iris/handle/10665/331507

Bioethics centres

• Nuffield Council of Bioethics (2020). Ethical considerations in responding to the COVID-19 pandemic. https://www.nuffieldbioethics.org/assets/pdfs/Ethical-considerations-in-responding-to-the-COVID-19-pandemic.pdf

• The Hastings Center: Berlinger N et al. (2020). Ethical Framework for Health Care Institutions Responding to Novel Coronavirus SARS-CoV-2 (COVID-19). Guidelines for Institutional Ethics Services Responding to COVID-19. https://www.thehastingscenter.org/ethicalframeworkcovid19/

Scientific publications mentioned by respondents

• Saxena et al. (2019). Ethics preparedness: facilitating ethics review during outbreaks—recommendations from an expert panel. https://bmcmedethics.biomedcentral.com/articles/10.1186/s12910-019-0366-x

National guidelines

• Resolución 908/2020. Ministerio de Salud de Argentina. https://www.argentina.gob.ar/normativa/nacional/resoluci%C3%B3n-908-2020-337359/texto

• Normativas da Comissão Nacional de Ética em Pesquisa: http://conselho.saude.gov.br/normativas-conep?view=default

• Consejo Nacional de Investigación en Salud de Costa Rica (CONIS) (2020). COMUNICADO 2: Recomendaciones para realizar investigación biomédica durante el periodo de la emergencia sanitaria por COVID-19 en Costa Rica. https://www.ministeriodesalud.go.cr/gestores_en_salud/conis/circulares/comunicado_cec_oac_oic_20082020.pdf

El Salvador

• Comité Nacional de Ética de la Investigación en Salud de El Salvador (2015). Manual de procedimientos operativos estándar para comités de ética de la investigación en salud. https://www.cneis.org.sv/wp-content/uploads/2018/07/MANUAL-CNEIS.pdf

• Indian Council of Medical Research (2017). National ethical guidelines for biomedical and health research involving human participants. https://ethics.ncdirindia.org/asset/pdf/ICMR_National_Ethical_Guidelines.pdf

• Indian Council of Medical Research (2020). National guidelines for ethics committees reviewing biomedical & health research during COVID-19 pandemic. https://main.icmr.nic.in/sites/default/files/guidelines/EC_Guidance_COVID19_06_05_2020.pdf

• Kenya Medical Research Institute Scientific and Ethics Review Unit (2019). KEMRI SERU guidelines for the conduct of research during the covid-19 pandemic in Kenya. https://www.kemri.go.ke/wp-content/uploads/2019/11/KEMRI-SERU_GUIDELINES-FOR-THE-CONDUCT-OF-RESEARCH-DURING-THE-COVID_8-June-2020_Final.pdf

• Garis Panduan Pengurusan COVID-19 di Malaysia No.5 [COVID-19 Management Guidelines in Malaysia No. 5] (2020). Ministry of Health of Malaysia. https://covid-19.moh.gov.my/garis-panduan/garis-panduan-kkm

• Government of Pakistan National COVID Command and Operation Center (NCOC) Guidelines (2020). [No longer available; the NCOC ceased operations on April 1, 2022.]

South Africa

• Department of Health, Republic of South Africa (2015). Ethics in Health Research: Principles, Processes and Structures (2d Ed). https://www.sun.ac.za/english/research-innovation/Research-Development/Documents/Integrity%20and%20Ethics/DoH%202015%20Ethics%20in%20Health%20Research%20-%20Principles,%20Processes%20and%20Structures%202nd%20Ed.pdf

South Korea

• Government of the Republic of Korea (2014). Bioethics and Safety Act (Act No. 12844). https://elaw.klri.re.kr/eng_mobile/viewer.do?hseq=33442&type=part&key=36

United Kingdom

• United Kingdom Health Departments / Research Ethics Service (2022). Standard Operating Procedures for Research Ethics Committees (Version 7.6). https://www.hra.nhs.uk/documents/3090/RES_Standard_Operating_Procedures_Version_7.6_September_2022_Final.pdf . [In particular, several respondents from the UK mentioned Section 9 of this document, which addresses expedited review in situations such as public health emergencies.]

• Health Research Authority (2020). https://www.hra.nhs.uk/approvals-amendments/

• Health Research Authority (2020). https://www.hra.nhs.uk/covid-19-research/covid-19-guidance-sponsors-sites-and-researchers/

• Department of Health and Social Care (2020). Coronavirus (COVID-19): notification to organisations to share information. https://www.gov.uk/government/publications/coronavirus-covid-19-notification-of-data-controllers-to-share-information

* We defined “external guidelines” as those not developed internally by participants’ ERCs

Changes in workload

Respondents stated that the workload of ERC members increased considerably during the pandemic, both because of the larger number of protocols reviewed and because of the urgency with which COVID-19-related studies had to be approved. More than half of participants indicated that the volume of protocols received for review increased, both for studies assigned to delegated / expedited review and for protocols that underwent full review ( S17 Table in S2 Appendix ). The increase in volume had unexpected consequences. For example, in one HIC, the number of applicants summoned to discuss their protocols with ERCs in online meetings rose in proportion to the volume of protocols submitted. In another case, ERC members were burdened with additional tasks, such as working closely with the investigators of rejected COVID-19 protocols to improve their applications until they could be approved.

In terms of the time it took ERCs to process and approve protocols during the pandemic, participants confirmed in their descriptive responses that the turnaround time for ERC review was markedly shortened, from weeks or even months to just a few days. In general, more than half of survey participants indicated that, before the pandemic, the duration of the review process, from initial submission to full approval, was between three and eight weeks ( S18 Table in S2 Appendix ). In contrast, during the pandemic, this process was substantially reduced to less than two weeks for both delegated / expedited review and full review, although this decrease was more pronounced in HICs than in LMICs ( S19 Table in S2 Appendix ). Unsurprisingly, the approval of COVID-19-related research protocols was faster than that of non-COVID-19 studies. More than two-thirds of respondents indicated that delegated / expedited review of COVID-19-related protocols took less than five weeks, as did more than half of full reviews; the process took longer in LMICs, however ( S20 Table in S2 Appendix ). By contrast, protocol review was slightly longer for non-COVID-19 studies, except for full reviews in LMICs, which participants reported took from three to more than 12 weeks ( S21 Table in S2 Appendix ).

Presence of external pressure on ERCs

While only 14% of respondents from HICs reported that their ERCs were subjected to external pressure to approve or reject research protocols, one third of participants from LMICs (34%) faced such a challenge ( S22 Table in S2 Appendix ). The demand mentioned most frequently involved pressure to rush studies through the review process at the expense of proper examination and ethical oversight, especially in the case of COVID-19 vaccine clinical trials. Some participants highlighted their defence of the autonomy of their ERCs in the face of external influences, using, for example, research policies developed and implemented specifically for the pandemic as a tool for transparent decision-making and as a safeguard against external pressures. One ERC successfully resisted government pressure to approve protocols related to a domestic PCR test, human trials of locally developed ventilators, and a placebo-controlled vaccine trial proposed despite the existence of six emergency-authorized vaccines and ongoing mass vaccination.

While some respondents acknowledged that entities such as national governments were understandably impatient for preventive, diagnostic, and therapeutic measures to combat the pandemic, they still emphasized the need for proper and thorough review of research protocols. One respondent stated that institutional authorities that favoured or sponsored certain studies sought their immediate approval and viewed ERCs as inconvenient hindrances to achieving this goal. Several participants described instances in which ERCs, particularly in LMICs, were pressured to approve alternative medicine clinical trials.

Types of COVID-19 protocols reviewed by ERCs

Given the range of challenges brought about by the COVID-19 pandemic, it was interesting to determine the proportion of protocols received by ERCs according to the research area in which they could be classified, namely, diagnostics, therapeutics, vaccines, pharmacovigilance, or other topics such as behavioural research. Our results suggest that between one-half and two-thirds of ERCs received from one to 10 studies in each area ( Table 5 ). In other words, all areas of COVID-19 research were covered in the protocols submitted to ERCs of both HICs and LMICs. However, between one-third and one-half of respondents could not classify the protocols received by their ERCs (perhaps because such information was not tracked).


Prioritization of protocols for ethics review

Overwhelmingly, and as expected, participants reported that their ERCs considered COVID-19-related protocols urgent and thus prioritized their review and approval over that of others, particularly in terms of expediting the review of these studies and privileging their discussion during committee meetings. More than three quarters of respondents from HICs and almost two-thirds of those from LMICs indicated that their ERCs gave priority to COVID-19-related studies ( S23 Table in S2 Appendix ). In fact, in one case, an ERC stopped reviewing non-COVID-19-related protocols altogether. Some ERCs gave precedence to the review of COVID-19-related studies according to the priorities determined by their national governments. Others were assigned studies by an ad hoc national entity that triaged the research protocols. Interestingly, however, as shown in S23 Table in S2 Appendix , 15% of respondents from HICs and 27% of those from LMICs stated that their ERCs did not give priority to pandemic-related studies.

Furthermore, our results show that almost one-third of respondents from HICs and almost half of those from LMICs indicated that, for their ERCs, the review of some types of COVID-19-related studies took precedence over that of others ( S24 Table in S2 Appendix ). In their descriptive responses, participants explicitly mentioned prioritizing clinical trials, particularly those focused on COVID-19 vaccine development and safety monitoring; studies related to therapeutic agents for the treatment of COVID-19; protocols about diagnostics and prognostic factors; epidemiological studies, including those related to the natural history of COVID-19 and serosurveillance; and research affecting public health policy. In the case of one ERC in a HIC with very low infection rates resulting from successful public health measures, priority was given to vaccine trials and observational research on vaccine monitoring and community incidence.

Membership of ERCs during the pandemic

One of the main challenges that ERCs worldwide faced during the pandemic was making certain that the number and expertise of their members enabled the efficient operation of the committees under such demanding circumstances. Most survey respondents indicated that their ERCs were able to ensure quorum (80% of participants from HICs, but only 60% of those from LMICs); however, one-third of respondents from LMICs stated that quorum in their ERCs was infrequently met ( S25 Table in S2 Appendix ). Two-thirds of participants from HICs and three quarters of those from LMICs reported that their committees had taken measures to ensure continuity of adequate review of research protocols in case existing members became unavailable due to the pandemic ( S26 Table in S2 Appendix ).

ERCs in both HICs and LMICs invited new members or appointed alternates to ensure quorum and the inclusion of individuals with appropriate expertise. Yet, consulting expert non-members seems to have been preferred to incorporating new individuals into the committee. Only 13% of respondents from HICs and 24% of those from LMICs indicated that their ERCs had added new members to accelerate protocol review during the pandemic ( S27 Table in S2 Appendix ). Similarly, 11% of participants from HICs and 37% of those from LMICs added new members with specific expertise ( S28 Table in S2 Appendix ). In contrast, almost one-third of individuals from HICs, but close to two-thirds of those from LMICs, stated that their committees had consulted expert non-members to address novel areas of research or to provide enhanced scrutiny of research protocols ( S29 Table in S2 Appendix ). In their descriptive responses, participants expressed that, in some cases, ERCs incorporated new members who were available at short notice and comfortable with the use of online platforms for meetings and protocol review. A similar approach consisted of integrating virtual ad hoc committees solely to review COVID-19-related protocols. For some ERCs, national legislation complicated obtaining additional support or adding new members. Another complicating factor was that the clinical responsibilities of members directly involved in the care of COVID-19 patients soared, hindering their participation in committee meetings. One participant reported that some ERC members could not fulfill their duties in their respective ERCs because they had become highly sought-after “media celebrity” experts.

Survey respondents suggested that it would be worthwhile to assess the psychological and emotional challenges that ERC members faced when having to evaluate protocols using new, unfamiliar procedures under extreme pressure. Also, it is worth reiterating that, according to several participants, many ERC members, particularly older ones, deplored the loss of features common to face-to-face meetings, such as a warmer, more informal and welcoming environment that favoured interpersonal interactions. Other respondents expressed their desire for constructive and supportive feedback and for more appreciative and generous gestures of gratitude for the extraordinary efforts of ERCs. However, a few participants considered that being able to respond in a useful way to a public health crisis as ERC members was in itself very gratifying and validating.

National and international collaboration

While 38.5% of respondents from HICs and 40% of those from LMICs reported the presence of national and international collaboration among ERCs to standardize emergency operations and procedures during the pandemic, almost one-third of participants from HICs were unsure about the existence of such collaboration ( S30 Table in S2 Appendix ). Almost half of respondents from HICs, but more than two-thirds of those from LMICs, indicated that their ERCs did not have strategies to harmonize multiple review processes ( S31 Table in S2 Appendix ). Most participants (55% of those from HICs and 63% of those from LMICs) reported that their ERCs relied on established procedures to recognize and validate research protocol reviews conducted by other committees ( S32 Table in S2 Appendix ). About one half of respondents from HICs, but almost two-thirds of those from LMICs, affirmed that their ERCs collaborated with scientific committees that pre-reviewed or prioritized pandemic-related research protocols ( S33 Table in S2 Appendix ).

Almost 50% of participants from HICs, but a little more than a third of those from LMICs, reported the presence of centralized ethics review of research protocols for multicentre studies related to COVID-19 ( S34 Table in S2 Appendix ). Conversely, one-third of respondents from HICs, but more than two-thirds of those from LMICs, stated that their ERCs did not consider the formation of Joint Scientific Advisory Committees, Data Safety Review Committees, Data Access Committees, or a Joint Ethics Review Committee integrated with representatives of the ethics committees of all institutions and countries involved in COVID-19-related research ( S35 Table in S2 Appendix ).

In their descriptive responses, participants noted the need for better inter-ERC collaboration and communication at the national and international levels to share successful strategies and avoid effort duplication. A case of very successful national inter-ERC collaboration is worth mentioning. Respondents from one particular LMIC stated that, given the critical and unforeseen absence of the national entity responsible for health research ethics during the pandemic, ERCs throughout the country joined forces to create an ad hoc spontaneous informal national network of all ERC chairs and co-chairs (it also included members of the national drug regulator) to strengthen mutual support, enhance communication among ERCs, identify best practices, and share academic and ethics resources.

Discussion

ERCs faced considerable challenges during the COVID-19 pandemic. Demands were placed on them to urgently review an increased volume of protocols while maintaining rigour, all under suboptimal conditions and uncertainty. Yet our findings suggest that ERCs reviewed a greater volume of protocols, and did so faster, than before the pandemic. Against this backdrop, our results also reveal that, despite billions of dollars having been invested in the research and development (R&D) ecosystem to support the COVID-19 research response, little to no additional resources were directed to ERCs to support or expedite their functions. This should be particularly sobering for those who complain that ERCs are an “obstacle” to research [ 13 – 16 ]. It may also help to explain other challenges experienced by ERCs during the pandemic, such as the absence of internal policies or guidelines for adapting to a PHE, the collateral damage sustained from deprioritizing non-COVID-19 protocols, and the pressure to rush protocols through review.

Our finding that ERCs wish to sustain many of the modifications made to their operations during the COVID-19 pandemic should be interpreted in light of the fact that ERCs also report having received little or no support during the pandemic, as well as scant support for maintaining any modifications they would like to make permanent. The research ethics ecosystem is expected to learn from this experience and enhance operations for future threats, but it is difficult to see how this will be possible without significant investment. While no one seems to disagree that the research ethics ecosystem should strive for greater efficiency and collaboration, especially during PHEs, investments are required to achieve these aims. Simply put, the experience of ERCs during the COVID-19 pandemic, while herculean in many respects, was a function of necessity and is unlikely to be sustainable.

Extant literature reporting the challenges faced by ERCs during the COVID-19 pandemic is scant and tends to be limited to the early phases of this PHE. Most studies published on this topic are confined to single countries or geographical regions, with only one study including 14 countries in Africa, Asia, Australia, and Europe [ 17 ]. Several of these contributions focus exclusively on one ERC, usually associated with an academic or health care institution. The literature includes descriptions of ERC operations during the pandemic in Central America and the Dominican Republic [ 18 ], China [ 19 ], Ecuador [ 20 ], Egypt [ 21 ], Germany [ 22 ], India [ 23 – 25 ], Iran [ 10 ], Ireland [ 26 ], Kenya [ 27 ], Kyrgyzstan [ 28 ], Latin America [ 29 ], the Netherlands [ 30 ], Pakistan [ 31 ], South Africa [ 32 , 33 ], Turkey [ 34 ], and the United States [ 35 – 37 ]. Most of these studies reported results from surveys, interviews, focus groups, and documentary analysis, including review of research protocols, ERC meeting minutes, and existing SOPs. Participants usually consisted of ERC chairpersons and members, clinical and biomedical researchers, institutional representatives, and laypeople. Most studies based on surveys and interviews included fewer than 30 respondents, with only some having more than 100 participants.

Our findings agree with this literature. Given that our study is truly global in scope, it considerably broadens what is known about the operation of ERCs during the COVID-19 pandemic and clears a path towards greater consensus on strategies to prepare for and respond during future PHEs.

In this literature, several studies emphasize the lack of support and resources to operate during the pandemic. The vast majority of ERCs made numerous modifications to their SOPs. In particular, the use of online platforms for ERC meetings and for protocol review was ubiquitous. However, ERC members across studies pointed out several disadvantages of such platforms, including lack of familiarity and technical know-how, particularly in the case of more senior members of the committees. Only a few institutions provided training, equipment, and technical support for the use of these online platforms. Consistent with our findings, almost no ERCs in these studies reported having internal policies, procedures, or guidelines to operate during a PHE. National regulations on this topic, where available, were often unclear, contradictory, rapidly changing, vague, or difficult to interpret. Conversely, several ERCs availed themselves of international guidelines ( Box 2 ), in particular those prepared by WHO [ 38 – 41 ] and PAHO [ 42 – 45 ].

In terms of changes in workload, all ERCs in the studies mentioned earlier experienced a dramatic increase in the number of COVID-19-related protocols received, which had to be reviewed very quickly in the face of pressure for expedited approvals from researchers, institutions, governments, and the media. The surge in the volume of protocols, along with shortened timelines for turnaround, severely strained ERC members’ ability to conduct rigorous, thorough, high-quality assessments. Despite feeling overwhelmed, ERC members participating in these studies managed to fulfill their responsibilities, sometimes at great personal cost.

Given the urgency to examine and approve an ever-increasing number of COVID-19 research protocols, previous studies report several strategies implemented by ERCs worldwide to prioritize their review. This frequently meant that the assessment of non-COVID-19-related studies was postponed or even abandoned. Similarly, non-interventional COVID-19 protocols were given secondary importance. Prioritization of COVID-19 protocols by type of study was rare.

Despite numerous staffing challenges, most ERCs in the studies examined were able to ensure quorum. In some cases, their institutions provided training sessions to update committee members on the rapidly changing landscape of basic and clinical knowledge about COVID-19. A less frequently used approach was to incorporate new members with relevant expertise into the ERCs. One common strategy across different countries was the integration of ad hoc committees focused exclusively on the review of COVID-19-related protocols.

The topic of centralized review of pandemic-related research is rather contentious in this literature. While some studies report ERC members favouring such an approach, others consider that a single national ERC in charge of PHE-specific ethics review is bound to be unsuccessful due to the importance of local context in responding to PHEs. In Ecuador, forcing researchers to submit their protocols to a seven-member centralized ad hoc ERC caused considerable delays in the approval process and severely impeded the execution of COVID-19-related studies [ 20 ].

As shown in our results, in some countries ERCs strengthened collaboration networks during the pandemic. A notable case was the creation of a spontaneous, informal, ad hoc group in South Africa—the Research Ethics Support in COVID-19 Pandemic (RESCOP)—by ERC chairpersons and members as a response to the lack of national ethics guidance and the unexpected critical absence of the National Health Research Ethics Council at the most crucial moment in the pandemic [ 33 ]. This example highlights the clear need for national governance and oversight for research ethics to ensure accountability and responsiveness of ERCs [ 46 ].

A common topic of concern across ERCs in several countries was the set of unique challenges to obtaining informed consent during the pandemic, especially in the case of patients unable to give consent, such as those who were severely ill, isolated, or in the intensive care unit. Thus, it was necessary to find innovative alternative strategies to obtain consent.

Recommendations

The recommendations presented in Box 3 aim to strengthen the resiliency of ERCs during future public health emergencies. They are based upon our careful analysis of survey responses and thus articulate the concerns and expectations of the participants in our study. These recommendations closely align with the national and international guidelines listed in Box 2 , particularly with those published by WHO.

Box 3. Recommendations to strengthen the resiliency of ERCs during future public health emergencies

• Increase and assign an adequate proportion of the budgets of ethics review committees (ERCs) for:

○ their continued operation during public health emergencies (PHEs), especially in terms of online teleconferencing and review platforms

○ sustaining select modifications and innovations designed and implemented during PHEs

• Increase awareness of the value of ERCs in the research and development (R&D) ecosystem as a means of protecting research participants, ensuring social value, and promoting public trust in research outputs, rather than as a bureaucratic nuisance

• Evaluate the success or failure of modifications and innovations designed and implemented during PHEs

• Develop a “first aid kit” for each ERC that includes:

○ Existing external guidelines for committee operation during a PHE

○ Internal contingency plans designed by the ERC or its home institution that adapt existing external guidelines to local contexts

○ A directory of expert non-members available for consultation

○ Easy-to-follow checklists that incorporate the essential elements needed to function during a PHE

• Familiarize ERC members with the “first aid kit” through periodic capacity building activities

• Consider the psychological and emotional challenges that ERC members face during PHEs

• Devise strategies to defend and safeguard ERCs’ autonomy against external pressures

• Promote national and regional collaboration networks of ERCs that strengthen their resiliency during PHEs

• Facilitate collaboration between ERCs and scientific committees

Limitations and strengths

In terms of the limitations of our study, it would have been desirable to include participants from more countries, and a larger number of respondents from each country. “Pandemic fatigue” probably made a higher response rate difficult to achieve. Non-native English speakers, especially in LMICs, may have excluded themselves from our survey. Absent or unreliable internet access could also have limited participation, particularly in LMICs. The number of ERC members who provided ethics review for health care facilities was relatively low. Despite the anonymity of their answers, respondents may have been reluctant to share specific instances of external pressures impinging upon their ERCs. The large number of participants from the UK (93 out of 281) likely skewed the results from HICs towards experiences in the UK in particular.

We chose to present our results descriptively and did not perform any tests for statistically significant differences in responses. This was because we could not determine a denominator and therefore could not meet the requirements of many significance tests. Non-parametric tests could have been used, but we think reporting statistical significance in this context would not be informative. Non-response bias could also influence our results, although its effect may be non-differential, as our results cohere with the literature reported thus far.
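For illustration only, the kind of comparison deliberately left out of our results could be sketched as a Pearson chi-square test on a 2x2 table. The counts below are hypothetical: they assume, purely for the sake of the example, 100 respondents per income group, scaled to the 14% (HIC) versus 34% (LMIC) external-pressure figures reported above. Real denominators could not be determined, which is precisely why such a test was not reported.

```python
# Hypothetical sketch: Pearson chi-square statistic for a 2x2 contingency
# table [[a, b], [c, d]]. All counts below are invented for illustration.

def chi_square_2x2(a, b, c, d):
    """Return the Pearson chi-square statistic for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected counts under the null hypothesis of no association.
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# HIC: 14 reported external pressure, 86 did not; LMIC: 34 vs 66 (hypothetical).
stat = chi_square_2x2(14, 86, 34, 66)
print(round(stat, 2))  # prints 10.96, above the df=1 critical value of 3.84
```

With these invented denominators the statistic would exceed the 5% critical value, but the conclusion is entirely an artifact of the assumed sample sizes, which is why descriptive reporting was preferred.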

To our knowledge, this is the first examination at a global level of the challenges faced by ERCs during the COVID-19 pandemic, and the strategies used to address them. Also, our study compares for the first time several dimensions of the operation of ERCs during the pandemic between committees in HICs and those in LMICs. All WHO regions were represented in our study, as participants from 48 countries (19 HICs and 29 LMICs) responded to our survey. There was an adequate balance in terms of the sex/gender of respondents. Furthermore, the ample experience of the study participants as ERC members (two-thirds of respondents had six or more years of experience in this role) strengthens the generalizability of our findings. The recommendations suggested by the study participants are quite relevant to combating future public health emergencies. In general, all these strengths give credence to the validity, reliability, and accuracy of our results.

Supporting information

S1 Appendix. Qualtrics questionnaire.

https://doi.org/10.1371/journal.pone.0292512.s001

S2 Appendix. Supplementary tables.

https://doi.org/10.1371/journal.pone.0292512.s002

Acknowledgments

We are very grateful to all the members of ethics review committees from around the world who participated in our survey. We also want to express our gratitude to Andreas Reis and Katherine Littler of the Global Health Ethics and Governance Unit, World Health Organization, and to the following members of the World Health Organization COVID-19 Ethics and Governance Working Group, for their inputs on the survey instrument and manuscript: Aasim Ahmad, Thalia Arawi, Caesar Atuire, Oumou Bah-Sow, Anant Bhan, Ingrid Callies, Angus Dawson, Jean-François Delfraissy, Ezekiel Emanuel, Ruth Faden, Tina Garanis-Papadatos, Prakash Ghimire, Dirceu Greco, Calvin Ho, Patrik Hummel, Zubairu Iliyasu, Mohga Kamal-Yanni, Sharon Kaur, So Yoon Kim, Sonali Kochhar, Ruipeng Lei, Ahmed Mandil, Julian März, Ignacio Mastroleo, Roli Mathur, Signe Mežinska, Ryoko Miyazaki-Krause, Keymanthri Moodley, Suerie Moon, Michael Parker, Carla Saenz, G. Owen Schaefer, Ehsan Shamsi-Gooshki, Jerome Singh, Beatriz Thomé, Teck Chuan Voo, Jonathan Wolff, and Xiaomei Zhai.

  • 3. Council for International Organizations of Medical Sciences, World Health Organization. International Ethical Guidelines for Health-related Research Involving Humans. 4th ed. Geneva: Council for International Organizations of Medical Sciences; 2016.
  • 13. Schneider CE. The Censor’s Hand: The Misregulation of Human-Subject Research. 1st ed. Cambridge, MA: The MIT Press; 2015.
  • 38. World Health Organization. Guidance for managing ethical issues in infectious disease outbreaks. Geneva; 2016. Available: https://apps.who.int/iris/handle/10665/250580 .
  • 46. World Health Organization. WHO tool for benchmarking ethics oversight of health-related research with human participants (Draft). Geneva; 2022. Available: https://www.who.int/publications/m/item/who-tool-for-benchmarking-ethics-oversight-of-health-related-research-with-human-participants .

Research Ethics


Research Ethics provides a platform for sharing experiences and analysis of ethical issues that are related to the design, conduct, impact and oversight of research. Through open and transparent narrative and analysis of ethical issues in research, it serves to raise awareness, challenge assumptions and help find solutions for complex ethical issues.

Ethical issues are not limited to a specific discipline or type of research. The Editors welcome submissions from any research field (for instance, biomedical, social science, environmental, information technology, or the arts) and a broad range of methodological approaches (for instance, clinical trials, animal experimentation, qualitative studies, laboratory or desk-based research).

Some examples of the topics addressed in Research Ethics include:

  • Ethical issues related to the inclusion of vulnerable populations in research
  • Ethics dumping
  • Benefit sharing
  • Ethical issues related to social media research
  • Research integrity and research misconduct
  • The education and training of researchers and ethics reviewers
  • The development and implementation of governance mechanisms

In addition to these applied ethics issues, the journal also welcomes original theoretical papers that contribute to the debate around the normative underpinnings or ethical frameworks for research ethics.

Research Ethics publishes original papers and review articles as well as informative case studies and offers a home for submissions from authors from around the world. The quality of submitted articles is evaluated independently by double-blind peer review. This journal is a member of the Committee on Publication Ethics (COPE).

Research Ethics is aimed at readers and authors who are interested in ethical issues associated with the design and conduct of research, the regulation of research, the procedures and process of ethical review and issues related to scientific integrity. The journal aims to promote, inspire, host and engage in open and public debate about research ethics on an international scale and to contribute to the education of researchers and reviewers of research.


  • Clarivate Analytics: Emerging Sources Citation Index (ESCI)
  • Directory of Open Access Journals (DOAJ)
  • ERIC (Education Resources Information Center)

Manuscript Submission Guidelines: Research Ethics

This Journal is a member of the Committee on Publication Ethics .

Please read the guidelines below, then visit the Journal’s submission site http://mc.manuscriptcentral.com/rea to upload your manuscript. Please note that manuscripts not conforming to these guidelines may be returned.

Only manuscripts of sufficient quality that meet the aims and scope of  Research Ethics  will be reviewed.

All articles published in Research Ethics are published fully open access under a Creative Commons licence and available worldwide, with readers having barrier-free access to the full-text articles immediately upon publication. From 1st January 2024, an article processing charge (APC) is levied on all article types that are published in the journal.

For authors who are currently eligible for an Open Access Agreement at your institution with Sage , your article will be published at either no direct cost to you or at a deeply discounted rate, depending on the terms of the agreement.

Authors based at institutions in developing countries may also be eligible for an APC waiver, please see our  website page on Gold Open Access Article Processing Charge Waivers  for further information. Please refer to  our page regarding our partnerships around the world  if you would like to know more about the Research4Life initiative.

For authors not eligible for a Sage Open Access Agreement, the APC is USD $500.

  • Open Access
  • Article processing charge (APC)
  • What do we publish? 3.1 Aims & Scope 3.2 Article types 3.3 Writing your paper
  • Editorial policies 4.1 Peer review policy 4.2 Authorship 4.3 Acknowledgements 4.4 Declaration of conflicting interests 4.5 Research ethics and patient consent 4.6 Clinical trials
  • Publishing policies 5.1 Publication ethics 5.2 Contributor's publishing agreement
  • Preparing your manuscript 6.1 Main File 6.2 Title Page 6.3 Formatting 6.4 Language 6.5 Artwork, figures and other graphics 6.6 Supplementary material 6.7 Reference style 6.8 English language editing services
  • Submitting your manuscript 7.1 ORCID 7.2 Title, keywords and abstracts 7.3  Information required for completing your submission 7.4  Permissions
  • On acceptance and publication 8.1 Sage Production 8.2 Online publication 8.3  Promoting your article
  • Further information

1. Open Access

Research Ethics is an open access, peer-reviewed journal. Each article accepted by peer review is made freely available online immediately upon publication, is published under a Creative Commons license and will be hosted online in perpetuity. 

For general information on open access at Sage please visit the Open Access page or view our Open Access FAQs.

Back to top

2. Article processing charge (APC)

For authors not eligible for a Sage Open Access Agreement, the article processing charge (APC) is USD $500.

3. What do we publish?

3.1 Aims & Scope

Before submitting your manuscript to Research Ethics , please ensure you have read the  Aims & Scope .

3.2 Article Types

Research Ethics  publishes original articles and review articles as well as informative case studies on the ethical issues associated with the design and conduct of research, the regulation of research, the procedures and process of ethical review, and issues related to scientific integrity. The journal encourages the submission of the following types of manuscripts:

  • Original articles: these manuscripts present original empirical content and/or an original theoretical perspective. Submissions that present original empirical content should not exceed 12,000 words (including references). Submissions that present an original theoretical perspective should not exceed 6,000 words (including references). Longer manuscripts may occasionally be considered at the discretion of the Editors.
  • Topic Pieces: these articles are intended to form a snapshot of, or perspective on, contemporary thinking on a topic or issue in research ethics or research integrity. Submissions of topic pieces should be no longer than 2,000 words (including references). 
  • Case studies: these articles provide examples of real-world ethical challenges in research or research ethics review, as well as real-world case studies in research integrity or research misconduct. Case studies should include ethics analysis of the identified challenges and, where possible, recommendations for dealing with them. Submissions of case studies should be no longer than 3,000 words (including references).
  • Review articles: these articles address key issues in research ethics or integrity with a focus on under-researched topics. Rather than introducing new material, review articles offer critical analysis of specific issues in particular areas with significant referencing to existing published literature. Review articles should be between 3,000 and 6,000 words (including references).

3.3 Writing your paper

The Sage Author Gateway has some general advice on how to get published , plus links to further resources. Sage Author Services also offers authors a variety of ways to improve and enhance their article, including English language editing, plagiarism detection, and video abstract and infographic preparation.

3.3.1 Make your article discoverable

When writing up your paper, think about how you can make it discoverable. The title, keywords and abstract are key to ensuring readers find your article through search engines such as Google. For information and guidance on how best to title your article, write your abstract and select your keywords, have a look at this page on the Gateway: How to Help Readers Find Your Article Online .

4. Editorial policies

4.1 Peer review policy

Manuscripts submitted for publication in Research Ethics are subject to anonymised peer review.

Submissions are initially reviewed by the editors, who determine whether the submission meets the basic guidelines (e.g. addresses a topic relevant to research ethics or research integrity, is written in English, and is sufficiently understandable). If so, the editors will consult among themselves to determine if the submission, based on its quality, should go on to peer review. Submissions passing this first level of review will be assigned to at least two peer reviewers with the appropriate expertise. The reviewers assess submissions based on scholarly quality, relevance, timeliness, novelty, importance, engagement with the relevant literature, and similar factors. Upon receipt of the peer review responses, the editors will decide whether the manuscript should be rejected, or accepted with or without required revisions.

Research Ethics operates a fully anonymous peer review process in which the reviewer’s name is withheld from the author and the author’s name from the reviewer. Reviewers may, at their own discretion, opt to reveal their name to the author in their review, but our standard practice is for both identities to remain concealed. The desired turnaround time for manuscripts is a maximum of eight weeks from submission to initial decision. If the manuscript is accepted for publication, the editors reserve the right to make minor adjustments (e.g. grammar, tone) and, if necessary, to shorten the manuscript without changing the meaning.

4.2 Authorship

All parties who have made a substantive contribution to the manuscript should be listed as authors. Principal authorship, authorship order, and other publication credits should be based on the relative scientific or professional contributions of the individuals involved, regardless of their status. A student is usually listed as principal author on any multiple-authored publication that substantially derives from the student’s dissertation or thesis. See also the COPE discussion document on authorship, available here .

Please note that AI chatbots and Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Use of any AI chatbot or LLM in helping prepare a manuscript for submission should be clearly indicated in the manuscript, including which model was used and for what purpose. Please use the methods or acknowledgements section, as appropriate. For more information see the  Sage policy on Use of ChatGPT and Generative AI .

4.3 Acknowledgements

Contributors or advisors who do not meet the criteria for authorship can be listed in an Acknowledgements section. Examples of those who might be acknowledged include a person who provided purely technical help, general support or feedback on an early draft. Please ensure that persons who are acknowledged have given permission for mention in the article and upload their confirmation (as supplementary materials) at submission.

4.3.1 Third party submissions

Where an individual who is not listed as an author submits a manuscript on behalf of the author(s), a statement must be included in the Acknowledgements section of the manuscript and in the accompanying cover letter. The statements must:

  • Disclose this type of editorial assistance – including the individual’s name, company and level of input
  • Identify any entities that paid for this assistance
  • Confirm that the listed authors have authorized the submission of their manuscript via third party and approved any statements or declarations, e.g. conflicting interests, funding, etc.

Where appropriate, Sage reserves the right to deny consideration to manuscripts submitted by a third party rather than by the authors themselves.

4.3.2 Writing assistance

Individuals who provided writing assistance, e.g. from a specialist communications company, do not qualify as authors and so should only be included in the Acknowledgements section. Authors must disclose any writing assistance – including the individual’s name, company and level of input – and identify the entity that paid for this assistance.

It is not necessary to disclose use of language polishing services.

4.4 Declaration of conflicting interests

It is the policy of Research Ethics to require a declaration of conflicting interests from all authors enabling a statement to be carried within the paginated pages of all published articles.

The declaration of conflicting interests must be provided at the submission stage in two ways:

  • Authors should provide a full statement on the title page and/or in the cover letter (neither of which is sent to reviewers) disclosing the existence of any financial or non-financial interest and declaring any potential conflicts of interest to the editor.
  • Authors should provide a minimal statement (either "The authors declare the existence of a financial/non-financial competing interest" OR "The authors declare no competing interests") within the anonymised manuscript. The minimal statement will be shared with peer-reviewers and must not include any information which may enable the disclosure of author identities.

In addition to any declarations in the submission system, all authors are required to include a ‘Declaration of Conflicting Interests’ statement at the end of their published article using one of the following standard sentences:

  • The authors declare the following competing interests.
  • The authors declare no competing interests.

For guidance on conflict of interest statements, please see Sage’s Publishing Policies and the ICMJE recommendations here .

4.5 Research ethics and patient approvals

It is the policy of Research Ethics to require a declaration about research ethics approval enabling a statement to be carried within the paginated pages of all published articles. To ensure anonymous peer-review, please include these details on the title page of your submission. This should include the name of the ethics committee that approved the study and, where possible, the date of approval and reference number of the application.

Should research ethics approval not be relevant/required for your study/article, please include a statement to this effect at the submission stage, along with any accompanying evidence if possible (e.g. relevant link to policy position of institution or legal framework clarifying ethics approval is not required or expected for the kind of study/article being submitted). The following standard sentence can be used:

  • The authors declare that research ethics approval was not required for this study.

4.5.1 Medical research

Medical research involving human participants or data must be conducted according to the World Medical Association Declaration of Helsinki .

Submitted manuscripts should conform to the ICMJE Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals , and all articles reporting on studies involving human participants must state that the relevant ethics committee provided ethics approval (or waived its requirement). Please ensure that you have provided the full name and institution of the ethics committee, in addition to the approval number and date of approval. Please include these details on the title page of your submission. For research articles, authors are also required to state in the methods section whether participants provided informed consent and whether the consent was written or verbal.

Information on informed consent to report on individual cases or case series should be included in the manuscript text. A statement is required regarding whether written informed consent for patient information and images to be published was provided by the participant(s) or a legally authorized representative.

Please also refer to the ICMJE Recommendations for the Protection of Research Participants.

4.5.2 Non-medical research involving humans and human data

All manuscripts reporting studies with humans or human data, including studies that involve primary collection of personal data such as surveys or interviews, must state that the relevant ethics committee provided (or waived) approval.

If ethics approval was obtained, please provide the name(s) of the ethics committee(s)/IRB(s) plus the approval number(s)/ID(s). If the study received exemption from ethics approval, please provide the name(s) of the ethics committee(s)/IRB(s) or other authorized body and the reason for exemption. If ethics approval was not sought for the present study, please specify why it was not required and cite the relevant guidelines or legislation where applicable, for the benefit of an international readership. Please include these details on the title page of your submission.

For empirical research articles, authors are also required to state in the methods section whether participants provided informed consent and whether the consent was written or verbal.

If you are unsure whether ethics approval is required for your study, please refer to this Editorial .

4.5.3 Animal studies

All manuscripts reporting on studies with animals must provide details of the relevant research ethics approval (or waiver). Please include these details on the title page of your submission.

4.6 Clinical trials

Many Sage journals conform to the  ICMJE requirement  that clinical trials are registered in a WHO-approved public trials registry at or before the time of first patient enrolment as a condition of consideration for publication. The trial registry name and URL, and registration number must be included at the end of the abstract.

Further to the above, other Sage journals may consider retrospectively registered trials if the justification for late registration is acceptable, consistent with the  AllTrials campaign . The trial registry name and URL, and registration number must be included at the end of the abstract.

5. Publishing Policies

5.1 Publication ethics

Sage is committed to upholding the integrity of the academic record. We encourage authors to refer to the Committee on Publication Ethics’ International Standards for Authors and view the Publication Ethics page on the Sage Author Gateway .

5.1.1 Plagiarism

Research Ethics and Sage take issues of copyright infringement, plagiarism or other breaches of best practice in publication very seriously. We seek to protect the rights of our authors and we always investigate claims of plagiarism or misuse of published articles. Equally, we seek to protect the reputation of the journal against malpractice. Submitted articles may be checked with duplication-checking software. Where an article, for example, is found to have plagiarised other work or included third-party copyright material without permission or with insufficient acknowledgement, or where the authorship of the article is contested, we reserve the right to take action including, but not limited to: publishing an erratum or corrigendum (correction); retracting the article; taking up the matter with the head of department or dean of the author's institution and/or relevant academic bodies or societies; or taking appropriate legal action.

5.1.2 Prior publication

If material has been previously published it is not generally acceptable for publication in a Sage journal. However, there are certain circumstances where previously published material can be considered for publication. Please refer to the guidance on the Sage Author Gateway or if in doubt, contact the Editor at the address given below.

5.2 Contributor's publishing agreement

Before publication Sage requires the author as the rights holder to sign a Journal Contributor’s Publishing Agreement. Research Ethics publishes manuscripts under Creative Commons licenses . The standard license for the journal is Creative Commons by Attribution Non-Commercial (CC BY-NC), which allows others to re-use the work without permission as long as the work is properly referenced and the use is non-commercial. For more information, you are advised to visit Sage's OA licenses page .

Alternative license arrangements, for example to meet particular funder mandates, are available at the author’s request.

6. Preparing your manuscript for submission

You will be asked to upload your anonymised manuscript separately from a title page. Please take note of the requirements for preparation for each of these documents.

6.1 Main file

Your main file is your anonymised manuscript. Please ensure that you do not include any identifiable information in your main manuscript. It will consist of the body of your work plus references. You do not need to include the title or abstract as these will be added during the submission process.

6.2 Title page

Separate from your anonymised manuscript (main file), please also include a separate title page with the following information:

  • Name of all the authors with institutional affiliations and contact details
  • Ethics approval information
  • Conflict of interest information
  • Funding information
  • Acknowledgements

6.3 Formatting

The preferred format for your manuscript is Word. Files should be submitted in .doc or .docx format. Word templates are available on the Manuscript Submission Guidelines page of our Author Gateway. Submissions should preferably be typed in a sans serif font (e.g. Calibri, Helvetica) and double-spaced. Keep formatting simple, and avoid unnecessary advanced word-processing features such as justification, linked objects, or custom symbols.

6.4 Language

All manuscripts must be written in English.

6.4.1 Terminology

Please note that at Research Ethics we actively discourage use of the term ‘research subjects’ when referring to studies that involve human participants. The word ‘subjects’ should only be used when referring to the processing of data (as in ‘data subjects’). As an alternative, please use: humans, persons, participants, research participants or human participants.

6.5 Artwork, figures and other graphics

For guidance on the preparation of illustrations, pictures and graphs in electronic format, please visit Sage’s Manuscript Submission Guidelines .

Figures supplied in colour will appear in colour online.

6.6 Supplementary material

This journal is able to host additional materials online (e.g. datasets, podcasts, videos, images etc) alongside the full-text of the article. For more information please refer to our guidelines on submitting supplementary files .

6.7 Reference style

Research Ethics adheres to the Sage Harvard reference style. View the Sage Harvard guidelines to ensure your manuscript conforms to this reference style.

If you use EndNote to manage references, you can download the Sage Harvard EndNote output file .

6.8 English language editing services

Authors seeking assistance with English language editing, translation, or figure and manuscript formatting to fit the journal’s specifications should consider using Sage Language Services. Visit Sage Language Services on our Journal Author Gateway for further information.

7. Submitting your manuscript

Research Ethics is hosted on Sage Track, a web based online submission and peer review system powered by ScholarOne™ Manuscripts. Visit http://mc.manuscriptcentral.com/rea to login and submit your article online.

IMPORTANT: Please check whether you already have an account in the system before trying to create a new one. If you have reviewed or authored for the journal in the past year it is likely that you will have had an account created.  For further guidance on submitting your manuscript online please visit ScholarOne Online Help .

As part of our commitment to ensuring an ethical, transparent and fair peer review process, Sage is a supporting member of ORCID, the Open Researcher and Contributor ID . ORCID provides a unique and persistent digital identifier that distinguishes each researcher from every other, even those who share the same name, and, through integration in key research workflows such as manuscript and grant submission, supports automated linkages between researchers and their professional activities, ensuring that their work is recognized.

The collection of ORCID iDs from corresponding authors is now part of the submission process of this journal. If you already have an ORCID iD you will be asked to associate that to your submission during the online submission process. We also strongly encourage all co-authors to link their ORCID iD to their accounts in our online peer review platforms. It takes seconds to do: click the link when prompted, sign into your ORCID account and our systems are automatically updated. Your ORCID iD will become part of your accepted publication’s metadata, making your work attributable to you and only you. Your ORCID iD is published with your article so that fellow researchers reading your work can link to your ORCID profile, and from there link to your other publications.

If you do not already have an ORCID iD please follow this link to create one or visit our ORCID homepage to learn more.

7.2 Title, keywords and abstracts

Please supply a title, an abstract and keywords to accompany your manuscript. The title, keywords and abstract are key to ensuring readers find your article through online search engines such as Google. For guidance on how best to title your article, write your abstract and select your keywords, please visit the Sage Journal Author Gateway guidelines on How to Help Readers Find Your Article Online.

All manuscripts should include up to 6 keywords (in alphabetical order) and an abstract of up to 250 words, which is a condensation of the manuscript that contains a statement of purpose, a description of the content, argument, or analysis, and a concise summary of conclusions.

7.3 Co-authors

You will be asked to provide contact details and affiliations for all co-authors via the submission system and identify who is to be the corresponding author. These details must match what appears on your manuscript.

7.3.1 Ethics approval documentation during manuscript submission

As noted above, for all studies requiring ethics approval, evidence of research ethics approval must be uploaded with your manuscript files during the submission stage. This must show (as a minimum) the name of the ethics committee, the name of the study, the name of the applicant, the ethics approval number/ID, and the date of approval. If an ethics waiver was obtained, please upload evidence of the waiver to include the reason for the waiver.

In your anonymised main manuscript file, please also include a statement (under a Methods section or a separate section on Ethics Approval) on whether your study received ethics approval or a waiver, but ensure that any information that could identify the specific institution or committee that provided the approval or waiver is also anonymised, e.g. “This study received ethics approval from [anonymised] on 10 October 2023.”

7.4 Permissions

Please also ensure that you have obtained any necessary permission from copyright holders for reproducing any illustrations, tables, figures or lengthy quotations previously published elsewhere. For further information including guidance on fair dealing for criticism and review, please see the Copyright and Permissions page on the Sage Author Gateway .

8. On acceptance and publication

8.1 Sage Production

If your article is accepted, your Sage Production Editor will keep you informed as to your article’s progress throughout the production process. Proofs will be sent by PDF to the corresponding author and should be returned promptly. Authors are reminded to check their proofs carefully to confirm that all author information, including names, affiliations, sequence and contact details are correct, and that Funding and Conflict of Interest statements, if any, are accurate. Please note that if there are any changes to the author list at this stage all authors will be required to complete and sign a form authorising the change.

8.2 Online publication

One of the many benefits of publishing your research in an open access journal is the speed to publication. Your article will be published online in a fully citable form with a DOI number as soon as it has completed the production process. At this time it will be completely free to view and download for all.

8.3 Promoting your article

Publication is not the end of the process! You can help disseminate your paper and ensure it is as widely read and cited as possible. The Sage Author Gateway has numerous resources to help you promote your work. Visit the Promote Your Article  page on the Gateway for tips and advice. 

9. Further information

Any correspondence, queries or additional requests for information on the manuscript submission process should be sent to the Research Ethics editorial office as follows:

The Editors, Research Ethics, [email protected]

  • Research article
  • Open access
  • Published: 30 April 2021

A scoping review of the literature featuring research ethics and research integrity cases

  • Anna Catharina Vieira Armond   ORCID: orcid.org/0000-0002-7121-5354 1 ,
  • Bert Gordijn 2 ,
  • Jonathan Lewis 2 ,
  • Mohammad Hosseini 2 ,
  • János Kristóf Bodnár 1 ,
  • Soren Holm 3 , 4 &
  • Péter Kakuk 5  

BMC Medical Ethics volume 22, Article number: 50 (2021)

14k Accesses

25 Citations

28 Altmetric

Metrics details

Background

The areas of Research Ethics (RE) and Research Integrity (RI) are rapidly evolving. Cases of research misconduct, other transgressions related to RE and RI, and forms of ethically questionable behaviors have been frequently published. The objective of this scoping review was to collect RE and RI cases, analyze their main characteristics, and discuss how these cases are represented in the scientific literature.

Methods

The search included cases involving a violation of, or misbehavior, poor judgment, or detrimental research practice in relation to a normative framework. A search was conducted in PubMed, Web of Science, SCOPUS, JSTOR, Ovid, and Science Direct in March 2018, without language or date restriction. Data relating to the articles and the cases were extracted from case descriptions.

Results

A total of 14,719 records were identified, and 388 items were included in the qualitative synthesis. The papers contained 500 case descriptions. After applying the eligibility criteria, 238 cases were included in the analysis. In the case analysis, fabrication and falsification were the most frequently tagged violations (44.9%). The non-adherence to pertinent laws and regulations, such as lack of informed consent and REC approval, was the second most frequently tagged violation (15.7%), followed by patient safety issues (11.1%) and plagiarism (6.9%). 80.8% of cases were from the Medical and Health Sciences, 11.5% from the Natural Sciences, 4.3% from Social Sciences, 2.1% from Engineering and Technology, and 1.3% from Humanities. Paper retraction was the most prevalent sanction (45.4%), followed by exclusion from funding applications (35.5%).

Conclusions

Case descriptions found in academic journals are dominated by discussions regarding prominent cases and are mainly published in the news section of journals. Our results show that there is an overrepresentation of biomedical research cases over other scientific fields compared to its proportion in scientific publications. The cases mostly involve fabrication, falsification, and patient safety issues. This finding could have a significant impact on the academic representation of misbehaviors. The predominance of fabrication and falsification cases might diverge the attention of the academic community from relevant but less visible violations, and from recently emerging forms of misbehaviors.


There has been an increase in academic interest in research ethics (RE) and research integrity (RI) over the past decade. This is due, among other reasons, to the changing research environment with new and complex technologies, increased pressure to publish, greater competition in grant applications, increased university-industry collaborative programs, and growth in international collaborations [ 1 ]. In addition, part of the academic interest in RE and RI is due to highly publicized cases of misconduct [ 2 ].

There is a growing body of published RE and RI cases, which may contribute to public attitudes regarding both science and scientists [ 3 ]. Different approaches have been used in order to analyze RE and RI cases. Studies focusing on ORI files (Office of Research Integrity) [ 2 ], retracted papers [ 4 ], quantitative surveys [ 5 ], data audits [ 6 ], and media coverage [ 3 ] have been conducted to understand the context, causes, and consequences of these cases.

Analyses of RE and RI cases often influence policies on responsible conduct of research [ 1 ]. Moreover, details about cases facilitate a broader understanding of issues related to RE and RI and can drive interventions to address them. Currently, there are no comprehensive studies that have collected and evaluated the RE and RI cases available in the academic literature. This review has been developed by members of the EnTIRE consortium to generate information on the cases that will be made available on the Embassy of Good Science platform ( www.embassy.science ). Two separate analyses have been conducted. The first analysis uses identified research articles to explore how the literature presents cases of RE and RI, in relation to the year of publication, country, article genre, and violation involved. The second analysis uses the cases extracted from the literature in order to characterize the cases and analyze them concerning the violations involved, sanctions, and field of science.

This scoping review was performed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement and PRISMA Extension for Scoping Reviews (PRISMA-ScR). The full protocol was pre-registered and it is available at https://ec.europa.eu/research/participants/documents/downloadPublic?documentIds=080166e5bde92120&appId=PPGMS .

Eligibility

Articles presenting non-fictional case(s) involving a violation of, or misbehavior, poor judgment, or detrimental research practice in relation to, a normative framework were included. Cases unrelated to scientific activities, research institutions, or academic or industrial research and publication were excluded. Articles that did not contain a substantial description of the case were also excluded.

A normative framework consists of explicit rules, formulated in laws, regulations, codes, and guidelines, as well as implicit rules, which structure local research practices and influence the application of explicitly formulated rules. Therefore, if a case involves a violation of, or misbehavior, poor judgment, or detrimental research practice in relation to a normative framework, then it does so on the basis of explicit and/or implicit rules governing RE and RI practice.

Search strategy

A search was conducted in PubMed, Web of Science, SCOPUS, JSTOR, Ovid, and Science Direct in March 2018, without any language or date restrictions. Two parallel searches were performed with two sets of medical subject heading (MeSH) terms, one for RE and another for RI. The parallel searches generated two sets of data thereby enabling us to analyze and further investigate the overlaps in, differences in, and evolution of, the representation of RE and RI cases in the academic literature. The terms used in the first search were: (("research ethics") AND (violation OR unethical OR misconduct)). The terms used in the parallel search were: (("research integrity") AND (violation OR unethical OR misconduct)). The search strategy’s validity was tested in a pilot search, in which different keyword combinations and search strings were used, and the abstracts of the first hundred hits in each database were read (Additional file 1 ).
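For illustration, the query pattern used in the two parallel searches can be composed programmatically; a minimal sketch in Python (the helper name `build_query` and the constant are illustrative, not part of the study protocol — the resulting strings are exactly those reported above):

```python
# Pattern of the two parallel search strings used in the March 2018 search.
MISCONDUCT_TERMS = ["violation", "unethical", "misconduct"]

def build_query(topic: str) -> str:
    """Compose a boolean query in the (("topic") AND (term OR ...)) pattern."""
    return f'(("{topic}") AND ({" OR ".join(MISCONDUCT_TERMS)}))'

re_query = build_query("research ethics")    # first search (RE)
ri_query = build_query("research integrity") # parallel search (RI)
```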

After searching the databases with these two search strings, the titles and abstracts of extracted items were read by three contributors independently (ACVA, PK, and KB). Articles that could potentially meet the inclusion criteria were identified. After independent reading, the three contributors compared their results to determine which studies were to be included in the next stage. In case of a disagreement, items were reassessed in order to reach a consensus. Subsequently, qualified items were read in full.

Data extraction

Data extraction processes were divided by three assessors (ACVA, PK and KB). Each list of extracted data generated by one assessor was cross-checked by the other two. In case of any inconsistencies, the case was reassessed to reach a consensus. The following categories were employed to analyze the data of each extracted item (where available): (I) author(s); (II) title; (III) year of publication; (IV) country (according to the first author's affiliation); (V) article genre; (VI) year of the case; (VII) country in which the case took place; (VIII) institution(s) and person(s) involved; (IX) field of science (FOS-OECD classification)[ 7 ]; (X) types of violation (see below); (XI) case description; and (XII) consequences for persons or institutions involved in the case.

Two sets of data were created after the data extraction process. One set was used for the analysis of articles and their representation in the literature, and the other set was created for the analysis of cases. In the set for the analysis of articles, all eligible items, including duplicate cases (cases found in more than one paper, e.g. Hwang case, Baltimore case) were included. The aim was to understand the historical aspects of violations reported in the literature as well as the paper genre in which cases are described and discussed. For this set, the variables of the year of publication (III); country (IV); article genre (V); and types of violation (X) were analyzed.

For the analysis of cases, all duplicated cases and cases that did not contain enough information about particularities to differentiate them from others (e.g. names of the people or institutions involved, country, date) were excluded. In this set, prominent cases (i.e. those found in more than one paper) were listed only once, generating a set containing solely unique cases. These additional exclusion criteria were applied to avoid multiple representations of cases. For the analysis of cases, the variables: (VI) year of the case; (VII) country in which the case took place; (VIII) institution(s) and person(s) involved; (IX) field of science (FOS-OECD classification); (X) types of violation; (XI) case details; and (XII) consequences for persons or institutions involved in the case were considered.

Article genre classification

We used ten categories to capture the differences in genre. We included a case description in a “news” genre if a case was published in the news section of a scientific journal or newspaper. Although we have not developed a search strategy for newspaper articles, some of them (e.g. New York Times) are indexed in scientific databases such as Pubmed. The same method was used to allocate case descriptions to “editorial”, “commentary”, “misconduct notice”, “retraction notice”, “review”, “letter” or “book review”. We applied the “case analysis” genre if a case description included a normative analysis of the case. The “educational” genre was used when a case description was incorporated to illustrate RE and RI guidelines or institutional policies.

Categorization of violations

For the extraction process, we used the articles’ own terminology when describing violations/ethical issues involved in the event (e.g. plagiarism, falsification, ghost authorship, conflict of interest, etc.) to tag each article. In case the terminology was incompatible with the case description, other categories were added to the original terminology for the same case. Subsequently, the resulting list of terms was standardized using the list of major and minor misbehaviors developed by Bouter and colleagues [ 8 ]. This list consists of 60 items classified into four categories: Study design, data collection, reporting, and collaboration issues. (Additional file 2 ).

Systematic search

A total of 11,641 records were identified through the RE search and 3078 in the RI search. The results of the parallel searches were combined and the duplicates removed. The remaining 10,556 records were screened, and at this stage, 9750 items were excluded because they did not fulfill the inclusion criteria. 806 items were selected for full-text reading. Subsequently, 388 articles were included in the qualitative synthesis (Fig.  1 ).
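As a quick arithmetic check, the screening counts reported above are internally consistent; a sketch using only the numbers from the text (variable names are illustrative):

```python
# Records identified by each parallel search (as reported above).
re_records = 11_641
ri_records = 3_078
assert re_records + ri_records == 14_719  # total records identified

# After combining the searches and removing duplicates, 10,556 records remained.
screened = 10_556
excluded_on_screening = 9_750
full_text_items = screened - excluded_on_screening
assert full_text_items == 806  # items selected for full-text reading
```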

Figure 1. Flow diagram

Of the 388 articles, 157 were identified only via the RE search, 87 only via the RI search, and 144 via both search strategies. The eligible articles contained 500 case descriptions, which were used for the analysis of the articles. 256 of these case descriptions discussed the same 50 cases. The Hwang case was the most frequently described, discussed in 27 articles, and the top 10 most described cases were found in 132 articles (Table 1).

For the analysis of cases, 206 (41.2% of the case descriptions) duplicates were excluded, and 56 (11.2%) cases were excluded for not providing enough information to distinguish them from other cases, resulting in 238 eligible cases.
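The exclusion percentages and the final case count reported above can be verified directly; a short sketch (counts taken from the text, names illustrative):

```python
case_descriptions = 500
duplicates = 206          # duplicated cases excluded
too_little_detail = 56    # cases lacking enough distinguishing information

eligible_cases = case_descriptions - duplicates - too_little_detail
assert eligible_cases == 238
assert round(duplicates / case_descriptions * 100, 1) == 41.2
assert round(too_little_detail / case_descriptions * 100, 1) == 11.2
```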

Analysis of the articles

The categories used to classify the violations include those pertaining to scientific misconduct (falsification, fabrication, plagiarism), detrimental research practices (authorship issues, duplication, peer review, errors in experimental design, and mentoring), and "other misconduct" (according to the definitions of the National Academies of Sciences, Engineering, and Medicine [ 1 ]). Each case could involve more than one type of violation, and the majority of cases presented more than one violation or ethical issue, with a mean of 1.56 violations per case description. Figure  2 presents the frequency of each violation tagged to the articles. Falsification and fabrication were the most frequently tagged violations, accounting for 29.1% and 30.0% of all taggings (n = 780), respectively, and appearing in 46.8% and 45.4% of the case descriptions (n = 500). Problems with informed consent represented 9.1% of the taggings and appeared in 14% of the case descriptions, followed by patient safety (6.7% and 10.4%) and plagiarism (5.4% and 8.4%). Detrimental research practices, such as authorship issues, duplication, peer review, errors in experimental design, mentoring, and self-citation, were mentioned cumulatively in 7.0% of the case descriptions.
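The two denominators used in these figures (total taggings versus number of case descriptions) can be sketched as follows; the input here is a tiny made-up example, not the study data:

```python
from collections import Counter

def tag_stats(tagged_descriptions):
    """For each tag, return (% of all taggings, % of descriptions containing it)."""
    n_desc = len(tagged_descriptions)
    counts = Counter(tag for tags in tagged_descriptions for tag in tags)
    n_taggings = sum(counts.values())
    return {
        tag: (round(100 * c / n_taggings, 1),
              round(100 * sum(tag in tags for tags in tagged_descriptions) / n_desc, 1))
        for tag, c in counts.items()
    }

# Illustrative input: three case descriptions, four taggings in total.
demo = [{"falsification", "fabrication"}, {"fabrication"}, {"plagiarism"}]
stats = tag_stats(demo)
# "fabrication" is 2 of 4 taggings (50.0%) and appears in 2 of 3 descriptions (66.7%)
```

Because a description can carry several tags, the two percentages answer different questions, which is why the text reports both.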

figure 2

Tagged violations from the article analysis

Analysis of the cases

Figure  3 presents the frequency and percentage of each violation found in the cases. Each case could include more than one item from the list. The 238 cases were tagged 305 times, with a mean of 1.28 items per case. Fabrication and falsification were the most frequently tagged violations (44.9% of taggings), involved in 57.7% of the cases (n = 238). Non-adherence to pertinent laws and regulations, such as lack of informed consent or REC approval, was the second most frequently tagged violation (15.7%), involved in 20.2% of the cases. Patient safety issues were the third most frequently tagged (11.1%), involved in 14.3% of the cases, followed by plagiarism (6.9% and 8.8%). The list of major and minor misbehaviors [ 8 ] classifies the items into study design, data collection, reporting, and collaboration issues: 56.0% of the tagged violations involved issues in reporting, 16.4% in data collection, 15.1% in collaboration, and 12.5% in study design. Items from the original list that do not appear here were not involved in any of the collected cases.

figure 3

Major and minor misbehavior items from the analysis of cases

Article genre

The articles were mostly classified as "news" (33.0%), followed by "case analysis" (20.9%), "editorial" (12.1%), "commentary" (10.8%), "misconduct notice" (10.3%), "retraction notice" (6.4%), "letter" (3.6%), "educational paper" (1.3%), "review" (1%), and "book review" (0.3%) (Fig.  4 ). The articles classified as "news" and "case analysis" predominantly covered prominent cases. "News" items often tracked the investigation findings step by step as a case progressed, which might explain the genre's high prevalence, while the case analyses mainly offered normative assessments of prominent cases. The misconduct and retraction notices included the largest number of unique cases, although a relatively large portion of these records could not be included because of insufficient case details. The articles classified as "editorial", "commentary" and "letter" also included unique cases.

figure 4

Article genre of included articles

Article analysis

The dates of the eligible articles range from 1983 to 2018 with notable peaks between 1990 and 1996, most probably associated with the Gallo [ 9 ] and Imanishi-Kari cases [ 10 ], and around 2005 with the Hwang [ 11 ], Wakefield [ 12 ], and CNEP trial cases [ 13 ] (Fig.  5 ). The trend line shows an increase in the number of articles over the years.

figure 5

Frequency of articles according to the year of publication

Case analysis

The dates of included cases range from 1798 to 2016. Two cases occurred before 1910, one in 1798 and the other in 1845. Figure  6 shows the number of cases per year from 1910 onwards. The curve begins to rise in the early 1980s, reaching its highest frequency in 2004, with 13 cases.

figure 6

Frequency of cases per year

Geographical distribution

The first analysis concerned the authors' affiliation and the corresponding author's address. Where an article listed more than one country in the affiliations, only the first author's location was considered. Eighty-one articles were excluded because the authors' affiliations were not available, and 307 articles were included in the analysis. The articles originated from 26 different countries (Additional file 3 ). Most of the articles emanated from the USA and the UK (61.9% and 14.3% of articles, respectively), followed by Canada (4.9%), Australia (3.3%), China (1.6%), Japan (1.6%), Korea (1.3%), and New Zealand (1.3%). Some of the most discussed cases occurred in the USA: the Imanishi-Kari, Gallo, and Schön cases [ 9 , 10 ]. Intensely discussed cases are also associated with Canada (the Fisher/Poisson and Olivieri cases), the UK (the Wakefield and CNEP trial cases), South Korea (the Hwang case), and Japan (the RIKEN case) [ 12 , 14 ]. In terms of percentages, North America and Europe stand out in the number of articles (Fig.  7 ).

figure 7

Percentage of articles and cases by continent

The case analysis involved the location where the case took place, taking into account the institutions involved in the case. For cases involving more than one country, all the countries were considered. Three cases were excluded from the analysis due to insufficient information. In the case analysis, 40 countries were involved in 235 different cases (Additional file 4 ). Our findings show that most of the reported cases occurred in the USA and the United Kingdom (59.6% and 9.8% of cases, respectively). In addition, a number of cases occurred in Canada (6.0%), Japan (5.5%), China (2.1%), and Germany (2.1%). In terms of percentages, North America and Europe stand out in the number of cases (Fig.  7 ). To enable comparison, we have additionally collected the number of published documents according to country distribution, available on SCImago Journal & Country Rank [ 16 ]. The numbers correspond to the documents published from 1996 to 2019. The USA occupies the first place in the number of documents, with 21.9%, followed by China (11.1%), UK (6.3%), Germany (5.5%), and Japan (4.9%).
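The comparison drawn here between case share and publication share can be made explicit as an over/under-representation ratio. The percentages are those quoted in the text; the ratio itself is our own illustrative derivation, not a statistic reported by the study:

```python
# Share of reported cases vs. share of world publications (SCImago, 1996-2019),
# using the percentages quoted in the text for three of the countries.
case_share = {"USA": 59.6, "UK": 9.8, "China": 2.1}
pub_share = {"USA": 21.9, "UK": 6.3, "China": 11.1}

# Ratio > 1 means a country contributes more published cases than its
# publication volume alone would suggest; < 1 means fewer.
ratio = {c: round(case_share[c] / pub_share[c], 2) for c in case_share}
print(ratio)  # {'USA': 2.72, 'UK': 1.56, 'China': 0.19}
```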

Field of science

The cases were classified according to the field of science. Four cases (1.7%) could not be classified due to insufficient information. Where information was available, 80.8% of cases were from the Medical and Health Sciences, 11.5% from the Natural Sciences, 4.3% from Social Sciences, 2.1% from Engineering and Technology, and 1.3% from Humanities (Fig.  8 ). Additionally, we have retrieved the number of published documents according to scientific field distribution, available on SCImago [ 16 ]. Of the total number of scientific publications, 41.5% are related to natural sciences, 22% to engineering, 25.1% to health and medical sciences, 7.8% to social sciences, 1.9% to agricultural sciences, and 1.7% to the humanities.

figure 8

Field of science from the analysis of cases

Sanctions

This variable aimed to collect information on possible consequences and sanctions imposed by funding agencies, scientific journals and/or institutions. 97 cases could not be classified due to insufficient information, leaving 141 cases in the analysis. Each case could include more than one outcome. Most cases (45.4%) involved paper retraction, followed by exclusion from funding applications (35.5%) (Table 2 ).

Discussion

RE and RI cases have been increasingly discussed publicly, affecting public attitudes towards scientists and raising awareness about ethical issues, violations, and their wider consequences [ 5 ]. Different approaches have been applied to quantify and address research misbehaviors [ 5 , 17 , 18 , 19 ]. However, most cases are investigated confidentially and the findings remain undisclosed even after the investigation [ 19 , 20 ]. This study therefore aimed to collect the RE and RI cases available in the scientific literature, understand how the cases are discussed, and identify the potential of case descriptions to raise awareness of RE and RI.

We collected and analyzed 500 detailed case descriptions from 388 articles, and our results show that they mostly relate to extensively discussed, notorious cases. Approximately half of all included cases were mentioned in at least two different articles, and the ten most commonly mentioned cases were discussed in 132 articles.

The prominence of certain cases in the literature, based on the number of duplicated cases we found (e.g. the Hwang case), can be explained by the type of article in which cases are discussed and the type of violation involved. In the article genre analysis, 33% of the cases were described in the news sections of scientific publications. Our findings show that almost all article genres discuss cases that are new and in vogue. Once a case enters the public domain, it is intensely discussed in the media and by scientists, and some prominent cases have been discussed for more than 20 years (Table 1 ). Misconduct and retraction notices were exceptions in the article genre analysis, as they presented mostly unique cases. The misconduct notices were mainly found in the NIH repository, which is indexed in the searched databases. Some federal funding agencies, like the NIH, routinely publicize investigation findings associated with the research they fund; the records derived from the NIH repository also explain the large proportion of articles from the US (61.9%). However, in some cases only a few details are provided. For cases that have not received federal funding and have not been reported to federal authorities, the investigation is conducted by local institutions, and the reporting of findings then depends on each institution's policy and willingness to disclose information [ 21 ]. The other exception involves retraction notices. Despite the existence of ethical guidelines [ 22 ], there is no uniform, common approach to how a journal should report a retraction. The Retraction Watch website suggests two lists of information that should be included in a retraction notice to satisfy the minimum and optimum requirements [ 22 , 23 ].
As well as disclosing the reason for the retraction and information regarding the retraction process, optimal notices should include: (I) the date when the journal was first alerted to potential problems; (II) details regarding institutional investigations and associated outcomes; (III) the effects on other papers published by the same authors; (IV) statements about more recent replications, only if and when these have been validated by a third party; (V) details regarding the journal's sanctions; and (VI) details regarding any lawsuits that have been filed regarding the case. The lack of transparency and information in retraction notices has also been noted in studies that collected and evaluated retractions [ 24 ]. According to Resnik and Dinse [ 25 ], retraction notices related to cases of misconduct tend to avoid naming the specific violation involved. Their study found that only 32.8% of notices identified the actual problem, such as fabrication, falsification, or plagiarism, while 58.8% reported the case as replication failure, loss of data, or error. Potential explanations for euphemisms and vague claims in retraction notices authored by editors include the possibility of legal action from the authors, honest or self-reported errors, and a lack of resources to conduct thorough investigations. The lack of transparency can also be explained by conflicts of interest, since the notices are often written by the authors of the retracted article.

The analysis of violations/ethical issues shows the dominance of fabrication and falsification cases and explains the high prevalence of prominent cases. Non-adherence to laws and regulations (REC approval, informed consent, and data protection) was the second most prevalent issue, followed by patient safety, plagiarism, and conflicts of interest. For each of the five most tagged violations, prevalence was higher in the case analysis than in the article analysis, with the exception of fabrication and falsification, which represented 45% of the tagged violations in the analysis of cases but 59.1% in the article analysis. This disproportion shows a predilection for publishing discussions of fabrication and falsification over other serious violations. Complex cases involving these violations make good headlines, following a familiar pattern of writing about cases that catch the public's and the media's attention [ 26 ]. The way cases of RE and RI violations are explored in the literature gives the sense that only a few scientists are "bad apples" and that they are usually discovered, investigated, and sanctioned accordingly, implying that the integrity of science in general remains relatively untouched by these violations. However, studies on the determinants of misconduct show that scientific misconduct is a systemic problem involving not only individual but also structural and institutional factors, and that a combined effort is necessary to change this scenario [ 27 , 28 ].

Analysis of cases

A notable increase in RE and RI cases occurred in the 1990s, with a gradual increase until approximately 2006. This result is in agreement with studies that evaluated paper retractions [ 24 , 29 ]; although our study did not focus only on retractions, the trend is similar. The increase in cases should not be attributed solely to the growth in the number of publications, since studies evaluating retractions show that the percentage of retractions due to fraud has increased almost tenfold since 1975, relative to the total number of articles. Our results also show a gradual reduction in the number of cases from 2011 and a sharper drop in 2015. However, this reduction should be interpreted cautiously, because many investigations take years to complete and to have their findings disclosed. The ORI has shown that from 2001 to 2010 its investigations took an average of 20.48 months, with a maximum investigation time of more than 9 years [ 24 ].

The countries from which most cases were reported were the USA (59.6%), the UK (9.8%), Canada (6.0%), Japan (5.5%), and China (2.1%). When analyzed by continent, the highest percentage of cases took place in North America, followed by Europe, Asia, Oceania, Latin America, and Africa. The predominance of cases from the USA is predictable, since the country publishes more scientific articles than any other, accounting for 21.8% of all documents according to SCImago [ 16 ]. However, the same interpretation does not apply to China, which occupies the second position in the ranking, with 11.2%. These differences in geographical distribution were also found in a study that collected published research on research integrity [ 30 ]: Aubert Bonn and Pinxten (2019) found that studies from the United States accounted for more than half of their sample, while China, although one of the leaders in scientific publications, represented only 0.7%. Our findings can also be explained by a search strategy that included only English keywords. Since the majority of RE and RI cases are investigated and have their findings disclosed locally, the use of English keywords and terms is a limitation. Moreover, our findings do not allow us to draw inferences about the incidence or prevalence of misconduct around the world. Instead, they show where there is a culture of publicly disclosing information and openly discussing RE and RI cases in English-language documents.

Scientific field analysis

The results show that 80.8% of reported cases occurred in the medical and health sciences whilst only 1.3% occurred in the humanities. This disciplinary difference has also been observed in studies on research integrity climates. A study conducted by Haven and colleagues [ 28 ] associated seven subscales of research climate with the disciplinary field. The subscales included: (1) Responsible Conduct of Research (RCR) resources, (2) regulatory quality, (3) integrity norms, (4) integrity socialization, (5) supervisor/supervisee relations, (6) (lack of) integrity inhibitors, and (7) expectations. The results, based on the seven subscale scores, show that researchers from the humanities and social sciences have the lowest perception of the RI climate. By contrast, the natural sciences expressed the highest perception of the RI climate, followed by the biomedical sciences. There are also significant differences in the depth and extent of the regulatory environments of different disciplines (e.g. the existence of laws, codes of conduct, policies, relevant ethics committees, or authorities). These findings corroborate our results, as those areas of science most familiar with RI tend to explore the subject further and, consequently, are more likely to publish case details. Although the volume of published research in each area also influences the number of cases, the predominance of medical and health sciences cases is not aligned with trends in the volume of published research. According to SCImago Journal & Country Rank [ 16 ], the natural sciences occupy first place in the number of publications (41.5%), followed by the medical and health sciences (25.1%), engineering (22%), social sciences (7.8%), and the humanities (1.7%). Moreover, biomedical journals are overrepresented among the top scientific journals by IF ranking, and these journals usually have clear policies for research misconduct.
High-impact journals are more likely to have higher visibility and scrutiny, and consequently, more likely to have been the subject of misconduct investigations. Additionally, the most well-known general medical journals, including NEJM, The Lancet, and the BMJ, employ journalists to write their news sections. Since these journals have the resources to produce extensive news sections, it is, therefore, more likely that medical cases will be discussed.

Violations analysis

In the analysis of violations, the cases were categorized into major and minor misbehaviors. Most cases involved data fabrication and falsification, followed by cases involving non-adherence to laws and regulations, patient safety, plagiarism, and conflicts of interest. When classified by category, 12.5% of the tagged violations involved issues in study design, 16.4% in data collection, 56.0% in reporting, and 15.1% in collaboration. Approximately 80% of the tagged violations involved serious research misbehaviors, based on the ranking of research misbehaviors proposed by Bouter and colleagues. However, as demonstrated in a meta-analysis by Fanelli (2009), most self-declared cases involve questionable research practices: 33.7% of scientists admitted to questionable research practices, and 72% reported them when asked about the behavior of colleagues. This contrasts with admission rates of 1.97% (own behavior) and 14.12% (colleagues' behavior) for cases involving fabrication, falsification, and plagiarism. Note, however, that Fanelli's meta-analysis does not cover research misbehavior in its wider sense but focuses on behaviors that bias research results (i.e. fabrication and falsification, intentional non-publication of results, biased methodology, misleading reporting). In our study, the majority of cases involved FFP (66.4%). Overrepresentation of some types of violations, and underrepresentation of others, might lead to misguided efforts, as cases that receive intense publicity eventually influence policies relating to scientific misconduct and RI [ 20 ].

Sanctions analysis

The five most prevalent outcomes were paper retraction, followed by exclusion from funding applications, exclusion from service or position, dismissal and suspension, and paper correction. This result is similar to that found by Redman and Merz [ 31 ], who collected data from misconduct cases provided by the ORI. Moreover, their results show that fabrication and falsification cases are 8.8 times more likely than others to receive funding exclusions. Such cases also received, on average, 0.6 more sanctions per case. Punishments for misconduct remain under discussion, ranging from the criminalization of more serious forms of misconduct [ 32 ] to social punishments, such as those recently introduced by China [ 33 ]. The most common sanction identified by our analysis—paper retraction—is consistent with the most prevalent types of violation, that is, falsification and fabrication.

Publicizing scientific misconduct

The lack of publicly available summaries of misconduct investigations makes it difficult to share experiences and evaluate the effectiveness of policies and training programs. Publicizing scientific misconduct can have serious consequences and creates a stigma around those involved in a case. For instance, publicized allegations can damage the reputation of the accused even when they are later exonerated [ 21 ]. Thus, for published cases, it is the responsibility of the authors and editors to determine whether the name(s) of those involved should be disclosed. On the one hand, disclosing the names of those involved may encourage others in the community to foster good standards. On the other hand, someone who has made a mistake should arguably have a chance to defend their reputation. Regardless of whether a person's name is withheld or disclosed, case reports have an important educational function and can help guide RE- and RI-related policies [ 34 ]. A recent paper by Gunsalus [ 35 ] proposes a three-part approach to strengthen transparency in misconduct investigations: the first part consists of a checklist [ 36 ]; the second suggests that an external peer reviewer be involved in investigative reporting; and the third calls for the publication of the peer reviewer's findings.

Limitations

One possible limitation of our study is the search strategy. Although we conducted pilot searches and sensitivity tests to reach the most feasible and precise strategy, we cannot exclude the possibility of having missed important cases. The use of English keywords is a further limitation: since most investigations are performed locally and published in local repositories, our search could only access cases from English-speaking countries or cases discussed in academic publications written in English. Additionally, the published cases are not representative of all instances of misconduct, since most are never discovered, and of those discovered, not all are fully investigated or have their findings published. The lack of information in the extracted case descriptions also affects the interpretation of our results. In our review, only 25 retraction notices contained sufficient information to be included in conformance with the inclusion criteria. Although our search strategy did not focus specifically on retraction and misconduct notices, we believe that it would have identified such notices had they contained sufficiently detailed information.

Conclusions

Case descriptions found in academic journals are dominated by discussions of prominent cases and are mainly published in the news sections of journals. Our results show an overrepresentation of biomedical research cases relative to other scientific fields when compared with the volume of publications produced by each field. Moreover, published cases mostly involve fabrication, falsification, and patient safety issues. This could have a significant impact on the academic representation of ethical issues in RE and RI: the predominance of fabrication and falsification cases might divert the attention of the academic community from relevant but less visible violations and ethical issues, and from recently emerging forms of misbehavior.

Availability of data and materials

This review has been developed by members of the EnTIRE project in order to generate information on the cases that will be made available on the Embassy of Good Science platform ( www.embassy.science ). The dataset supporting the conclusions of this article is available in the Open Science Framework (OSF) repository in https://osf.io/3xatj/?view_only=313a0477ab554b7489ee52d3046398b9 .

National Academies of Sciences, Engineering, and Medicine. Fostering Integrity in Research. Washington, DC: National Academies Press; 2017.

Davis MS, Riske-Morris M, Diaz SR. Causal factors implicated in research misconduct: evidence from ORI case files. Sci Eng Ethics. 2007;13(4):395–414. https://doi.org/10.1007/s11948-007-9045-2 .
Ampollini I, Bucchi M. When public discourse mirrors academic debate: research integrity in the media. Sci Eng Ethics. 2020;26(1):451–74. https://doi.org/10.1007/s11948-019-00103-5 .

Hesselmann F, Graf V, Schmidt M, Reinhart M. The visibility of scientific misconduct: a review of the literature on retracted journal articles. Curr Sociol La Sociologie contemporaine. 2017;65(6):814–45. https://doi.org/10.1177/0011392116663807 .

Martinson BC, Anderson MS, de Vries R. Scientists behaving badly. Nature. 2005;435(7043):737–8. https://doi.org/10.1038/435737a .

Loikith L, Bauchwitz R. The essential need for research misconduct allegation audits. Sci Eng Ethics. 2016;22(4):1027–49. https://doi.org/10.1007/s11948-016-9798-6 .

OECD. Revised field of science and technology (FoS) classification in the Frascati manual. Working Party of National Experts on Science and Technology Indicators 2007. p. 1–12.

Bouter LM, Tijdink J, Axelsen N, Martinson BC, ter Riet G. Ranking major and minor research misbehaviors: results from a survey among participants of four World Conferences on Research Integrity. Res Integrity Peer Rev. 2016;1(1):17. https://doi.org/10.1186/s41073-016-0024-5 .

Greenberg DS. Resounding echoes of Gallo case. Lancet. 1995;345(8950):639.

Dresser R. Giving scientists their due. The Imanishi-Kari decision. Hastings Center Rep. 1997;27(3):26–8.

Hong ST. We should not forget lessons learned from the Woo Suk Hwang’s case of research misconduct and bioethics law violation. J Korean Med Sci. 2016;31(11):1671–2. https://doi.org/10.3346/jkms.2016.31.11.1671 .

Opel DJ, Diekema DS, Marcuse EK. Assuring research integrity in the wake of Wakefield. BMJ (Clinical research ed). 2011;342(7790):179. https://doi.org/10.1136/bmj.d2 .

Wells F. The Stoke CNEP Saga: did it need to take so long? J R Soc Med. 2010;103(9):352–6. https://doi.org/10.1258/jrsm.2010.10k010 .

Normile D. RIKEN panel finds misconduct in controversial paper. Science. 2014;344(6179):23. https://doi.org/10.1126/science.344.6179.23 .

Wager E. The Committee on Publication Ethics (COPE): Objectives and achievements 1997–2012. La Presse Médicale. 2012;41(9):861–6. https://doi.org/10.1016/j.lpm.2012.02.049 .

SCImago nd. SJR — SCImago Journal & Country Rank [Portal]. http://www.scimagojr.com . Accessed 03 Feb 2021.

Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE. 2009;4(5):e5738. https://doi.org/10.1371/journal.pone.0005738 .

Steneck NH. Fostering integrity in research: definitions, current knowledge, and future directions. Sci Eng Ethics. 2006;12(1):53–74. https://doi.org/10.1007/PL00022268 .

DuBois JM, Anderson EE, Chibnall J, Carroll K, Gibb T, Ogbuka C, et al. Understanding research misconduct: a comparative analysis of 120 cases of professional wrongdoing. Account Res. 2013;20(5–6):320–38. https://doi.org/10.1080/08989621.2013.822248 .

National Academy of Sciences, National Academy of Engineering, Institute of Medicine. Responsible Science: Ensuring the Integrity of the Research Process, Volume I. Washington, DC: National Academies Press; 1992.

Bauchner H, Fontanarosa PB, Flanagin A, Thornton J. Scientific misconduct and medical journals. JAMA. 2018;320(19):1985–7. https://doi.org/10.1001/jama.2018.14350 .

COPE Council. COPE Guidelines: Retraction Guidelines. 2019. https://doi.org/10.24318/cope.2019.1.4 .

Retraction Watch. What should an ideal retraction notice look like? 2015, May 21. https://retractionwatch.com/2015/05/21/what-should-an-ideal-retraction-notice-look-like/ .

Fang FC, Steen RG, Casadevall A. Misconduct accounts for the majority of retracted scientific publications. Proc Natl Acad Sci USA. 2012;109(42):17028–33. https://doi.org/10.1073/pnas.1212247109 .

Resnik DB, Dinse GE. Scientific retractions and corrections related to misconduct findings. J Med Ethics. 2013;39(1):46–50. https://doi.org/10.1136/medethics-2012-100766 .

de Vries R, Anderson MS, Martinson BC. Normal misbehavior: scientists talk about the ethics of research. J Empir Res Hum Res Ethics JERHRE. 2006;1(1):43–50. https://doi.org/10.1525/jer.2006.1.1.43 .

Sovacool BK. Exploring scientific misconduct: isolated individuals, impure institutions, or an inevitable idiom of modern science? J Bioethical Inquiry. 2008;5(4):271. https://doi.org/10.1007/s11673-008-9113-6 .

Haven TL, Tijdink JK, Martinson BC, Bouter LM. Perceptions of research integrity climate differ between academic ranks and disciplinary fields: results from a survey among academic researchers in Amsterdam. PLoS ONE. 2019;14(1):e0210599. https://doi.org/10.1371/journal.pone.0210599 .

Trikalinos NA, Evangelou E, Ioannidis JPA. Falsified papers in high-impact journals were slow to retract and indistinguishable from nonfraudulent papers. J Clin Epidemiol. 2008;61(5):464–70. https://doi.org/10.1016/j.jclinepi.2007.11.019 .

Aubert Bonn N, Pinxten W. A decade of empirical research on research integrity: What have we (not) looked at? J Empir Res Hum Res Ethics. 2019;14(4):338–52. https://doi.org/10.1177/1556264619858534 .

Redman BK, Merz JF. Scientific misconduct: do the punishments fit the crime? Science. 2008;321(5890):775. https://doi.org/10.1126/science.1158052 .

Bülow W, Helgesson G. Criminalization of scientific misconduct. Med Health Care Philos. 2019;22(2):245–52. https://doi.org/10.1007/s11019-018-9865-7 .

Cyranoski D. China introduces “social” punishments for scientific misconduct. Nature. 2018;564(7736):312. https://doi.org/10.1038/d41586-018-07740-z .

Bird SJ. Publicizing scientific misconduct and its consequences. Sci Eng Ethics. 2004;10(3):435–6. https://doi.org/10.1007/s11948-004-0001-0 .

Gunsalus CK. Make reports of research misconduct public. Nature. 2019;570(7759):7. https://doi.org/10.1038/d41586-019-01728-z .

Gunsalus CK, Marcus AR, Oransky I. Institutional research misconduct reports need more credibility. JAMA. 2018;319(13):1315–6. https://doi.org/10.1001/jama.2018.0358 .


Acknowledgements

The authors wish to thank the EnTIRE research group. The EnTIRE project (Mapping Normative Frameworks for Ethics and Integrity of Research) aims to create an online platform that makes RE+RI information easily accessible to the research community. The EnTIRE Consortium comprises VU Medical Center Amsterdam, gesinn.it GmbH & Co. KG, KU Leuven, the University of Split School of Medicine, Dublin City University, Central European University, the University of Oslo, the University of Manchester, and the European Network of Research Ethics Committees.

The EnTIRE project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 741782. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author information

Authors and Affiliations

Department of Behavioural Sciences, Faculty of Medicine, University of Debrecen, Móricz Zsigmond krt. 22. III. Apartman Diákszálló, Debrecen, 4032, Hungary

Anna Catharina Vieira Armond & János Kristóf Bodnár

Institute of Ethics, School of Theology, Philosophy and Music, Dublin City University, Dublin, Ireland

Bert Gordijn, Jonathan Lewis & Mohammad Hosseini

Centre for Social Ethics and Policy, School of Law, University of Manchester, Manchester, UK

Center for Medical Ethics, HELSAM, Faculty of Medicine, University of Oslo, Oslo, Norway

Center for Ethics and Law in Biomedicine, Central European University, Budapest, Hungary

Péter Kakuk


Contributions

All authors (ACVA, BG, JL, MH, JKB, SH and PK) developed the idea for the article. ACVA, PK, JKB performed the literature search and data analysis, ACVA and PK produced the draft, and all authors critically revised it. All authors have read and approved the manuscript.

Corresponding author

Correspondence to Anna Catharina Vieira Armond.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1. Pilot search and search strategy.

Additional file 2. List of major and minor misbehavior items (developed by Bouter LM, Tijdink J, Axelsen N, Martinson BC, ter Riet G. Ranking major and minor research misbehaviors: results from a survey among participants of four World Conferences on Research Integrity. Research Integrity and Peer Review. 2016;1(1):17. https://doi.org/10.1186/s41073-016-0024-5).

Additional file 3. Table containing the number and percentage of countries included in the analysis of articles.

Additional file 4. Table containing the number and percentage of countries included in the analysis of the cases.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Armond, A.C.V., Gordijn, B., Lewis, J. et al. A scoping review of the literature featuring research ethics and research integrity cases. BMC Med Ethics 22, 50 (2021). https://doi.org/10.1186/s12910-021-00620-8


Received: 06 October 2020

Accepted: 21 April 2021

Published: 30 April 2021

DOI: https://doi.org/10.1186/s12910-021-00620-8


  • Research ethics
  • Research integrity
  • Scientific misconduct

BMC Medical Ethics

ISSN: 1472-6939


The changing face of research support

Stephen Conway, Executive Director of Research Services, outlines the activities across the University to support Oxford's researchers and their research in an increasingly complex and digital world.


Key points:

  • Work underway to improve our support for researchers and their research in the face of increased scale, complexity and funder requirements   
  • Information about our ambitions for digital research management systems and tools, and a new research ethics system and dashboard project for REF 2029 
  • Supporting Oxford’s researchers and their research through Digital Transformation and Professional Services Together 

Oxford is widely known for its world-leading research and the profound impact that the work of our outstanding researchers has across the globe. It’s a defining characteristic of the University, and core to our academic mission – and one that we’re proud to support in Research Services.  

Protecting that research excellence and helping our researchers succeed requires continued creativity and innovation in our ways of working – particularly as the scale and complexity of our research portfolio increases, we move through a digital transformation at Oxford, and we navigate the changing external environment.  

Findings from the Research Finance Management (Post Award) Service Review 

A critically important piece of recent work has been a service review of how we support the financial management of research funding. Oxford's annual research income is now over £200m more than that of any other UK university, and this in-depth review considered ways to respond to the growing scale, complexity and diversity of our external funding.   

The review identified nine priority actions. These are focused on improving planning and coordination across the University; addressing workforce pinch points; seizing new technology opportunities; using digital tools and automation to streamline processes; and improving guidance and support resources for researchers and research support staff.  

The Service Review was part of Professional Services Together, working with colleagues across Oxford and with the expert support of the Focus team. It was informed by researchers and professional services staff, as well as by benchmarking and best practice at other institutions. The full report is available on the Focus webpages, and we look forward to working with you on the implementation.  

Improving digital experiences 

One of the most important pieces of work underway is focused on improving how we manage research projects across their full lifecycle. To support this, we are transforming our systems and tools for preparing and costing funding applications, managing and reporting on awards, handling research contract negotiations, and supporting research governance and assurance activities.  

We want end-to-end systems that are researcher-centred, better promote transparency of tasks and workflow between researchers, departments and central teams, improve process efficiency and allow us to reuse research management data wherever possible.  

We are making good progress in understanding our requirements for upgraded systems, reviewing potential suppliers and learning about the experience of other research-intensive UK institutions, and we plan to seek input on a recommended solution in autumn 2024.  

A new research ethics application system 

As a first step in moving to this joined up digital environment, we are implementing Worktribe Ethics as a new online system to manage the preparation, submission, review and approval of research ethics applications. This will replace the current offline and email-based process. 

The University’s Department of Education and its ethics committee will shortly begin testing the new system. Subject to a successful pilot, we will test and roll the system out to other departments, divisions and ethics committees across the University over the rest of 2024. 

REF preparation: creation of dashboards 

While the timing of the next Research Excellence Framework (REF) may have slipped to 2029, we want to make sure that, as an institution, our research leaders are able to access reliable and comprehensive information on research quality and research activities to support their modelling and strategic decision-making, including in preparation for REF 2029.  

The Research Strategy and Policy Unit in Research Services is leading on the development of dashboards for use at departmental, divisional and central University levels. This includes topics such as research income data; researcher career development; and volumes, value, trend and distribution by department, research and funders.  

This is a complex project, needing us to combine data, analysed across multiple dimensions, from many different systems – Symplectic, ORA, Oracle Financials, the PeopleXD HR system, CoSy training records and more.  

The first stages involve assessing whether the data is available, evaluating its quality, and determining how well it can work across different systems. We will then take action to address any gaps. We are hugely grateful to colleagues in these areas for their continued support of this work. 

Find out more and get involved 

This is a small snapshot of activity that is underway to support the research community, which I believe will significantly improve the quality, efficiency and usability of the services we offer. 

If you’d like to get involved in individual projects, details will be shared through the Research Services (RS) newsletter and the Research and Innovation Support Network (RISN) in due course.  

For more detail of the wider research component of the Digital Transformation programme, please refer to the website. 

Thank you for your ongoing support.



Ethical Issues in Research: Perceptions of Researchers, Research Ethics Board Members and Research Ethics Experts

Marie-Josée Drolet

1 Department of Occupational Therapy (OT), Université du Québec à Trois-Rivières (UQTR), Trois-Rivières (Québec), Canada

Eugénie Rose-Derouin

2 Bachelor OT program, Université du Québec à Trois-Rivières (UQTR), Trois-Rivières (Québec), Canada

Julie-Claude Leblanc

Mélanie Ruest & Bryn Williams-Jones

3 Department of Social and Preventive Medicine, School of Public Health, Université de Montréal, Montréal (Québec), Canada

In academic research, a diversity of ethical issues arises, conditioned by the different roles that members hold within these institutions. Previous studies on this topic have mainly addressed the perceptions of researchers. To our knowledge, however, no studies have explored transversal ethical issues from a wider spectrum that includes other members of academic institutions, such as research ethics board (REB) members and research ethics experts. The present study used a descriptive phenomenological approach to document the ethical issues experienced by a heterogeneous group of Canadian researchers, REB members, and research ethics experts. Data collection involved socio-demographic questionnaires and individual semi-structured interviews. Following the triangulation of these different perspectives (researchers, REB members and ethics experts), the emerging ethical issues were synthesized into ten units of meaning: (1) research integrity, (2) conflicts of interest, (3) respect for research participants, (4) lack of supervision and power imbalances, (5) individualism and performance, (6) inadequate ethical guidance, (7) social injustices, (8) distributive injustices, (9) epistemic injustices, and (10) ethical distress. This study highlights several problematic elements that can support the identification of future solutions for resolving transversal ethical issues in research that affect the heterogeneous members of the academic community.

Introduction

Research includes a set of activities in which researchers use various structured methods to contribute to the development of knowledge, whether this knowledge is theoretical, fundamental, or applied (Drolet & Ruest, accepted ). University research is carried out in a highly competitive environment that is characterized by ever-increasing demands (i.e., on time, productivity), insufficient access to research funds, and within a market economy that values productivity and speed often to the detriment of quality or rigour – this research context creates a perfect recipe for breaches in research ethics, like research misbehaviour or misconduct (i.e., conduct that is ethically questionable or unacceptable because it contravenes the accepted norms of responsible conduct of research or compromises the respect of core ethical values that are widely held by the research community) (Drolet & Girard, 2020 ; Sieber, 2004 ). Problematic ethics and integrity issues – e.g., conflicts of interest, falsification of data, non-respect of participants’ rights, and plagiarism, to name but a few – have the potential to both undermine the credibility of research and lead to negative consequences for many stakeholders, including researchers, research assistants and personnel, research participants, academic institutions, and society as a whole (Drolet & Girard, 2020 ). It is thus evident that the academic community should be able to identify these different ethical issues in order to evaluate the nature of the risks that they pose (and for whom), and then work towards their prevention or management (i.e., education, enhanced policies and procedures, risk mitigation strategies).

In this article, we define an “ethical issue” as any situation that may compromise, in whole or in part, the respect of at least one moral value (Swisher et al., 2005 ) that is considered socially legitimate and should thus be respected. In general, ethical issues occur at three key moments or stages of the research process: (1) research design (i.e., conception, project planning), (2) research conduct (i.e., data collection, data analysis) and (3) knowledge translation or communication (e.g., publications of results, conferences, press releases) (Drolet & Ruest, accepted ). According to Sieber ( 2004 ), ethical issues in research can be classified into five categories, related to: (a) communication with participants and the community, (b) acquisition and use of research data, (c) external influence on research, (d) risks and benefits of the research, and (e) selection and use of research theories and methods. Many of these issues are related to breaches of research ethics norms, misbehaviour or research misconduct. Bruhn et al., ( 2002 ) developed a typology of misbehaviour and misconduct in academia that can be used to judge the seriousness of different cases. This typology takes into consideration two axes of reflection: (a) the origin of the situation (i.e., is it the researcher’s own fault or due to the organizational context?), and (b) the scope and severity (i.e., is this the first instance or a recurrent behaviour? What is the nature of the situation? What are the consequences, for whom, for how many people, and for which organizations?).

A previous detailed review of the international literature on ethical issues in research revealed several interesting findings (Beauchemin et al., 2021). Indeed, the current literature is dominated by descriptive ethics, i.e., the sharing by researchers from various disciplines of the ethical issues they have personally experienced. While such anecdotal documentation is relevant, it is insufficient because it does not provide a global view of the situation. Among the reviewed literature, empirical studies were in the minority (Table 1) – only about one fifth of the sample (n = 19) presented empirical research findings on ethical issues in research. The first of these studies was conducted almost 40 years ago (Hunt et al., 1984), with the remainder conducted in the 1990s. Eight studies were conducted in the United States (n = 8), five in Canada (n = 5), three in England (n = 3), two in Sweden (n = 2) and one in Ghana (n = 1).

Summary of Empirical Studies on Ethical Issues in Research by the year of publication

Further, the majority of studies in our sample (n = 12) collected the perceptions of a homogeneous group of participants, usually researchers (n = 14) and sometimes health professionals (n = 6). A minority of studies (n = 7) triangulated the perceptions of diverse research stakeholders (i.e., researchers and research participants, or students). To our knowledge, only one study has examined perceptions of ethical issues in research by research ethics board members (REB; Institutional Review Boards [IRB] in the USA), and none to date have documented the perceptions of research ethics experts. Finally, nine studies (n = 9) adopted a qualitative design, seven studies (n = 7) a quantitative design, and three (n = 3) a mixed-methods design.

More studies using empirical research methods are needed to better identify broader trends, to enrich discussions on the values that should govern responsible conduct of research in the academic community, and to evaluate the means by which these values can be supported in practice (Bahn, 2012 ; Beauchemin et al., 2021 ; Bruhn et al., 2002 ; Henderson et al., 2013 ; Resnik & Elliot, 2016; Sieber 2004 ). To this end, we conducted an empirical qualitative study to document the perceptions and experiences of a heterogeneous group of Canadian researchers, REB members, and research ethics experts, to answer the following broad question: What are the ethical issues in research?

Research Methods

Research design.

A qualitative research approach involving individual semi-structured interviews was used to systematically document ethical issues (De Poy & Gitlin, 2010 ; Hammell et al., 2000 ). Specifically, a descriptive phenomenological approach inspired by the philosophy of Husserl was used (Husserl, 1970 , 1999 ), as it is recommended for documenting the perceptions of ethical issues raised by various practices (Hunt & Carnavale, 2011 ).

Ethical considerations

The principal investigator obtained ethics approval for this project from the Research Ethics Board of the Université du Québec à Trois-Rivières (UQTR). All members of the research team signed a confidentiality agreement, and research participants signed the consent form after reading an information letter explaining the nature of the research project.

Sampling and recruitment

As indicated above, three types of participants were sought: (1) researchers from different academic disciplines conducting research (i.e., theoretical, fundamental or empirical) in Canadian universities; (2) REB members working in Canadian organizations responsible for the ethical review, oversight or regulation of research; and (3) research ethics experts, i.e., academics or ethicists who teach research ethics, conduct research in research ethics, or are scholars who have acquired a specialization in research ethics. To be included in the study, participants had to work in Canada, speak and understand English or French, and be willing to participate in the study. Following Thomas and Pollio's (2002) recommendation to recruit between six and twelve participants (for a homogeneous sample) to ensure data saturation, for our heterogeneous sample, we aimed to recruit approximately twelve participants in order to obtain data saturation. Having used this method several times in related projects in professional ethics, data saturation is usually achieved with 10 to 15 participants (Drolet & Goulet, 2018 ; Drolet & Girard, 2020 ; Drolet et al., 2020 ). From experience, larger samples only serve to increase the degree of data saturation, especially in heterogeneous samples (Drolet et al., 2017 , 2019 ; Drolet & Maclure, 2016 ).

Purposive sampling facilitated the identification of participants relevant to documenting the phenomenon in question (Fortin, 2010 ). To ensure a rich and most complete representation of perceptions, we sought participants with varied and complementary characteristics with regards to the social roles they occupy in research practice (Drolet & Girard, 2020 ). A triangulation of sources was used for the recruitment (Bogdan & Biklen, 2006 ). The websites of Canadian universities and Canadian health institution REBs, as well as those of major Canadian granting agencies (i.e., the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, and the Social Sciences and Humanities Research Council of Canada, Fonds de recherche du Quebec), were searched to identify individuals who might be interested in participating in the study. Further, people known by the research team for their knowledge and sensitivity to ethical issues in research were asked to participate. Research participants were also asked to suggest other individuals who met the study criteria.

Data Collection

Two tools were used for data collection: (a) a socio-demographic questionnaire, and (b) a semi-structured individual interview guide. English and French versions of these two documents were used and made available, depending on participant preferences. In addition, although the interview guide contained the same questions, they were adapted to participants’ specific roles (i.e., researcher, REB member, research ethics expert). When contacted by email by the research assistant, participants were asked to confirm under which role they wished to participate (because some participants might have multiple, overlapping responsibilities) and they were sent the appropriate interview guide.

The interview guides each had two parts: an introduction and a section on ethical issues. The introduction consisted of general questions to put the participant at ease (e.g., “Tell me what a typical day at work is like for you”). The section on ethical issues was designed to capture participants’ perceptions through questions such as: “Tell me three stories you have experienced at work that involve an ethical issue” and “Do you feel that your organization is doing enough to address, manage, and resolve ethical issues in your work?”. Although some interviews were conducted in person, the majority were conducted by videoconference to promote accessibility and because of the COVID-19 pandemic. Interviews were digitally recorded so that they could be transcribed verbatim in full, and lasted between 40 and 120 min, with an average of 90 min. Research assistants conducted the interviews and transcribed the recordings.

Data Analysis

The socio-demographic questionnaires were subjected to simple descriptive statistical analyses (i.e., means and totals), and the semi-structured interviews were subjected to qualitative analysis. The steps proposed by Giorgi ( 1997 ) for a Husserlian phenomenological reduction of the data were used. After collecting, recording, and transcribing the interviews, all transcripts were analyzed by at least two analysts: a research assistant (2nd author of this article) and the principal investigator (1st author) or a postdoctoral fellow (3rd author). Repeated reading of the transcripts allowed the first analyst to write a synopsis, i.e., an initial extraction of units of meaning. The second analyst then read the synopses, which were commented on and improved if necessary. Agreement between analysts allowed the final drafting of the interview synopses, which were then analyzed by three analysts to generate and organize the units of meaning that emerged from the qualitative data.

Participants

Sixteen individuals (n = 16) participated in the study, of whom nine (9) identified as female and seven (7) as male (Table  2 ). Participants ranged in age from 22 to 72 years, with a mean age of 47.5 years. Participants had between one (1) and 26 years of experience in the research setting, with an average of 14.3 years of experience. Participants held a variety of roles, including: REB members (n = 11), researchers (n = 10), research ethics experts (n = 4), and research assistant (n = 1). As mentioned previously, seven (7) participants held more than one role, i.e., REB member, research ethics expert, and researcher. The majority (87.5%) of participants were working in Quebec, with the remaining working in other Canadian provinces. Although all participants considered themselves to be francophone, one quarter (n = 4) identified themselves as belonging to a cultural minority group.

Description of Participants

With respect to their academic background, most participants (n = 9) had a PhD, three (3) had a post-doctorate, two (2) had a master’s degree, and two (2) had a bachelor’s degree. Participants came from a variety of disciplines: nine (9) had a specialty in the humanities or social sciences, four (4) in the health sciences and three (3) in the natural sciences. In terms of their knowledge of ethics, five (5) participants reported having taken one university course entirely dedicated to ethics, four (4) reported having taken several university courses entirely dedicated to ethics, three (3) had a university degree dedicated to ethics, while two (2) only had a few hours or days of training in ethics and two (2) reported having no knowledge of ethics.

Ethical issues

As Fig.  1 illustrates, ten units of meaning emerge from the data analysis, namely: (1) research integrity, (2) conflicts of interest, (3) respect for research participants, (4) lack of supervision and power imbalances, (5) individualism and performance, (6) inadequate ethical guidance, (7) social injustices, (8) distributive injustices, (9) epistemic injustices, and (10) ethical distress. To illustrate the results, excerpts from verbatim interviews are presented in the following sub-sections. Most of the excerpts have been translated into English as the majority of interviews were conducted with French-speaking participants.

Fig. 1 Ethical issues in research according to the participants

Research Integrity

The research environment is highly competitive and performance-based. Several participants, in particular researchers and research ethics experts, felt that this environment can lead both researchers and research teams to engage in unethical behaviour that reflects a lack of research integrity. For example, as some participants indicated, competition for grants and scientific publications is sometimes so intense that researchers falsify research results or plagiarize from colleagues to achieve their goals.

Some people will lie or exaggerate their research findings in order to get funding. Then, you see it afterwards, you realize: “ah well, it didn’t work, but they exaggerated what they found and what they did” (participant 14).

Another problem in research is the identification of authors when there is a publication. Very often, there are authors who don’t even know what the publication is about and that their name is on it. (…) The time that it surprised me the most was just a few months ago when I saw someone I knew who applied for a teaching position. He got it and I was super happy for him. Then I looked at his publications and … there was one that caught my attention much more than the others, because I was in it and I didn’t know what that publication was. I was the second author of a publication that I had never read (participant 14).

I saw a colleague who had plagiarized another colleague. [When the colleague] found out about it, he complained. So, plagiarism is a serious [ethical breach]. I would also say that there is a certain amount of competition in the university faculties, especially for grants (…). There are people who want to win at all costs or get as much as possible. They are not necessarily going to consider their colleagues. They don’t have much of a collegial spirit (participant 10).

These examples of research misbehaviour or misconduct are sometimes due to or associated with situations of conflicts of interest, which may be poorly managed by certain researchers or research teams, as noted by many participants.

Conflict of interest

The actors and institutions involved in research have diverse interests, like all humans and institutions. As noted in Chap. 7 of the Canadian Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans (TCPS2, 2018),

“researchers and research students hold trust relationships, either directly or indirectly, with participants, research sponsors, institutions, their professional bodies and society. These trust relationships can be put at risk by conflicts of interest that may compromise independence, objectivity or ethical duties of loyalty. Although the potential for such conflicts has always existed, pressures on researchers (i.e., to delay or withhold dissemination of research outcomes or to use inappropriate recruitment strategies) heighten concerns that conflicts of interest may affect ethical behaviour” (p. 92).

The sources of these conflicts are varied and can include interpersonal conflicts, financial partnerships, third-party pressures, academic or economic interests, a researcher holding multiple roles within an institution, or any other incentive that may compromise a researcher’s independence, integrity, and neutrality (TCPS2, 2018). While it is not possible to eliminate all conflicts of interest, it is important to manage them properly and to avoid temptations to behave unethically.

Ethical temptations correspond to situations in which people are tempted to prioritize their own interests to the detriment of the ethical goods that should, in their own context, govern their actions (Swisher et al., 2005 ). In the case of researchers, this refers to situations that undermine independence, integrity, neutrality, or even the set of principles that govern research ethics (TCPS2, 2018) or the responsible conduct of research. According to study participants, these types of ethical issues frequently occur in research. Many participants, especially researchers and REB members, reported that conflicts of interest can arise when members of an organization make decisions to obtain large financial rewards or to increase their academic profile, often at the expense of the interests of members of their research team, research participants, or even the populations affected by their research.

A company that puts money into making its drug work wants its drug to work. So, homeopathy is a good example, because there are not really any consequences of homeopathy, there are not very many side effects, because there are no effects at all. So, it’s not dangerous, but it’s not a good treatment either. But some people will want to make it work. And that’s a big issue when you’re sitting at a table and there are eight researchers, and there are two or three who are like that, and then there are four others who are neutral, and I say to myself, this is not science. I think that this is a very big ethical issue (participant 14).

There are also times in some research where there will be more links with pharmaceutical companies. Obviously, there are then large amounts of money that will be very interesting for the health-care institutions because they still receive money for clinical trials. They’re still getting some compensation because it’s time-consuming for the people involved and all that. The pharmaceutical companies have money, so they will compensate, and that is sometimes interesting for the institutions, and since we are a bit caught up in this, in the sense that we have no choice but to accept it. (…) It may not be the best research in the world, there may be a lot of side effects due to the drugs, but it’s good to accept it, we’re going to be part of the clinical trial (participant 3).

It is integrity, what we believe should be done or said. Often by the pressure of the environment, integrity is in tension with the pressures of the environment, so it takes resistance, it takes courage in research. (…) There were all the debates there about the problems of research that was funded and then the companies kept control over what was written. That was really troubling for a lot of researchers (participant 5).

Further, these situations sometimes have negative consequences for research participants as reported by some participants.

Respect for research participants

Many research projects, whether they are psychosocial or biomedical in nature, involve human participants. Relationships between the members of research teams and their research participants raise ethical issues that can be complex. Research projects must always be designed to respect the rights and interests of research participants, and not just those of researchers. However, participants in our study – i.e., REB members, researchers, and research ethics experts – noted that some research teams seem to put their own interests ahead of those of research participants. They also emphasized the importance of ensuring the respect, well-being, and safety of research participants. The ethical issues related to this unit of meaning are: respect for free, informed and ongoing consent of research participants; respect for and the well-being of participants; data protection and confidentiality; over-solicitation of participants; ownership of the data collected on participants; the sometimes high cost of scientific innovations and their accessibility; balance between the social benefits of research and the risks to participants (particularly in terms of safety); balance between collective well-being (development of knowledge) and the individual rights of participants; exploitation of participants; paternalism when working with populations in vulnerable situations; and the social acceptability of certain types of research. The following excerpts present some of these issues.

Where it disturbs me ethically is in the medical field – because it’s more in the medical field that we’re going to see this – when consent forms are presented to patients to solicit them as participants, and then [these forms] have an average of 40 pages. That annoys me. When they say that it has to be easy to understand and all that, adapted to the language, and then the hyper-technical language plus there are 40 pages to read, I don’t understand how you’re going to get informed consent after reading 40 pages. (…) For me, it doesn’t work. I read them to evaluate them and I have a certain level of education and experience in ethics, and there are times when I don’t understand anything (participant 2).

There is a lot of pressure from researchers who want to recruit research participants (…). The idea that when you enter a health care institution, you become a potential research participant, when you say “yes to a research, you check yes to all research”, then everyone can ask you. I think that researchers really have this fantasy of saying to themselves: “as soon as people walk through the door of our institution, they become potential participants with whom we can communicate and get them involved in all projects”. There’s a kind of idea that, yes, it can be done, but it has to be somewhat supervised to avoid over-solicitation (…). Researchers are very interested in facilitating recruitment and making it more fluid, but perhaps to the detriment of confidentiality, privacy, and respect; sometimes that’s what it is, to think about what type of data you’re going to have in your bank of potential participants? Is it just name and phone number or are you getting into more sensitive information? (participant 9).

In addition, one participant reported that their university does not provide the resources required to respect the confidentiality of research participants.

The issue is as follows: researchers, of course, commit to protecting data with passwords and all that, but we realize that in practice, it is more difficult. It is not always as protected as one might think, because professor-researchers will run out of space. Will the universities make rooms available to researchers, places where they can store these things, especially when they have paper documentation, and is there indeed a guarantee of confidentiality? Some researchers have told me: “Listen; there are even filing cabinets in the corridors”. So, that certainly poses a concrete challenge. How do we go about challenging the administrative authorities? Tell them it’s all very well to have an ethics committee, but you have to help us, you also have to make sure that the necessary infrastructures are in place so that what we are proposing is really put into practice (participant 4).

Just as relationships with research participants can raise ethical issues, so too can relationships with students, notably research assistants. On this topic, several participants discussed the lack of supervision or recognition offered to research assistants by researchers, as well as the power imbalances between members of the research team.

Lack of supervision and power imbalances

Many research teams are composed not only of researchers, but also of students who work as research assistants. The relationship between research assistants and other members of research teams can sometimes be problematic and raise ethical issues, particularly because of the inevitable power asymmetries. In the context of this study, several participants – including a research assistant, REB members, and researchers – discussed the lack of supervision or recognition of the work carried out by students, psychological pressure, and the more or less well-founded promises that are sometimes made to students. Participants also mentioned the exploitation of students by certain research teams, which manifests when students are paid inadequately, i.e., at a rate that does not reflect the number of hours actually worked or constitute a fair wage, or when they are not paid at all.

[As a research assistant], it was more of a feeling of distress that I felt then because I didn’t know what to do. (…) I was supposed to get coaching or be supported, but I didn’t get anything in the end. It was like, “fix it by yourself”. (…) All research assistants were supposed to be supervised, but in practice they were not (participant 1).

Very often, we have a master’s or doctoral student that we put on a subject and we consider that the project will be well done, while the student is learning. So, it happens that the student will do a lot of work and then we realize that the work is poorly done, and it is not necessarily the student’s fault. He wasn’t necessarily well supervised. There are directors who have 25 students, and they just don’t supervise them (participant 14).

I think it’s really the power relationship. I thought to myself, how I saw my doctorate, the beginning of my research career, I really wanted to be in that laboratory, but they are the ones who are going to accept me or not, so what do I do to be accepted? I finally accept their conditions [which was to work for free]. If these are the conditions that are required to enter this lab, I want to go there. So, what do I do, well I accepted. It doesn’t make sense, but I tell myself that I’m still privileged, because I don’t have so many financial worries, one more reason to work for free, even though it doesn’t make sense (participant 1).

In research, we have research assistants. (…). The fact of using people… so that’s it, you have to take into account where they are, respect them, but at the same time they have to show that they are there for the research. In English, we say “carry” or take care of people. With research assistants, this is often a problem that I have observed: for grant machines, the person is the last to be found there. Researchers, who will take, use student data, without giving them the recognition for it (participant 5).

The problem at our university is that they reserve funding for Canadian students. The doctoral clientele in my field is mostly foreign students. So, our students are poorly funded. I saw one student end up in the shelter, in a situation of poverty. It ended very badly for him because he lacked financial resources. Once you get into that dynamic, it’s very hard to get out. I was made aware of it because the director at the time had taken him under her wing and wanted to try to find a way to get him out of it. So, most of my students didn’t get funded (participant 16).

There I wrote “manipulation”, but it’s kind of all promises all the time. I, for example, was promised a lot of advancement, like when I got into the lab as a graduate student, it was said that I had an interest in [this particular area of research]. I think there are a lot of graduate students who must have gone through that, but it is like, “Well, your CV has to be really good, if you want to do a lot of things and big things. If you do this, if you do this research contract, the next year you could be the coordinator of this part of the lab and supervise this person, get more contracts, be paid more. Let’s say: you’ll be invited to go to this conference, this big event”. They were always dangling something, but you have to do that first to get there. But now, when you’ve done that, you have to do this business. It’s like a bit of manipulation, I think. That was very hard to know who is telling the truth and who is not (participant 1).

These ethical issues have significant negative consequences for students. Indeed, they sometimes find themselves at the mercy of the researchers for whom they work, struggling, for example, to be recognized and included as authors of an article, or to receive the salary that they are due. For their part, researchers also sometimes find themselves trapped in research structures that can negatively affect their well-being. As many participants reported, researchers work in organizations that set very high productivity standards and in highly competitive contexts, all within a general culture characterized by individualism.

Individualism and performance

Participants, especially researchers, discussed the culture of individualism and performance that characterizes the academic environment. In glorifying excellence, some universities value performance and productivity, often at the expense of psychological well-being and work-life balance (i.e., work overload and burnout). Participants noted that there are ethical silences in their organizations on this issue, and that the culture of individualism and performance is not challenged for fear of retribution or simply to survive, i.e., to perform as expected. Participants felt that this culture can have a significant negative impact on the quality of the research conducted, as research teams try to maximize the quantity of their work (instead of quality) in a highly competitive context, which is then exacerbated by a lack of resources and support, and where everything must be done too quickly.

The work-life balance with the professional ethics related to work in a context where you have too much and you have to do a lot, it is difficult to balance all that and there is a lot of pressure to perform. If you don’t produce enough, that’s it; after that, you can’t get any more funds, so that puts pressure on you to do more and more and more (participant 3).

There is a culture, I don’t know where it comes from, and that is extremely bureaucratic. If you dare to raise something, you’re going to have many, many problems. They’re going to make you understand it. So, I don’t talk. It is better: your life will be easier. I think there are times when you have to talk (…) because there are going to be irreparable consequences. (…) I’m not talking about a climate of terror, because that’s exaggerated, it’s not true, people are not afraid. But people close their office door and say nothing because it’s going to make their work impossible and they’re not going to lose their job, they’re not going to lose money, but researchers need time to be focused, so they close their office door and say nothing (participant 16).

Researchers must produce more and more, yet they feel little support regarding how to do so ethically, or regarding how much exactly they are expected to produce. As this participant reports, the expectation is an unspoken rule: more is always better.

It’s sometimes the lack of a clear line on what the expectations are as a researcher, like, “ah, we don’t have any specific expectations, but produce, produce, produce, produce.” So, in that context, it’s hard to be able to put the line precisely: “have I done enough for my work?” (participant 3).

Inadequate ethical guidance

Just as productivity expectations are unclear, some participants – including researchers, research ethics experts, and REB members – felt that the ethical expectations of some REBs were also unclear. The issue of inadequate ethical guidance in research includes the administrative mechanisms intended to ensure that research projects respect the principles of research ethics. According to those participants, the forms required of both researchers and REB members are increasingly long and numerous, and one participant noted that the standards to be met are sometimes outdated and disconnected from the reality of the field. Multicentre ethics review (by several REBs) was also critiqued by a participant as an inefficient method that encumbers the processes for reviewing research projects. Bureaucratization imposes an ever-increasing number of forms and ethics guidelines that actually hinder researchers’ ethical reflection on the issues at stake, leading the ethics review process to be perceived as purely bureaucratic in nature.

The ethical dimension and the ethical review of projects have become increasingly bureaucratized. (…) When I first started working (…) it was less bureaucratic, less strict then. I would say [there are now] tons of forms to fill out. Of course, we can’t do without it, it’s one of the ways of marking out ethics and ensuring that there are ethical considerations in research, but I wonder if it hasn’t become too bureaucratized, so that it’s become a kind of technical reflex to fill out these forms, and I don’t know if people really do ethical reflection as such anymore (participant 10).

The fundamental structural issue, I would say, is the mismatch between the normative requirements and the real risks posed by the research, i.e., we have many, many requirements to meet; we have very long forms to fill out but the research projects we evaluate often pose few risks (participant 8).

People [in vulnerable situations] were previously unable to participate because of overly strict research ethics rules that were to protect them, but in the end [these rules] did not protect them. There was a perverse effect, because in the end there was very little research done with these people and that’s why we have very few results, very little evidence [to support practices with these populations] so it didn’t improve the quality of services. (…) We all understand that we have to be careful with that, but when the research is not too risky, we say to ourselves that it would be good because for once a researcher who is interested in that population, because it is not a very popular population, it would be interesting to have results, but often we are blocked by the norms, and then we can’t accept [the project] (participant 2).

Moreover, as one participant noted, accessing ethics training can be a challenge.

There is no course on research ethics. […] Then, I find that it’s boring because you go through university and you come to do your research and you know how to do quantitative and qualitative research, but all the research ethics, where do you get this? I don’t really know (participant 13).

Yet, such training could provide relevant tools to resolve, to some extent, the ethical issues that commonly arise in research. That said, and as noted by many participants, many ethical issues in research are related to social injustices over which research actors have little influence.

Social injustices

For many participants, notably researchers, the issues that concern social injustices are those related to power asymmetries, stigma, or issues of equity, diversity, and inclusion, i.e., social injustices related to people’s identities (Blais & Drolet, 2022). Participants reported experiencing or witnessing discrimination from peers, administration, or lab managers. Such oppression is sometimes intersectional and related to a person’s age, cultural background, gender, or social status.

I have my African colleague who was quite successful when he arrived but had a backlash from colleagues in the department. I think it’s unconscious, nobody is overtly racist. But I have a young person right now who is the same, who has the same success, who got exactly the same early career award and I don’t see the same backlash. He’s just as happy with what he’s doing. It’s normal, they’re young and they have a lot of success starting out. So, I think there is discrimination. Is it because he is African? Is it because he is black? I think it’s on a subconscious level (participant 16).

Social injustices were experienced or reported by many participants, and included issues related to difficulties in obtaining grants or disseminating research results in one’s native language (i.e., even when there is official bilingualism), or being considered credible and fundable in research when the researcher is a woman.

If you do international research, there are things you can’t talk about (…). It is really a barrier to research to not be able to (…) address this question [i.e. the question of inequalities between men and women]. Women’s inequality is going to be addressed [but not within the country where the research takes place as if this inequality exists elsewhere but not here]. There are a lot of women working on inequality issues, doing work and it’s funny because I was talking to a young woman who works at Cairo University and she said to me: “Listen, I saw what you had written, you’re right. I’m willing to work on this but guarantee me a position at your university with a ticket to go”. So yes, there are still many barriers [for women in research] (participant 16).

Because of the varied contextual factors involved in their occurrence, these social injustices are also linked to distributive injustices, as discussed by many participants.

Distributive injustices

Although there are several views of distributive justice, a classical definition such as that of Aristotle (2012) describes distributive justice as the distribution of honours, wealth, and other social resources or benefits among the members of a community in proportion to their alleged merit. Justice, then, is about determining an equitable distribution of common goods. Contemporary theories of distributive justice are numerous and varied. Indeed, many authors (e.g., Fraser, 2011; Mills, 2017; Sen, 2011; Young, 2011) have, since Rawls (1971), proposed different visions of how social burdens and benefits should be shared within a community to ensure equal respect, fairness, and distribution. In our study, what emerges from participants’ narratives is a definite concern for this type of justice. Women researchers, francophone researchers, early career researchers, and researchers belonging to racialized groups all discussed inequities in the distribution of research grants and awards, and the extra work they need to do to somehow prove their worth. These inequities are related to how granting agencies determine which projects will be funded.

These situations make me work 2–3 times harder to prove myself and to show people in power that I have a place as a woman in research (participant 12).

Number one: it’s conservative thinking. The older ones control what comes in. So, the younger people have to adapt or they don’t get funded (participant 14).

Whether it is discrimination against stigmatized or marginalized populations or interest in certain hot topics, granting agencies judge research projects according to criteria that those participants sometimes found questionable. Faced with difficulties in obtaining funding for their projects, researchers use several strategies – some of which are unethical – to cope with these situations.

Sometimes there are subjects that everyone goes to, such as nanotechnology (…), artificial intelligence or (…) the therapeutic use of cannabis, which are very fashionable, and this is sometimes to the detriment of other research that is just as relevant, but which is (…), less sexy, less in the spirit of the time. (…) Sometimes this can lead to inequities in the funding of certain research sectors (participant 9).

When we use our funds, we get them given to us, we pretty much say what we think we’re going to do with them, but things change… So, when these things change, sometimes it’s an ethical decision, but by force of circumstances I’m obliged to change the project a little bit (…). Is it ethical to make these changes or should I just let the money go because I couldn’t use it the way I said I would? (participant 3).

Moreover, these distributive injustices are linked not only to social injustices, but also to epistemic injustices. Indeed, the way in which research honours and grants are distributed within the academic community depends on the epistemic authority of researchers, which seems to vary notably according to their working language, age, or gender, but also according to the research design used (inductive versus deductive), their decision to use (or not to use) animals in research, or their engagement in activist research.

Epistemic injustices

The philosopher Fricker (2007) conceptualized the notions of epistemic justice and injustice. Epistemic injustice refers to a form of social inequality that manifests itself in the access, recognition, and production of knowledge, as well as the various forms of ignorance that arise (Godrie & Dos Santos, 2017). Addressing epistemic injustice necessitates acknowledging the iniquitous wrongs suffered by certain groups of socially stigmatized individuals who have been excluded from knowledge, thus limiting their abilities to interpret, understand, or be heard and account for their experiences. In this study, epistemic injustices were experienced or reported by some participants, notably those related to difficulties in obtaining grants or disseminating research results in one’s native language (i.e., even when there is official bilingualism) or being considered credible and fundable in research when a researcher is a woman or an early career researcher.

I have never sent a grant application to the federal government in English. I have always done it in French, even though I know that when you receive the review, you can see that reviewers didn’t understand anything because they are English-speaking. I didn’t want to get in the boat. It’s not my job to translate, because let’s be honest, I’m not as good in English as I am in French. So, I do them in my first language, which is the language I’m most used to. Then, technically at the administrative level, they are supposed to be able to do it, but they are not good in French. (…) Then, it’s a very big Canadian ethical issue, because basically there are technically two official languages, but Canada is not a bilingual country, it’s a country with two languages, either one or the other. (…) So I was not funded (participant 14).

Researchers who use inductive (or qualitative) methods observed that their projects are sometimes less well reviewed or understood, while research that adopts a hypothetical-deductive (or quantitative) or mixed-methods design is better perceived, considered more credible, and therefore more easily funded. Of course, regardless of whether a research project adopts an inductive, deductive, or mixed-methods scientific design, or whether it deals with qualitative or quantitative data, it must respect a set of scientific criteria. A research project should achieve its objectives by using proven methods that, in the case of inductive research, are credible, reliable, and transferable or, in the case of deductive research, generalizable, objective, representative, and valid (Drolet & Ruest, accepted). Participants discussing these issues noted that researchers who adopt a qualitative design, who question the relevance of animal experimentation, or who are not activists have sometimes been unfairly devalued in their epistemic authority.

There is a mini war between quantitative versus qualitative methods, which I think is silly because science is a method. If you apply the method well, it doesn’t matter what the field is, it’s done well and it’s perfect (participant 14).

There is also the issue of the place of animals in our lives, because for me, ethics is human ethics, but also animal ethics. Then, there is a great evolution in society on the role of the animal… with the new law that came out in Quebec on the fact that animals are sensitive beings. Then, with the rise of the vegan movement, [we must ask ourselves]: “Do animals still have a place in research?” That’s a big question and it also means that there are practices that need to evolve, but sometimes there’s a disconnection between what’s expected by research ethics boards versus what’s expected in the field (participant 15).

In research today, we have more and more research that is militant from an ideological point of view. And so, we have researchers, because they defend values that seem important to them, we’ll talk for example about the fight for equality and social justice. They have pressure to defend a form of moral truth and have the impression that everyone thinks like them or should do so, because they are defending a moral truth. This is something that we see more and more, namely the lack of distance between ideology and science (participant 8).

The combination or intersectionality of these inequities is experienced in the highly competitive and individualistic context of research, which seems to be characterized by a lack of ethical support and guidance; it therefore provides the perfect recipe for researchers to experience ethical distress.

Ethical distress

The concept of “ethical distress” refers to situations in which people know what they should do to act ethically, but encounter barriers, generally of an organizational or systemic nature, limiting their power to act according to their moral or ethical values (Drolet & Ruest, 2021; Jameton, 1984; Swisher et al., 2005). People then run the risk of finding themselves in a situation where they do not act as their ethical conscience dictates, which in the long term can lead to exhaustion and distress. The examples reported by participants in this study point to the fact that researchers in particular may be experiencing significant ethical distress. This distress takes place in a context of extreme competition and constant injunctions to perform, where administrative demands are increasingly numerous and complex to complete, while paradoxically researchers lack the time to accomplish all their tasks and responsibilities. Added to these demands are a lack of resources (human, ethical, and financial), a lack of support and recognition, and interpersonal conflicts.

We are in an environment, an elite one, you are part of it, you know what it is: “publish or perish” is the motto. Grants, there is a high level of performance required, to do a lot, to publish, to supervise students, to supervise them well, so yes, it is clear that we are in an environment that is conducive to distress. (…). Overwork, definitely, can lead to distress and eventually to exhaustion. When you know that you should take the time to read the projects before sharing them, but you don’t have the time to do that because you have eight that came in the same day, and then you have others waiting… Then someone rings a bell and says: “ah but there, the protocol is a bit incomplete”. Oh yes, look at that, you’re right. You make up for it, but at the same time it’s a bit because we’re in a hurry, we don’t necessarily have the resources or are able to take the time to do things well from the start, we have to make up for it later. So yes, it can cause distress (participant 9).

My organization wanted me to apply in English, and I said no, and everyone in the administration wanted me to apply in English, and I always said no. Some people said: “Listen, I give you the choice”, then some people said: “Listen, I agree with you, but if you’re not [submitting] in English, you won’t be funded”. Then the fact that I am young too, because very often they will look at the CV, they will not look at the project: “ah, his CV is not impressive, we will not finance him”. This is complete nonsense. The person is capable of doing the project, the project is fabulous: we fund the project. So, that happened, organizational barriers: that happened a lot. I was not eligible for Quebec research funds (…). I had big organizational barriers unfortunately (participant 14).

At the time of my promotion, some colleagues were not happy with the type of research I was conducting. I learned – you learn this over time when you become friends with people after you enter the university – that someone was against me. He had another candidate in mind, and he was angry about the selection. I was under pressure for the first three years until my contract was renewed. I almost quit at one point, but another colleague told me, “No, stay, nothing will happen”. Nothing happened, but these issues kept me awake at night (participant 16).

This difficult context for many researchers affects not only the conduct of their own research, but also their participation in research. We faced this problem in our study, despite the use of multiple recruitment methods, including more than 200 emails – of which 191 were individual solicitations – sent to potential participants by the two research assistants. REB members and organizations overseeing or supporting research (n = 17) were also approached to see if some of their employees would consider participating. While it was relatively easy to recruit REB members and research ethics experts, our team received a high number of non-responses to emails (n = 175) and some refusals (n = 5), especially by researchers. The reasons given by those who replied were threefold: (a) fear of being easily identified should they take part in the research, (b) being overloaded and lacking time, and (c) the intrusive aspect of certain questions (i.e., “Have you experienced a burnout episode? If so, have you been followed up medically or psychologically?”). In light of these difficulties and concerns, some questions in the socio-demographic questionnaire were removed or modified. Talking about burnout in research remains a taboo for many researchers, which paradoxically can only contribute to the unresolved problem of unhealthy research environments.

Returning to the research question and objective

The question that prompted this research was: What are the ethical issues in research? The purpose of the study was to describe these issues from the perspective of researchers (from different disciplines), research ethics board (REB) members, and research ethics experts. The previous section provided a detailed portrait of the ethical issues experienced by different research stakeholders: these issues are numerous, diverse and were recounted by a range of stakeholders.

The results of the study are generally consistent with the literature. For example, as in our study, the literature discusses the lack of research integrity on the part of some researchers (Al-Hidabi et al., 2018; Swazey et al., 1993), the numerous conflicts of interest experienced in research (Williams-Jones et al., 2013), the issues of recruiting and obtaining the free and informed consent of research participants (Provencher et al., 2014; Keogh & Daly, 2009), the sometimes difficult relations between researchers and REBs (Drolet & Girard, 2020), the epistemological issues experienced in research (Drolet & Ruest, accepted; Sieber, 2004), as well as the harmful academic context in which researchers evolve, insofar as it is linked to a culture of performance and an overload of work in a context of accountability (Berg & Seeber, 2016; FQPPU, 2019) that is conducive to ethical distress and even burnout.

While the results of the study are generally in line with those of previous publications on the subject, our findings also bring new elements to the discussion and complement those already documented. In particular, our results highlight the role of systemic injustices – be they social, distributive or epistemic – within the environments in which research is carried out, at least in Canada. To summarize, the results of our study point to the fact that relationships between researchers and research participants may still raise worrying ethical issues, despite widely accepted research ethics norms and institutionalized review processes. Further, the context in which research is carried out is not only conducive to breaches of ethical norms and instances of misbehaviour or misconduct, but is also likely to be significantly detrimental to the health and well-being of researchers and research assistants. Our research also highlighted the instrumentalization and even exploitation of students and research assistants, an important and worrying social injustice given the inevitable power imbalances between students and researchers.

Moreover, in a context in which ethical issues are often discussed from a micro perspective, our study helps shed light on both the micro- and macro-level ethical dimensions of research (Bronfenbrenner, 1979; Glaser, 1994). Given that ethical issues in research are not only diverse but, above all, complex, a broader perspective that encompasses the interplay between the micro and macro dimensions can enable a better understanding of these issues and thereby support the identification of the multiple factors that may be at their origin. Triangulating the perspectives of researchers with those of REB members and research ethics experts enabled us to bring these elements to light, and thus to step back from and critique the way that research is currently conducted. To this end, attention to socio-political elements – such as the performance culture in academia, or how research funds are distributed and according to what explicit and implicit criteria – can contribute to identifying the sources of the ethical issues described above.

A contemporary culture characterized by social acceleration

The German sociologist and philosopher Rosa (2010) argues that late modernity – that is, the period between the 1980s and today – is characterized by a phenomenon of social acceleration that causes various forms of alienation in our relationship to time, space, actions, things, others and ourselves. Rosa distinguishes three types of acceleration: technical acceleration, the acceleration of social changes, and the acceleration of the rhythm of life. According to Rosa, social acceleration is the main problem of late modernity, in that the invisible social norm of doing more and faster to supposedly save time operates unchallenged at all levels of individual, collective, organizational and social life. Although we all, researchers and non-researchers alike, perceive this unspoken pressure to be ever more productive, the process of social acceleration as a new invisible social norm is our blind spot, a kind of tyrant over which we have little control. This conceptualization of contemporary culture can help us to understand the context in which research (like other professional practices) is conducted. To this end, Berg and Seeber (2016) invite faculty researchers to slow down in order to reflect better and, in the process, take care of their health and their relationships with their colleagues and students. Many women professors encourage their fellow researchers, especially young women researchers, to learn to “say No” in order to protect their mental and physical health and to remain in their academic careers (Allaire & Deschenaux, 2022). These authors also remind us of the relevance of Kahneman’s (2012) work, which demonstrates that it takes time to think analytically, thoroughly, and logically. Conversely, thinking quickly exposes humans to cognitive and implicit biases that lead to errors in thinking (e.g., in the analysis of one’s own research data or in the evaluation of grant applications or student curricula vitae).
The phenomenon of social acceleration, which pushes researchers to think faster and faster, is likely to lead to unethical, poor-quality science that can potentially harm humankind. In sum, Rosa’s invitation to contemporary critical theorists to take the problem of social acceleration seriously is particularly helpful for understanding the ethical issues of research. It provides a lens through which to view the toxic context in which research is conducted today, a view shared by the participants in our study.

As Clark and Sousa (2022) note, it is important that criteria other than the volume of researchers’ contributions be valued in research, notably quality. Ultimately, it is the value of the knowledge produced and its influence on the concrete lives of humans and other living beings that matters, not the quantity of publications. An interesting articulation of this view in research governance is seen in a change in practice by Australia’s national health research funder: researchers are now restricted to listing on their curriculum vitae only their top ten publications from the past ten years (rather than all of their publications), in order to evaluate the quality of contributions rather than their quantity. To create environments conducive to the development of quality research, it is important to challenge the phenomenon of social acceleration, which insidiously imposes a quantitative normativity that is both alienating and detrimental to the quality and ethical conduct of research. Based on our experience, we observe that the social norm of acceleration actively disfavours the conduct of empirical research on ethics in research: researchers are so busy that it is almost impossible for them to find time to participate in such studies. Further, operating in highly competitive environments while trying to respect the values and ethical principles of research creates ethical paradoxes for members of the research community. According to Malherbe (1999), an ethical paradox is a situation in which an individual is confronted by contradictory injunctions (e.g., do more, faster, and better). Eventually, ethical paradoxes lead individuals to situations of distress and burnout, or even to ethical failures (i.e., misbehaviour or misconduct), in the face of the impossibility of responding to contradictory injunctions.

Strengths and limitations of the study

The triangulation of the perceptions and experiences of different actors involved in research is a strength of our study. While there are many studies on the experiences of researchers, members of REBs and experts in research ethics are rarely given the space to discuss their views on ethical issues in research. Giving each of these stakeholders a voice and comparing their different points of view helped shed complementary light on the ethical issues that occur in research. That said, it would have been helpful to also give more space to issues experienced by students and research assistants, as the relationships between researchers and research assistants are at times very worrying, as noted by a participant, and much work still needs to be done to eliminate the exploitative situations that seem to prevail in certain research settings. In addition, no Indigenous or gender-diverse researchers participated in the study. Given the ethical issues and systemic injustices that many people from these groups face in Canada (Drolet & Goulet, 2018; Nicole & Drolet, in press), research that gives voice to these researchers would be relevant and contribute to knowledge development, and hopefully also to change in research culture.

Further, although most of the ethical issues discussed in this article may be transferable to the realities experienced by researchers in other countries, the epistemic injustice reported by Francophone researchers who persist in doing research in French in Canada – an officially bilingual country that is in practice predominantly English-speaking – is likely specific to the Canadian reality. In addition, and as mentioned above, recruitment proved exceedingly difficult, particularly amongst researchers. Despite this difficulty, we obtained data saturation for all but two themes, namely the exploitation of students and the ethical issues of research involving animals. It follows that further empirical research is needed to improve our understanding of these specific issues, as they may diverge to some extent from those documented here and will likely vary across countries and academic research contexts.

Conclusions

This study, which gave voice to researchers, REB members, and ethics experts, reveals that the ethical issues in research are related to several problematic elements, such as power imbalances and authority relations. Researchers and research assistants are subject to external pressures that give rise to integrity issues, among other ethical issues. Moreover, the current context of social acceleration influences the definition of the performance indicators valued in academic institutions and has led their members to face several ethical issues, including social, distributive, and epistemic injustices, at different steps of the research process. In this study, ten categories of ethical issues were identified, described and illustrated: (1) research integrity, (2) conflicts of interest, (3) respect for research participants, (4) lack of supervision and power imbalances, (5) individualism and performance, (6) inadequate ethical guidance, (7) social injustices, (8) distributive injustices, (9) epistemic injustices, and (10) ethical distress. The triangulation of the perspectives of the different stakeholders involved in the research process (i.e., researchers from different disciplines, REB members, research ethics experts, and one research assistant) made it possible to lift the veil on some of these ethical issues. Further, it enabled the identification of additional ethical issues, especially the systemic injustices experienced in research. To our knowledge, this is the first time that these injustices (social, distributive, and epistemic) have been clearly identified.

Finally, this study brought to the fore several problematic elements that are important to address if the research community is to develop and implement the solutions needed to resolve the diverse and transversal ethical issues that arise in research institutions. A good starting point is the rejection of the corollary norms of “publish or perish” and “do more, faster, and better” and their replacement with “publish quality instead of quantity”, which necessarily entails “do less, slower, and better”. It is also important to pay more attention to the systemic injustices within which researchers work, because these have the potential to significantly harm the academic careers of many researchers (including women researchers, early career researchers, and those belonging to racialized groups), as well as the health, well-being, and respect of students and research participants.

Acknowledgements

The team warmly thanks the participants who took part in the research and who made this study possible. Marie-Josée Drolet thanks the five research assistants who participated in the data collection and analysis: Julie-Claude Leblanc, Élie Beauchemin, Pénéloppe Bernier, Louis-Pierre Côté, and Eugénie Rose-Derouin, all students at the Université du Québec à Trois-Rivières (UQTR), two of whom were active in the writing of this article. MJ Drolet and Bryn Williams-Jones also acknowledge the financial contribution of the Social Sciences and Humanities Research Council of Canada (SSHRC), which supported this research through a grant. We would also like to thank the reviewers of this article who helped us improve it, especially by clarifying and refining our ideas.

Competing Interests and Funding

As noted in the Acknowledgements, this research was supported financially by the Social Sciences and Humanities Research Council of Canada (SSHRC).

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

  • Al-Hidabi, M. D., & Teh, P. L. (2018). Multiple Publications: The Main Reason for the Retraction of Papers in Computer Science. In K. Arai, S. Kapoor, & R. Bhatia (Eds.), Future of Information and Communication Conference (FICC): Advances in Information and Communication, Advances in Intelligent Systems and Computing (AISC), Springer, vol. 886, pp. 511–526
  • Allaire, S., & Deschenaux, F. (2022). Récits de professeurs d’université à mi-carrière. Si c’était à refaire…. Presses de l’Université du Québec
  • Aristotle (2012). Aristotle’s Nicomachean Ethics. Chicago: The University of Chicago Press
  • Bahn, S. (2012). Keeping Academic Field Researchers Safe: Ethical Safeguards. Journal of Academic Ethics, 10, 83–91. doi: 10.1007/s10805-012-9159-2
  • Balk, D. E. (1995). Bereavement Research Using Control Groups: Ethical Obligations and Questions. Death Studies, 19, 123–138. doi: 10.1080/07481189508252720
  • Beauchemin, É., Côté, L. P., Drolet, M. J., & Williams-Jones, B. (2021). Conceptualizing Ethical Issues in the Conduct of Research: Results from a Critical and Systematic Literature Review. Journal of Academic Ethics, Early Online. doi: 10.1007/s10805-021-09411-7
  • Berg, M., & Seeber, B. K. (2016). The Slow Professor. University of Toronto Press
  • Birchley, G., Huxtable, R., Murtagh, M., Meulen, R. T., Flach, P., & Gooberman-Hill, R. (2017). Smart homes, private homes? An empirical study of technology researchers’ perceptions of ethical issues in developing smart-home health technologies. BMC Medical Ethics, 18(23), 1–13. doi: 10.1186/s12910-017-0183-z
  • Blais, J., & Drolet, M. J. (2022). Les injustices sociales vécues en camp de réfugiés: les comprendre pour mieux intervenir auprès de personnes ayant séjourné dans un camp de réfugiés. Recueil annuel belge d’ergothérapie, 14, 37–48
  • Bogdan, R. C., & Biklen, S. K. (2006). Qualitative research in education: An introduction to theory and methods. Allyn & Bacon
  • Bouffard, C. (2000). Le développement des pratiques de la génétique médicale et la construction des normes bioéthiques. Anthropologie et Sociétés, 24(2), 73–90. doi: 10.7202/015650ar
  • Bronfenbrenner, U. (1979). The Ecology of Human Development: Experiments by Nature and Design. Harvard University Press
  • Bruhn, J. G., Zajac, G., Al-Kazemi, A. A., & Prescott, L. D. (2002). Moral positions and academic conduct: Parameters of tolerance for ethics failure. Journal of Higher Education, 73(4), 461–493. doi: 10.1353/jhe.2002.0033
  • Clark, A., & Sousa, B. (2022). It’s time to end Canada’s obsession with research quantity. University Affairs/Affaires universitaires, February 14th. https://www.universityaffairs.ca/career-advice/effective-successfull-happy-academic/its-time-to-end-canadas-obsession-with-research-quantity/
  • Colnerud, G. (2015). Ethical dilemmas in research in relation to ethical review: An empirical study. Research Ethics, 10(4), 238–253. doi: 10.1177/1747016114552339
  • Davison, J. (2004). Dilemmas in Research: Issues of Vulnerability and Disempowerment for the Social Worker/Researcher. Journal of Social Work Practice, 18(3), 379–393. doi: 10.1080/0265053042000314447
  • DePoy, E., & Gitlin, L. N. (2010). Introduction to Research. St. Louis: Elsevier Mosby
  • Drolet, M. J., & Goulet, M. (2018). Travailler avec des patients autochtones du Canada ? Perceptions d’ergothérapeutes du Québec des enjeux éthiques de cette pratique. Recueil annuel belge francophone d’ergothérapie, 10, 25–56
  • Drolet, M. J., & Girard, K. (2020). Les enjeux éthiques de la recherche en ergothérapie: un portrait préoccupant. Revue canadienne de bioéthique, 3(3), 21–40. doi: 10.7202/1073779ar
  • Drolet, M. J., Girard, K., & Gaudet, R. (2020). Les enjeux éthiques de l’enseignement en ergothérapie: des injustices au sein des départements universitaires. Revue canadienne de bioéthique, 3(1), 22–36
  • Drolet, M. J., & Maclure, J. (2016). Les enjeux éthiques de la pratique de l’ergothérapie: perceptions d’ergothérapeutes. Revue Approches inductives, 3(2), 166–196. doi: 10.7202/1037918ar
  • Drolet, M. J., Pinard, C., & Gaudet, R. (2017). Les enjeux éthiques de la pratique privée: des ergothérapeutes du Québec lancent un cri d’alarme. Ethica – Revue interdisciplinaire de recherche en éthique, 21(2), 173–209
  • Drolet, M. J., & Ruest, M. (2021). De l’éthique à l’ergothérapie: un cadre théorique et une méthode pour soutenir la pratique professionnelle. Québec: Presses de l’Université du Québec
  • Drolet, M. J., & Ruest, M. (accepted). Quels sont les enjeux éthiques soulevés par la recherche scientifique? In M. Lalancette & J. Luckerhoff (Eds.), Initiation au travail intellectuel et à la recherche. Québec: Presses de l’Université du Québec, 18 p.
  • Drolet, M. J., Sauvageau, A., Baril, N., & Gaudet, R. (2019). Les enjeux éthiques de la formation clinique en ergothérapie. Revue Approches inductives, 6(1), 148–179. doi: 10.7202/1060048ar
  • Fédération québécoise des professeures et des professeurs d’université (FQPPU) (2019). Enquête nationale sur la surcharge administrative du corps professoral universitaire québécois. Principaux résultats et pistes d’action. Montréal: FQPPU
  • Fortin, M. H. (2010). Fondements et étapes du processus de recherche. Méthodes quantitatives et qualitatives. Montréal, QC: Chenelière éducation
  • Fraser, D. M. (1997). Ethical dilemmas and practical problems for the practitioner researcher. Educational Action Research, 5(1), 161–171. doi: 10.1080/09650799700200014
  • Fraser, N. (2011). Qu’est-ce que la justice sociale? Reconnaissance et redistribution. La Découverte
  • Fricker, M. (2007). Epistemic Injustice: Power and the Ethics of Knowing. Oxford University Press
  • Giorgi, A. (1997). De la méthode phénoménologique utilisée comme mode de recherche qualitative en sciences humaines: théories, pratique et évaluation. In J. Poupart, L. H. Groulx, J. P. Deslauriers, et al. (Eds.), La recherche qualitative: enjeux épistémologiques et méthodologiques (pp. 341–364). Boucherville, QC: Gaëtan Morin
  • Giorgini, V., Mecca, J. T., Gibson, C., Medeiros, K., Mumford, M. D., Connelly, S., & Devenport, L. D. (2016). Researcher Perceptions of Ethical Guidelines and Codes of Conduct. Accountability in Research, 22(3), 123–138. doi: 10.1080/08989621.2014.955607
  • Glaser, J. W. (1994). Three Realms of Ethics: Individual, Institutional, Societal. Theoretical Model and Case Studies. Kansas City: Sheed & Ward
  • Godrie, B., & Dos Santos, M. (2017). Présentation: inégalités sociales, production des savoirs et de l’ignorance. Sociologie et sociétés, 49(1), 7. doi: 10.7202/1042804ar
  • Hammell, K. W., Carpenter, C., & Dyck, I. (2000). Using Qualitative Research: A Practical Introduction for Occupational and Physical Therapists. Edinburgh: Churchill Livingstone
  • Henderson, M., Johnson, N. F., & Auld, G. (2013). Silences of ethical practice: dilemmas for researchers using social media. Educational Research and Evaluation, 19(6), 546–560. doi: 10.1080/13803611.2013.805656
  • Husserl, E. (1970). The Crisis of European Sciences and Transcendental Phenomenology. Evanston, IL: Northwestern University Press
  • Husserl, E. (1999). The train of thoughts in the lectures. In E. C. Polifroni & M. Welch (Eds.), Perspectives on Philosophy of Science in Nursing. Philadelphia, PA: Lippincott
  • Hunt, S. D., Chonko, L. B., & Wilcox, J. B. (1984). Ethical problems of marketing researchers. Journal of Marketing Research, 21, 309–324. doi: 10.1177/002224378402100308
  • Hunt, M. R., & Carnevale, F. A. (2011). Moral experience: A framework for bioethics research. Journal of Medical Ethics, 37(11), 658–662. doi: 10.1136/jme.2010.039008
  • Jameton, A. (1984). Nursing Practice: The Ethical Issues. Englewood Cliffs: Prentice-Hall
  • Jarvis, K. (2017). Dilemmas in International Research and the Value of Practical Wisdom. Developing World Bioethics, 17(1), 50–58. doi: 10.1111/dewb.12121
  • Kahneman, D. (2012). Système 1, système 2: les deux vitesses de la pensée. Paris: Flammarion
  • Keogh, B., & Daly, L. (2009). The ethics of conducting research with mental health service users. British Journal of Nursing, 18(5), 277–281. doi: 10.12968/bjon.2009.18.5.40539
  • Lierville, A. L., Grou, C., & Pelletier, J. F. (2015). Enjeux éthiques potentiels liés aux partenariats patients en psychiatrie: état de situation à l’Institut universitaire en santé mentale de Montréal. Santé mentale au Québec, 40(1), 119–134. doi: 10.7202/1032386ar
  • Lynöe, N., Sandlund, M., & Jacobsson, L. (1999). Research ethics committees: A comparative study of assessment of ethical dilemmas. Scandinavian Journal of Public Health, 27(2), 152–159. doi: 10.1177/14034948990270020401
  • Malherbe, J. F. (1999). Compromis, dilemmes et paradoxes en éthique clinique. Anjou: Éditions Fides
  • McGinn, R. (2013). Discernment and denial: Nanotechnology researchers’ recognition of ethical responsibilities related to their work. NanoEthics, 7, 93–105. doi: 10.1007/s11569-013-0174-6
  • Mills, C. W. (2017). Black Rights/White Wrongs: The Critique of Racial Liberalism. Oxford University Press
  • Miyazaki, A. D., & Taylor, K. A. (2008). Researcher interaction biases and business ethics research: Respondent reactions to researcher characteristics. Journal of Business Ethics, 81(4), 779–795. doi: 10.1007/s10551-007-9547-5
  • Mondain, N., & Bologo, E. (2009). L’intentionnalité du chercheur dans ses pratiques de production des connaissances: les enjeux soulevés par la construction des données en démographie et santé en Afrique. Cahiers de recherche sociologique, 48, 175–204. doi: 10.7202/039772ar
  • Nicole, M., & Drolet, M. J. (in press). Fitting transphobia and cisgenderism in occupational therapy. Occupational Therapy Now
  • Pope, K. S., & Vetter, V. A. (1992). Ethical dilemmas encountered by members of the American Psychological Association: A national survey. The American Psychologist, 47(3), 397–411. doi: 10.1037/0003-066X.47.3.397
  • Provencher, V., Mortenson, W. B., Tanguay-Garneau, L., Bélanger, K., & Dagenais, M. (2014). Challenges and strategies pertaining to recruitment and retention of frail elderly in research studies: A systematic review. Archives of Gerontology and Geriatrics, 59(1), 18–24. doi: 10.1016/j.archger.2014.03.006
  • Rawls, J. (1971). A Theory of Justice. Harvard University Press
  • Resnik, D. B., & Elliott, K. C. (2016). The Ethical Challenges of Socially Responsible Science. Accountability in Research, 23(1), 31–46. doi: 10.1080/08989621.2014.1002608
  • Rosa, H. (2010). Accélération et aliénation. Vers une théorie critique de la modernité tardive. Paris: La Découverte
  • Sen, A. K. (2011). The Idea of Justice. The Belknap Press of Harvard University Press
  • Sen, A. K. (1995). Inequality Reexamined. Oxford University Press
  • Sieber, J. E. (2004). Empirical Research on Research Ethics. Ethics & Behavior, 14(4), 397–412. doi: 10.1207/s15327019eb1404_9
  • Sigmon, S. T. (1995). Ethical practices and beliefs of psychopathology researchers. Ethics & Behavior, 5(4), 295–309. doi: 10.1207/s15327019eb0504_1
  • Swazey, J. P., Anderson, M. S., & Lewis, K. S. (1993). Ethical Problems in Academic Research. American Scientist, 81(6), 542–553
  • Swisher, L. L., Arsalanian, L. E., & Davis, C. M. (2005). The realm-individual-process-situation (RIPS) model of ethical decision-making. HPA Resource, 5(3), 3–8
  • Tri-Council Policy Statement (TCPS2) (2018). Ethical Conduct for Research Involving Humans. Government of Canada, Secretariat on Responsible Conduct of Research. https://ethics.gc.ca/eng/documents/tcps2-2018-en-interactive-final.pdf
  • Thomas, S. P., & Pollio, H. R. (2002). Listening to Patients: A Phenomenological Approach to Nursing Research and Practice. New York: Springer Publishing Company
  • Wiegand, D. L., & Funk, M. (2012). Consequences of clinical situations that cause critical care nurses to experience moral distress. Nursing Ethics, 19(4), 479–487. doi: 10.1177/0969733011429342
  • Williams-Jones, B., Potvin, M. J., Mathieu, G., & Smith, E. (2013). Barriers to research on research ethics review and conflicts of interest. IRB: Ethics & Human Research, 35(5), 14–20
  • Young, I. M. (2011). Justice and the Politics of Difference. Princeton University Press
The Global Governance of Artificial Intelligence: Next Steps for Empirical and Normative Research

Jonas Tallberg, Eva Erman, Markus Furendal, Johannes Geith, Mark Klamberg, Magnus Lundgren, The Global Governance of Artificial Intelligence: Next Steps for Empirical and Normative Research, International Studies Review, Volume 25, Issue 3, September 2023, viad040, https://doi.org/10.1093/isr/viad040

Artificial intelligence (AI) represents a technological upheaval with the potential to change human society. Because of its transformative potential, AI is increasingly becoming subject to regulatory initiatives at the global level. Yet, so far, scholarship in political science and international relations has focused more on AI applications than on the emerging architecture of global AI regulation. The purpose of this article is to outline an agenda for research into the global governance of AI. The article distinguishes between two broad perspectives: an empirical approach, aimed at mapping and explaining global AI governance; and a normative approach, aimed at developing and applying standards for appropriate global AI governance. The two approaches offer questions, concepts, and theories that are helpful in gaining an understanding of the emerging global governance of AI. Conversely, exploring AI as a regulatory issue offers a critical opportunity to refine existing general approaches to the study of global governance.

Artificial intelligence (AI) represents a technological upheaval with the potential to transform human society. It is increasingly viewed by states, non-state actors, and international organizations (IOs) as an area of strategic importance, economic competition, and risk management. While AI development is concentrated in a handful of corporations in the United States, China, and Europe, the long-term consequences of AI implementation will be global. Autonomous weapons will have consequences for armed conflicts and power balances; automation will drive changes in job markets and global supply chains; generative AI will affect content production and challenge copyright systems; and competition around the scarce hardware needed to train AI systems will shape relations among both states and businesses. While the technology is still only lightly regulated, state and non-state actors are beginning to negotiate global rules and norms to harness and spread AI’s benefits while limiting its negative consequences. For example, in the past few years, the United Nations Educational, Scientific and Cultural Organization (UNESCO) adopted recommendations on the ethics of AI, the European Union (EU) negotiated comprehensive AI legislation, and the Group of Seven (G7) called for developing global technical standards on AI.

Our purpose in this article is to outline an agenda for research into the global governance of AI.1 Advancing research on the global regulation of AI is imperative. The rules and arrangements that are currently being developed to regulate AI will have a considerable impact on power differentials, the distribution of economic value, and the political legitimacy of AI governance for years to come. Yet there is currently little systematic knowledge on the nature of global AI regulation, the interests influential in this process, and the extent to which emerging arrangements can manage AI’s consequences in a just and democratic manner. While poised for rapid expansion, research on the global governance of AI remains in its early stages (but see Maas 2021; Schmitt 2021).

This article complements earlier calls for research on AI governance in general (Dafoe 2018; Butcher and Beridze 2019; Taeihagh 2021; Büthe et al. 2022) by focusing specifically on the need for systematic research into the global governance of AI. It submits that global efforts to regulate AI have reached a stage where it is necessary to start asking fundamental questions about the characteristics, sources, and consequences of these governance arrangements.

We distinguish between two broad approaches for studying the global governance of AI: an empirical perspective, informed by a positive ambition to map and explain AI governance arrangements; and a normative perspective, informed by philosophical standards for evaluating the appropriateness of AI governance arrangements. Both perspectives build on established traditions of research in political science, international relations (IR), and political philosophy, and offer questions, concepts, and theories that are helpful as we try to better understand new types of governance in world politics.

We argue that empirical and normative perspectives together offer a comprehensive agenda of research on the global governance of AI. Pursuing this agenda will help us to better understand characteristics, sources, and consequences of the global regulation of AI, with potential implications for policymaking. Conversely, exploring AI as a regulatory issue offers a critical opportunity to further develop concepts and theories of global governance as they confront the particularities of regulatory dynamics in this important area.

We advance this argument in three steps. First, we argue that AI, because of its economic, political, and social consequences, presents a range of governance challenges. While these challenges initially were taken up mainly by national authorities, recent years have seen a dramatic increase in governance initiatives by IOs. These efforts to regulate AI at global and regional levels are likely driven by several considerations, among them AI applications creating cross-border externalities that demand international cooperation and AI development taking place through transnational processes requiring transboundary regulation. Yet, so far, existing scholarship on the global governance of AI has been mainly descriptive or policy-oriented, rather than focused on theory-driven positive and normative questions.

Second, we argue that an empirical perspective can help to shed light on key questions about characteristics and sources of the global governance of AI. Based on existing concepts, the emerging governance architecture for AI can be described as a regime complex—a structure of partially overlapping and diverse governance arrangements without a clearly defined central institution or hierarchy. IR theories are useful in directing our attention to the role of power, interests, ideas, and non-state actors in the construction of this regime complex. At the same time, the specific conditions of AI governance suggest ways in which global governance theories may be usefully developed.

Third, we argue that a normative perspective raises crucial questions regarding the nature and implications of global AI governance. These questions pertain both to procedure (the process for developing rules) and to outcome (the implications of those rules). A normative perspective suggests that procedures and outcomes in global AI governance need to be evaluated in terms of how they meet relevant normative ideals, such as democracy and justice. How could the global governance of AI be organized to live up to these ideals? To what extent are emerging arrangements minimally democratic and fair in their procedures and outcomes? Conversely, the global governance of AI raises novel questions for normative theorizing, for instance, by invoking aims for AI to be “trustworthy,” “value aligned,” and “human centered.”

Advancing this agenda of research is important for several reasons. First, making more systematic use of social science concepts and theories will help us to gain a better understanding of various dimensions of the global governance of AI. Second, as a novel case of governance involving unique features, AI raises questions that will require us to further refine existing concepts and theories of global governance. Third, findings from this research agenda will be of importance for policymakers, by providing them with evidence on international regulatory gaps, the interests that have influenced current arrangements, and the normative issues at stake when developing this regime complex going forward.

The remainder of this article is structured in three substantive sections. The first section explains why AI has become a concern of global governance. The second section suggests that an empirical perspective can help to shed light on the characteristics and drivers of the global governance of AI. The third section discusses the normative challenges posed by global AI governance, focusing specifically on concerns related to democracy and justice. The article ends with a conclusion that summarizes our proposed agenda for future research on the global governance of AI.

Why does AI pose a global governance challenge? In this section, we answer this question in three steps. We begin by briefly describing the spread of AI technology in society, then illustrate the attempts to regulate AI at various levels of governance, and finally explain why global regulatory initiatives are becoming increasingly common. We argue that the growth of global governance initiatives in this area stems from AI applications creating cross-border externalities that demand international cooperation and from AI development taking place through transnational processes requiring transboundary regulation.

Due to its amorphous nature, AI escapes easy definition. Instead, the definition of AI tends to depend on the purposes and audiences of the research (Russell and Norvig 2020). In the most basic sense, machines are considered intelligent when they can perform tasks that would require intelligence if done by humans (McCarthy et al. 1955). This could happen through the guiding hand of humans, in “expert systems” that follow complex decision trees. It could also happen through “machine learning,” where AI systems are trained to categorize texts, images, sounds, and other data, using such categorizations to make autonomous decisions when confronted with new data. More specific definitions require that machines display a level of autonomy and capacity for learning that enables rational action. For instance, the EU’s High-Level Expert Group on AI has defined AI as “systems that display intelligent behaviour by analysing their environment and taking actions—with some degree of autonomy—to achieve specific goals” (2019, 1). Yet, illustrating the potential for conceptual controversy, this definition has been criticized for denoting both too many and too few technologies as AI (Heikkilä 2022a).

AI technology is already implemented in a wide variety of areas in everyday life and the economy at large. For instance, the conversational chatbot ChatGPT is estimated to have reached 100 million users just two months after its launch at the end of 2022 (Hu 2023). AI applications enable new automation technologies, with subsequent positive or negative effects on the demand for labor, employment, and economic equality (Acemoglu and Restrepo 2020). Military AI is integral to lethal autonomous weapons systems (LAWS), whereby machines take autonomous decisions in warfare and battlefield targeting (Rosert and Sauer 2018). Many governments and public agencies have already implemented AI in their daily operations in order to more efficiently evaluate welfare eligibility, flag potential fraud, profile suspects, make risk assessments, and engage in mass surveillance (Saif et al. 2017; Powers and Ganascia 2020; Berk 2021; Misuraca and van Noordt 2022, 38).

Societies face significant governance challenges in relation to the implementation of AI. One type of challenge arises when AI systems function poorly, such as when applications involving some degree of autonomous decision-making produce technical failures with real-world implications. The “Robodebt” scheme in Australia, for instance, was designed to detect mistaken social security payments, but the Australian government ultimately had to rescind 400,000 wrongfully issued welfare debts (Henriques-Gomes 2020). Similarly, Dutch authorities recently implemented an algorithm that pushed tens of thousands of families into poverty after mistakenly requiring them to repay child benefits, ultimately forcing the government to resign (Heikkilä 2022b).

Another type of governance challenge arises when AI systems function as intended but produce impacts whose consequences may be regarded as problematic. For instance, the inherent opacity of AI decision-making challenges expectations on transparency and accountability in public decision-making in liberal democracies (Burrell 2016; Erman and Furendal 2022a). Autonomous weapons raise critical ethical and legal issues (Rosert and Sauer 2019). AI applications for surveillance in law enforcement give rise to concerns of individual privacy and human rights (Rademacher 2019). AI-driven automation involves changes in labor markets that are painful for parts of the population (Acemoglu and Restrepo 2020). Generative AI upends conventional ways of producing creative content and raises new copyright and data security issues (Metz 2022).

More broadly, AI presents a governance challenge due to its effects on economic competitiveness, military security, and personal integrity, with consequences for states and societies. In this respect, AI may not be radically different from earlier general-purpose technologies, such as the steam engine, electricity, nuclear power, and the internet (Frey 2019). From this perspective, it is not the novelty of AI technology that makes it a pressing issue to regulate but rather the anticipation that AI will lead to large-scale changes and become a source of power for state and societal actors.

Challenges such as these have led to a rapid expansion in recent years of efforts to regulate AI at different levels of governance. The OECD AI Policy Observatory records more than 700 national AI policy initiatives from 60 countries and territories (OECD 2021). Earlier research into the governance of AI has therefore naturally focused mostly on the national level (Radu 2021; Roberts et al. 2021; Taeihagh 2021). However, a large number of governance initiatives have also been undertaken at the global level, and many more are underway. According to an ongoing inventory of AI regulatory initiatives by the Council of Europe, IOs overtook national authorities as the main source of such initiatives in 2020 (Council of Europe 2023). Figure 1 visualizes this trend.

Figure 1: Origins of AI governance initiatives, 2015–2022. Source: Council of Europe (2023).

According to this source, national authorities launched 170 initiatives from 2015 to 2022, while IOs put in place 210 initiatives during the same period. Over time, the share of regulatory initiatives emanating from IOs has thus grown to surpass the share resulting from national authorities. Examples of the former include the OECD Principles on Artificial Intelligence agreed in 2019, the UNESCO Recommendation on Ethics of AI adopted in 2021, and the EU’s ongoing negotiations on the EU AI Act. In addition, several governance initiatives emanate from the private sector, civil society, and multistakeholder partnerships. In the next section, we will provide a more developed characterization of these global regulatory initiatives.

Two concerns likely explain why AI increasingly is becoming subject to governance at the global level. First, AI creates externalities that do not follow national borders and whose regulation requires international cooperation. China’s Artificial Intelligence Development Plan, for instance, clearly states that the country is using AI as a leapfrog technology in order to enhance national competitiveness (Roberts et al. 2021). Since states with less regulation might gain a competitive edge when developing certain AI applications, there is a risk that such strategies create a regulatory race to the bottom. International cooperation that creates a level playing field could thus be said to be in the interest of all parties.

Second, the development of AI technology is a cross-border process carried out by transnational actors—multinational firms in particular. Big tech corporations, such as Google, Meta, or the Chinese drone maker DJI, are investing vast sums into AI development. The innovations of hardware manufacturers like Nvidia enable breakthroughs but depend on complex global supply chains, and international research labs such as DeepMind regularly present cutting-edge AI applications. Since the private actors that develop AI can operate across multiple national jurisdictions, the efforts to regulate AI development and deployment also need to be transboundary. Only by introducing common rules can states ensure that AI businesses encounter similar regulatory environments, which both facilitates transboundary AI development and reduces incentives for companies to shift to countries with laxer regulation.

Successful global governance of AI could help realize many of the potential benefits of the technology while mitigating its negative consequences. For AI to contribute to increased economic productivity, for instance, there needs to be predictable and clear regulation as well as global coordination around standards that prevent competition between parallel technical systems. Conversely, a failure to provide suitable global governance could lead to substantial risks. The intentional misuse of AI technology may undermine trust in institutions, and if left unchecked, the positive and negative externalities created by automation technologies might fall unevenly across different groups. Race dynamics similar to those that arose around nuclear technology in the twentieth century—where technological leadership created large benefits—might lead international actors and private firms to overlook safety issues and create potentially dangerous AI applications (Dafoe 2018; Future of Life Institute 2023). Hence, policymakers face the task of disentangling beneficial from malicious consequences, fostering the former while regulating the latter. Given the speed at which AI is developed and implemented, governance also risks constantly being one step behind the technological frontier.

A prime example of how AI presents a global governance challenge is the effort to regulate military AI, in particular autonomous weapons capable of identifying and eliminating a target without the involvement of a remote human operator (Hernandez 2021). Both the development and the deployment of military applications with autonomous capabilities transcend national borders. Multinational defense companies are at the forefront of developing autonomous weapons systems. Reports suggest that such autonomous weapons are now beginning to be used in armed conflicts (Trager and Luca 2022). The development and deployment of autonomous weapons involve the types of competitive dynamics and transboundary consequences identified above. In addition, they raise specific concerns with respect to accountability and dehumanization (Sparrow 2007; Stop Killer Robots 2023). For these reasons, states have begun to explore the potential for joint global regulation of autonomous weapons systems. The principal forum is the Group of Governmental Experts (GGE) within the Convention on Certain Conventional Weapons (CCW). Yet progress in these negotiations is slow, as the major powers approach this issue with competing interests in mind, illustrating the challenges involved in developing joint global rules.

The example of autonomous weapons further illustrates how the global governance of AI raises urgent empirical and normative questions for research. On the empirical side, these developments invite researchers to map emerging regulatory initiatives, such as those within the CCW, and to explain why these particular frameworks become dominant. What are the principal characteristics of global regulatory initiatives in the area of autonomous weapons, and how do power differentials, interest constellations, and principled ideas influence those rules? On the normative side, these developments invite researchers to address key normative questions raised by the development and deployment of autonomous weapons. What are the key normative issues at stake in the regulation of autonomous weapons, both with respect to the process through which such rules are developed and with respect to the consequences of these frameworks? To what extent are existing normative ideals and frameworks, such as just war theory, applicable to the governing of military AI (Roach and Eckert 2020)? Despite the global governance challenge of AI development and use, research on this topic is still in its infancy (but see Maas 2021; Schmitt 2021). In the remainder of this article, we therefore present an agenda for research into the global governance of AI. We begin by outlining an agenda for positive empirical research on the global governance of AI and then suggest an agenda for normative philosophical research.

An empirical perspective on the global governance of AI suggests two main questions: How may we describe the emerging global governance of AI? And how may we explain the emerging global governance of AI? In this section, we argue that concepts and theories drawn from the general study of global governance will be helpful as we address these questions, but also that AI, conversely, raises novel issues that point to the need for new or refined theories. Specifically, we show how global AI governance may be mapped along several conceptual dimensions and submit that theories invoking power dynamics, interests, ideas, and non-state actors have explanatory promise.

Mapping AI Governance

A key priority for empirical research on the global governance of AI is descriptive: Where and how are new regulatory arrangements emerging at the global level? What features characterize the emergent regulatory landscape? In answering such questions, researchers can draw on scholarship on international law and IR, which have conceptualized mechanisms of regulatory change and drawn up analytical dimensions to map and categorize the resulting regulatory arrangements.

Any mapping exercise must consider the many different ways in which global AI regulation may emerge and evolve. Previous research suggests that legal development may take place in at least three distinct ways. To begin with, existing rules could be reinterpreted to also cover AI (Maas 2021). For example, the principles of distinction, proportionality, and precaution in international humanitarian law could be extended, via reinterpretation, to apply to LAWS, without changing the legal source. Another manner in which new AI regulation may appear is via “add-ons” to existing rules. For example, in the area of global regulation of autonomous vehicles, AI-related provisions were added to the 1968 Vienna Road Traffic Convention through an amendment in 2015 (Kunz and Ó hÉigeartaigh 2020). Finally, AI regulation may appear as a completely new framework, either through new state behavior that results in customary international law or through a new legal act or treaty (Maas 2021, 96). Here, one example of regulating AI through a new framework is the aforementioned EU AI Act, which would take the form of a new EU regulation.

Once researchers have mapped emerging regulatory arrangements, a central task will be to categorize them. Prior scholarship suggests that regulatory arrangements may be fruitfully analyzed in terms of five key dimensions (cf. Koremenos et al. 2001; Wahlgren 2022, 346–347). A first dimension is whether regulation is horizontal or vertical. A horizontal regulation covers several policy areas, whereas a vertical regulation is a delimited legal framework covering one specific policy area or application. In the field of AI, emergent governance appears to populate both ends of this spectrum. For example, the proposed EU AI Act (2021), the UNESCO Recommendations on the Ethics of AI (2021), and the OECD Principles on AI (2019), which are not specific to any particular AI application or field, would classify as attempts at horizontal regulation. When it comes to vertical regulation, there are fewer existing examples, but discussions on a new protocol on LAWS within the CCW signal that this type of regulation is likely to become more important in the future (Maas 2019a).

A second dimension runs from centralization to decentralization. Governance is centralized if there is a single, authoritative institution at the heart of a regime, such as in trade, where the World Trade Organization (WTO) fulfills this role. In contrast, decentralized arrangements are marked by parallel and partly overlapping institutions, such as in the governance of the environment, the internet, or genetic resources (cf. Raustiala and Victor 2004). While some IOs with universal membership, such as UNESCO, have taken initiatives relating to AI governance, no institution has assumed the role of core regulatory body at the global level. Rather, the proliferation of parallel initiatives, across levels and regions, lends weight to the conclusion that contemporary arrangements for the global governance of AI are strongly decentralized (Cihon et al. 2020a).

A third dimension is the continuum from hard law to soft law. While domestic statutes and treaties may be described as hard law, soft law is associated with guidelines of conduct, recommendations, resolutions, standards, opinions, ethical principles, declarations, board decisions, codes of conduct, negotiated agreements, and a large number of additional normative mechanisms (Abbott and Snidal 2000; Wahlgren 2022). Even though such soft documents may initially have been drafted as non-legal texts, they may in actual practice acquire considerable strength in structuring international relations (Orakhelashvili 2019). While some initiatives to regulate AI classify as hard law, including the EU’s AI Act, Burri (2017) suggests that AI governance is likely to be dominated by “supersoft law,” noting that there are currently numerous processes underway creating global standards outside traditional international law-making fora. In a phenomenon that might be described as “bottom-up law-making” (Koven Levit 2017), states and IOs are bypassed, creating norms that defy traditional categories of international law (Burri 2017).

A fourth dimension concerns private versus public regulation. The concept of private regulation partly overlaps with that of soft law, to the extent that private actors develop non-binding guidelines (Wahlgren 2022). Significant harmonization of standards may be developed by private standardization bodies, such as the IEEE (Ebers 2022). Public authorities may regulate the responsibility of manufacturers through tort law and product liability law (Greenstein 2022). Even though contracts are originally matters between private parties, some contractual matters may still be regulated and enforced by law (Ubena 2022).

A fifth dimension relates to the division between military and non-military regulation. Several policymakers and scholars describe how military AI is about to escalate into a strategic arms race between major powers such as the United States and China, similar to the nuclear arms race during the Cold War (cf. Petman 2017; Thompson and Bremmer 2018; Maas 2019a). The process in the CCW Group of Governmental Experts on the regulation of LAWS is probably the largest single negotiation on AI (Maas 2019b), next to the negotiations on the EU AI Act. The zero-sum logic that appears to exist between states in the area of national security, prompting a military AI arms race, may not be applicable to the same extent to non-military applications of AI, potentially enabling a clearer focus on realizing positive-sum gains through regulation.

These five dimensions can provide guidance as researchers take up the task of mapping and categorizing global AI regulation. While the evidence is preliminary, in its present form, the global governance of AI must be understood as combining horizontal and vertical elements, predominantly leaning toward soft law, being heavily decentralized, primarily public in nature, and mixing military and non-military regulation. This multi-faceted and non-hierarchical nature of global AI governance suggests that it is best characterized as a regime complex, or a “larger web of international rules and regimes” (Alter and Meunier 2009, 13; Keohane and Victor 2011), rather than as a single, discrete regime.

If global AI governance can be understood as a regime complex, which some researchers already claim (Cihon et al. 2020a), future scholarship should look for theoretical and methodological inspiration in research on regime complexity in other policy fields. This research has found that regime complexes are characterized by path dependence, as existing rules shape the formulation of new rules; venue shopping, as actors seek to steer regulatory efforts to the fora most advantageous to their interests; and legal inconsistencies, as rules emerge from fractious and overlapping negotiations in parallel processes (Raustiala and Victor 2004). Scholars have also considered the design of regime complexes (Eilstrup-Sangiovanni and Westerwinter 2021), institutional overlap among bodies in regime complexes (Haftel and Lenz 2021), and actors’ forum-shopping within regime complexes (Verdier 2022). Establishing whether these patterns and dynamics are key features also of the AI regime complex stands out as an important priority in future research.

Explaining AI Governance

As our understanding of the empirical patterns of global AI governance grows, a natural next step is to turn to explanatory questions. How may we explain the emerging global governance of AI? What accounts for variation in governance arrangements, and how do they compare with those in other policy fields, such as environment, security, or trade? Political science and IR offer a plethora of useful theoretical tools that can provide insights into the global governance of AI. At the same time, however, the novelty of AI as a governance challenge raises new questions that may require novel or refined theories. Thus far, existing research on the global governance of AI has been primarily concerned with descriptive tasks and has largely fallen short of engaging with explanatory questions.

We illustrate the potential of general theories to help explain global AI governance by pointing to three broad explanatory perspectives in IR (Martin and Simmons 2012)—power, interests, and ideas—which have served as primary sources of theorizing on global governance arrangements in other policy fields. These perspectives have conventionally been associated with the paradigmatic theories of realism, liberalism, and constructivism, respectively, but like much of the contemporary IR discipline, we prefer to formulate them as non-paradigmatic sources for mid-level theorizing of more specific phenomena (cf. Lake 2013). We focus our discussion on how accounts privileging power, interests, and ideas have explained the origins and designs of IOs and how they may help us explain wider patterns of global AI governance. We then discuss how theories of non-state actors and regime complexity, in particular, offer promising avenues for future research into the global governance of AI. Research fields like science and technology studies (e.g., Jasanoff 2016) or the political economy of international cooperation (e.g., Gilpin 1987) can provide additional theoretical insights, but these literatures are not discussed in detail here.

A first broad explanatory perspective is provided by power-centric theories, privileging the role of major states, capability differentials, and distributive concerns. While conventional realism emphasizes how states’ concern for relative gains impedes substantive international cooperation, viewing IOs as epiphenomenal reflections of underlying power relations (Mearsheimer 1994), more refined power-oriented theories have highlighted how powerful states seek to design regulatory contexts that favor their preferred outcomes (Gruber 2000) or shape the direction of IOs using informal influence (Stone 2011; Dreher et al. 2022).

In research on global AI governance, power-oriented perspectives are likely to prove particularly fruitful in investigating how great-power contestation shapes where and how the technology will be regulated. Focusing on the major AI powerhouses, scholars have started to analyze the contrasting regulatory strategies and policies of the United States, China, and the EU, often emphasizing issues of strategic competition, military balance, and rivalry (Kania 2017; Horowitz et al. 2018; Payne 2018, 2021; Johnson 2019; Jensen et al. 2020). Here, power-centric theories could help understand the apparent emphasis on military AI in both the United States and China, as witnessed by the recent establishment of a US National Security Commission on AI and China’s ambitious plans of integrating AI into its military forces (Ding 2018). The EU, for its part, is negotiating the comprehensive AI Act, seeking to use its market power to set a European standard for AI that subsequently can become the global standard, as it previously did with its GDPR law on data protection and privacy (Schmitt 2021). Given the primacy of these three actors in AI development, their preferences and outlook regarding regulatory solutions will remain a key research priority.

Power-based accounts are also likely to provide theoretical inspiration for research on AI governance in the domain of security and military competition. Some scholars are seeking to assess the implications of AI for strategic rivalries, and their possible regulation, by drawing on historical analogies (Leung 2019; see also Drezner 2019). Observing that, from a strategic standpoint, military AI exhibits some similarities to the problems posed by nuclear weapons, researchers have examined whether lessons from nuclear arms control have applicability in the domain of AI governance. For example, Maas (2019a) argues that historical experience suggests that the proliferation of military AI can potentially be slowed down via institutionalization, while Zaidi and Dafoe (2021), in a study of the Baruch Plan for Nuclear Weapons, contend that fundamental strategic obstacles—including mistrust and fear of exploitation by other states—need to be overcome to make regulation viable. This line of investigation can be extended by assessing other historical analogies, such as the negotiations that led to the Strategic Arms Limitation Talks (SALT) in 1972 or more recent efforts to contain the spread of nuclear weapons, where power-oriented factors have shown continued analytical relevance (e.g., Ruzicka 2018).

A second major explanatory approach is provided by the family of theoretical accounts that highlight how international cooperation is shaped by shared interests and functional needs (Keohane 1984; Martin 1992). A key argument in rational functionalist scholarship is that states are likely to establish IOs to overcome barriers to cooperation—such as information asymmetries, commitment problems, and transaction costs—and that the design of these institutions will reflect the underlying problem structure, including the degree of uncertainty and the number of involved actors (e.g., Koremenos et al. 2001; Hawkins et al. 2006; Koremenos 2016).

Applied to the domain of AI, these approaches would bring attention to how the functional characteristics of AI as a governance problem shape the regulatory response. They would also emphasize the investigation of the distribution of interests and the possibility of efficiency gains from cooperation around AI governance. The contemporary proliferation of partnerships and initiatives on AI governance points to the suitability of this theoretical approach, and research has taken some preliminary steps, surveying state interests and their alignment (e.g., Campbell 2019 ; Radu 2021 ). However, a systematic assessment of how the distribution of interests would explain the nature of emerging governance arrangements, both in the aggregate and at the constituent level, has yet to be undertaken.

A third broad explanatory perspective is provided by theories emphasizing the role of history, norms, and ideas in shaping global governance arrangements. In contrast to accounts based on power and interests, this line of scholarship, often drawing on sociological assumptions and theory, focuses on how institutional arrangements are embedded in a wider ideational context, which itself is subject to change. This perspective has generated powerful analyses of how societal norms influence states’ international behavior (e.g., Acharya and Johnston 2007 ), how norm entrepreneurs play an active role in shaping the origins and diffusion of specific norms (e.g., Finnemore and Sikkink 1998 ), and how IOs socialize states and other actors into specific norms and behaviors (e.g., Checkel 2005 ).

Examining the extent to which domestic and societal norms shape discussions on global governance arrangements stands out as a particularly promising area of inquiry. Comparative research on national ethical standards for AI has already indicated significant cross-country convergence, pointing to a cluster of normative principles that are likely to inspire governance frameworks in many parts of the world (e.g., Jobin et al. 2019). A closely related research agenda concerns norm entrepreneurship in AI governance. Here, preliminary findings suggest that civil society organizations have played a role in advocating norms relating to fundamental rights in the formulation of EU AI policy and other processes (Ulnicane 2021). Finally, once AI governance structures have solidified further, scholars can begin to draw on norms-oriented scholarship to design strategies for the analysis of how those governance arrangements may play a role in socialization.

In light of the particularities of AI and its political landscape, we expect that global governance scholars will be motivated to refine and adapt these broad theoretical perspectives to address new questions and conditions. For example, considering China’s AI sector-specific resources and expertise, power-oriented theories will need to grapple with questions of institutional creation and modification occurring under a distribution of power that differs significantly from the Western-centric processes that underpin most existing studies. Similarly, rational functionalist scholars will need to adapt their tools to address questions of how the highly asymmetric distribution of AI capabilities—in particular between producers, which are few, concentrated, and highly resourced, and users and subjects, which are many, dispersed, and less resourced—affects the formation of state interests and bargaining around institutional solutions. For their part, norm-oriented theories may need to be refined to capture the role of previously understudied sources of normative and ideational content, such as formal and informal networks of computer programmers, which, on account of their expertise, have been influential in setting the direction of norms surrounding several AI technologies.

We expect that these broad theoretical perspectives will continue to inspire research on the global governance of AI, in particular for tailored, mid-level theorizing in response to new questions. However, a fully developed research agenda will gain from complementing these theories, which emphasize particular independent variables (power, interests, and norms), with theories and approaches that focus on particular issues, actors, and phenomena. There is an abundance of theoretical perspectives that can be helpful in this regard, including research on the relationship between science and politics (Haas 1992; Jasanoff 2016), the political economy of international cooperation (Gilpin 1987; Frieden et al. 2017), the complexity of global governance (Raustiala and Victor 2004; Eilstrup-Sangiovanni and Westerwinter 2021), and the role of non-state actors (Risse 2012; Tallberg et al. 2013). We focus here on the latter two: theories of regime complexity, which have grown to become a mainstream approach in global governance scholarship, and theories of non-state actors, which provide powerful tools for understanding how private organizations influence regulatory processes. Both literatures hold considerable promise for advancing scholarship on the global governance of AI beyond its current state.

As concluded above, the current structure of global AI governance fits the description of a regime complex. Thus, approaching AI governance through this theoretical lens, understanding it as a larger web of rules and regulations, can open new avenues of research (see Maas 2021 for a pioneering effort). One priority is to analyze the AI regime complex in terms of core dimensions, such as scale, diversity, and density (Eilstrup-Sangiovanni and Westerwinter 2021). Pointing to the density of this regime complex, existing studies have suggested that global AI governance is characterized by a high degree of fragmentation (Schmitt 2021), which has motivated assessments of the possibility of greater centralization (Cihon et al. 2020b). Another area of research is to examine legal inconsistencies and tensions, which are likely to emerge because of the diverging preferences of major AI players and the tendency of self-interested actors to forum-shop when engaging within a regime complex. Finally, given that the AI regime complex exists in a very early state, it offers researchers an excellent opportunity to trace the origins and evolution of this form of governance structure from the outset, making it a good case for both theory development and novel empirical applications.

If theories of regime complexity can shine a light on macro-level properties of AI governance, other theoretical approaches can guide research into micro-level dynamics and influences. Recognizing that non-state actors are central in both AI development and its emergent regulation, researchers should find inspiration in theories and tools developed to study the role and influence of non-state actors in global governance (for overviews, see Risse 2012 ; Jönsson and Tallberg forthcoming ). Drawing on such work will enable researchers to assess to what extent non-state actor involvement in the AI regime complex differs from previous experiences in other international regimes. It is clear that large tech companies, like Google, Meta, and Microsoft, have formed regulatory preferences and that their monetary resources and technological expertise enable them to promote these interests in legislative and bureaucratic processes. For example, the Partnership on AI (PAI), a multistakeholder organization with more than 50 members, includes American tech companies at the forefront of AI development and fosters research on issues of AI ethics and governance ( Schmitt 2021 ). Other non-state actors, including civil society watchdog organizations, like the Civil Liberties Union for Europe, have been vocal in the negotiations of the EU AI Act, further underlining the relevance of this strand of research.

When investigating the role of non-state actors in the AI regime complex, research may be guided by four primary questions. A first question concerns the interests of non-state actors regarding alternative AI global governance architectures. Here, a survey by Chavannes et al. (2020) on possible regulatory approaches to LAWS suggests that private companies developing AI applications have interests that differ from those of civil society organizations. Others have pointed to the role of actors rooted in research and academia who have sought to influence the development of AI ethics guidelines (Zhu 2022). A second question is to what extent the regulatory institutions and processes are accessible to the aforementioned non-state actors in the first place. Are non-state actors given formal or informal opportunities to be substantively involved in the development of new global AI rules? Research points to a broad and comprehensive opening up of IOs over the past two decades (Tallberg et al. 2013) and, in the domain of AI governance, early indications are that non-state actors have been granted access to several multilateral processes, including in the OECD and the EU (cf. Niklas and Dencik 2021). A third question concerns actual participation: Are non-state actors really making use of the opportunities to participate, and what determines the patterns of participation? In this vein, previous research has suggested that the participation of non-state actors is largely dependent on their financial resources (Uhre 2014) or the political regime of their home country (Hanegraaff et al. 2015). In the context of AI governance, this raises questions about whether and how the vast resource disparities and divergent interests between private tech corporations and civil society organizations may bias patterns of participation. There is, for instance, research suggesting that private companies are contributing to a practice of ethics washing by committing to nonbinding ethical guidelines while circumventing regulation (Wagner 2018; Jobin et al. 2019; Rességuier and Rodrigues 2020). Finally, a fourth question is to what extent, and how, non-state actors exert influence on adopted AI rules. Existing scholarship suggests that non-state actors typically seek to shape the direction of international cooperation via lobbying (Dellmuth and Tallberg 2017), while others have argued that non-state actors use participation in international processes largely to expand or sustain their own resources (Hanegraaff et al. 2016).

The previous section suggested that emerging global initiatives to regulate AI amount to a regime complex and that an empirical approach could help to map and explain these regulatory developments. In this section, we move beyond positive empirical questions to consider the normative concerns at stake in the global governance of AI. We argue that normative theorizing is needed both for assessing how well existing arrangements live up to ideals such as democracy and justice and for evaluating how best to specify what these ideals entail for the global governance of AI.

Ethical values frequently highlighted in the context of AI governance include transparency, inclusion, accountability, participation, deliberation, fairness, and beneficence ( Floridi et al. 2018 ; Jobin et al. 2019 ). A normative perspective suggests several ways in which to theorize and analyze such values in relation to the global governance of AI. One type of normative analysis focuses on application, that is, on applying an existing normative theory to instances of AI governance, assessing how well such regulatory arrangements realize their principles (similar to how political theorists have evaluated whether global governance lives up to standards of deliberation; see Dryzek 2011 ; Steffek and Nanz 2008 ). Such an analysis could also be pursued more narrowly by using a certain normative theory to assess the implications of AI technologies, for instance, by approaching the problem of algorithmic bias based on notions of fairness or justice ( Vredenburgh 2022 ). Another type of normative analysis moves from application to justification, analyzing the structure of global AI governance with the aim of theory construction. In this type of analysis, the goal is to construe and evaluate candidate principles for these regulatory arrangements in order to arrive at the best possible (most justified) normative theory. In this case, the theorist starts out from a normative ideal broadly construed (concept) and arrives at specific principles (conception).

In the remainder of this section, we will point to the promises of analyzing global AI governance based on the second approach. We will focus specifically on the normative ideals of justice and democracy. While many normative ideals could serve as focal points for an analysis of the AI domain, democracy and justice appear particularly central for understanding the normative implications of the governance of AI. Previous efforts to deploy political philosophy to shed light on normative aspects of global governance point to the promise of this focus (e.g., Caney 2005 , 2014 ; Buchanan 2013 ). It is also natural to focus on justice and democracy given that many of the values emphasized in AI ethics and existing ethics guidelines are analytically close to justice and democracy. Our core argument will be that normative research needs to be attentive to how these ideals would be best specified in relation to both the procedures and outcomes of the global governance of AI.

AI Ethics and the Normative Analysis of Global AI Governance

Although there is a rich literature on moral or ethical aspects related to specific AI applications, investigations into normative aspects of global AI governance are surprisingly sparse (for exceptions, see Müller 2020; Erman and Furendal 2022a, 2022b). Researchers have so far focused mostly on normative and ethical questions raised by AI considered as a tool, enabling, for example, autonomous weapons systems (Sparrow 2007) and new forms of political manipulation (Susser et al. 2019; Christiano 2021). Some have also considered AI as a moral agent in its own right, focusing on how we could govern, or be governed by, a hypothetical future artificial general intelligence (Schwitzgebel and Garza 2015; Livingston and Risse 2019; cf. Tasioulas 2019; Bostrom et al. 2020; Erman and Furendal 2022a). Examples such as these illustrate that there is, by now, a vibrant field of "AI ethics" that aims to consider normative aspects of specific AI applications.

As we have shown above, however, initiatives to regulate AI beyond the nation-state have become increasingly common, and they are often led by IOs, multinational companies, private standardization bodies, and civil society organizations. These developments raise normative issues that require a shift from AI ethics in general to systematic analyses of the implications of global AI governance. Exploring these normative dimensions is crucial, since how AI is governed raises key questions about which ideals governance arrangements ought to meet.

Apart from attempts to map or describe the central norms in the existing global governance of AI (cf. Jobin et al. 2019), most normative analyses of the global governance of AI have proceeded in two different ways. The dominant approach is to employ an outcome-based focus (Dafoe 2018; Winfield et al. 2019; Taeihagh 2021), which starts by identifying a potential problem or promise created by AI technology and then seeks to identify governance mechanisms or principles that can minimize risks or make a desired outcome more likely. This approach can be contrasted with a procedure-based focus, which attaches comparatively more weight to how governance processes happen in existing or hypothetical regulatory arrangements. It recognizes that there are certain procedural aspects that are important and might be overlooked by an analysis that primarily assesses outcomes.

The benefits of this distinction become apparent if we focus on the ideals of justice and democracy. Broadly construed, we understand justice as an ideal for how to distribute benefits and burdens—specifying principles that determine “who owes what to whom”—and democracy as an ideal for collective decision-making and the exercise of political power—specifying principles that determine “who has political power over whom” ( Barry 1991 ; Weale 1999 ; Buchanan and Keohane 2006 ; Christiano 2008 ; Valentini 2012 , 2013 ). These two ideals can be analyzed with a focus on procedure or outcome, producing four fruitful avenues of normative research into global AI governance. First, justice could be understood as a procedural value or as a distributive outcome. Second, and likewise, democracy could be a feature of governance processes or an outcome of those processes. Below, we discuss existing research from the standpoint of each of these four avenues. We conclude that there is great potential for novel insights if normative theorists consider the relatively overlooked issues of outcome aspects of justice and procedural aspects of democracy in the global governance of AI.

Procedural and Outcome Aspects of Justice

Discussions around the implications of AI applications for justice, or fairness, are predominantly concerned with procedural aspects of how AI systems operate. For instance, ever since the problem of algorithmic bias—i.e., the tendency of AI-based decision-making to reflect and exacerbate existing biases toward certain groups—was brought to public attention, AI ethicists have offered accounts of why this is wrong, and AI developers have sought to construct AI systems that treat people "fairly" and thus produce "justice." In this context, fairness and justice are understood as procedural ideals, which AI decision-making frustrates when it fails to treat like cases alike and instead systematically treats individuals from different groups differently (Fazelpour and Danks 2021; Zimmermann and Lee-Stronach 2022). Paradigmatic examples include automated predictions about recidivism among prisoners that have impacted decisions about people's parole and algorithms used in recruitment that have systematically favored men over women (Angwin et al. 2016; O'Neil 2017).

However, the emerging global governance of AI also has implications for how the benefits and burdens of AI technology are distributed among groups and states—i.e., outcomes ( Gilpin 1987 ; Dreher and Lang 2019 ). Like the regulation of earlier technological innovations ( Krasner 1991 ; Drezner 2019 ), AI governance may not only produce collective benefits, but also favor certain actors at the expense of others ( Dafoe 2018 ; Horowitz 2018 ). For instance, the concern about AI-driven automation and its impact on employment is that those who lose their jobs because of AI might carry a disproportionately large share of the negative externalities of the technology without being compensated through access to its benefits (cf. Korinek and Stiglitz 2019 ; Erman and Furendal 2022a ). Merely focusing on justice as a procedural value would overlook such distributive effects created by the diffusion of AI technology.

Moreover, this example illustrates that since AI adoption may produce effects throughout the global economy, regulatory efforts will have to go beyond issues relating to the technology itself. Recognizing the role of outcomes of AI governance entails that a broad range of policies need to be pursued by existing and emerging governance regimes. The global trade regime, for instance, may need to be reconsidered in order for the distribution of positive and negative externalities of AI technology to be just. Suggestions include pursuing policies that can incentivize certain kinds of AI technology or enable the profits gained by AI developers to be shared more widely (cf. Floridi et al. 2018 ; Erman and Furendal 2022a ).

In sum, with regard to outcome aspects of justice, theories are needed to settle which benefits and burdens created by global AI adoption ought to be fairly distributed and why (i.e., what the "site" and "scope" of AI justice are) (cf. Gabriel 2022). Similarly, theories of procedural aspects should look beyond individual applications of AI technology and ask whether a fairer distribution of influence over AI governance may help produce fairer outcomes, and if so how. Extending existing theories of distributive justice to the realm of global AI governance may put many of their central assumptions in a new light.

Procedural and Outcome Aspects of Democracy

Normative research could also fruitfully shed light on how emerging AI governance should be analyzed in relation to the ideal of democracy, such as what principles or criteria of democratic legitimacy are most defensible. It could be argued, for instance, that the decision process must be open to democratic influence for global AI governance to be democratically legitimate ( Erman and Furendal 2022b ). Here, normative theory can explain why it matters from the standpoint of democracy whether the affected public has had a say—either directly through open consultation or indirectly through representation—in formulating the principles that guide AI governance. The nature of the emerging AI regime complex—where prominent roles are held by multinational companies and private standard-setting bodies—suggests that it is far from certain that the public will have this kind of influence.

Importantly, it is likely that democratic procedures will take on different shapes in global governance compared to domestic politics ( Dahl 1999 ; Scholte 2011 ). A viable democratic theory must therefore make sense of how the unique properties of global governance raise issues or require solutions that are distinct from those in the domestic context. For example, the prominent influence of non-state actors, including the large tech corporations developing cutting-edge AI technology, suggests that it is imperative to ask whether different kinds of decision-making may require different normative standards and whether different kinds of actors may have different normative status in such decision-making arrangements.

Initiatives from non-state actors, such as the tech company-led PAI discussed above, often develop their own non-coercive ethics guidelines. Such documents may seek effects similar to coercively upheld regulation, such as the GDPR or the EU AI Act. For example, both Google and the EU specify that AI should not reinforce biases ( High-Level Expert Group on Artificial Intelligence 2019 ; Google 2022 ). However, from the perspective of democratic legitimacy, it may matter extensively which type of entity adopts AI regulations and on what grounds those decision-making entities have the authority to issue AI regulations ( Erman and Furendal 2022b ).

Apart from procedural aspects, a satisfying democratic theory of global AI governance will also have to include a systematic analysis of outcome aspects. Important outcome aspects of democracy include accountability and responsiveness. Accountability may be improved, for example, by instituting mechanisms to prevent corruption among decision-makers and to secure public access to governing documents, and responsiveness may be improved by strengthening the discursive quality of global decision processes, for instance, by involving international NGOs and civil movements that give voice to marginalized groups in society. With regard to tracking citizens' preferences, some have argued that democratic decision-making can be enhanced by AI technology that tracks what people want and consistently reaches "better" decisions than human decision-makers (cf. König and Wenzelburger 2022). Apart from accountability and responsiveness, other relevant outcome aspects of democracy include, for example, the tendency to promote conflict resolution, to improve the epistemic quality of decisions, and to foster dignity and equality among citizens.

In addition, it is important to analyze how procedural and outcome concerns are related. This issue is often neglected, which again can be illustrated by the ethics guidelines from IOs, such as the OECD Principles on Artificial Intelligence and the UNESCO Recommendation on Ethics of AI. Such documents often stress the importance of democratic values and principles, such as transparency, accountability, participation, and deliberation. Yet they typically treat these values as discrete and rarely explain how they are interconnected ( Jobin et al. 2019 ; Schiff et al. 2020 ; Hagendorff 2020 , 103). Democratic theory can fruitfully step in to explain how the ideal of “the rule by the people” includes two sides that are intimately connected. First, there is an access side of political power, where those affected should have a say in the decision-making, which might require participation, deliberation, and political equality. Second, there is an exercise side of political power, where those very decisions should apply in appropriate ways, which in turn might require effectiveness, transparency, and accountability. In addition to efforts to map and explain norms and values in the global governance of AI, theories of democratic AI governance can hence help explain how these two aspects are connected (cf. Erman 2020 ).

In sum, the global governance of AI raises a number of issues for normative research. We have identified four promising avenues, focused on procedural and outcome aspects of justice and democracy in the context of global AI governance. Research along these four avenues can help to shed light on the normative challenges facing the global governance of AI and the key values at stake, as well as provide the impetus for novel theories on democratic and just global AI governance.

This article has charted a new agenda for research into the global governance of AI. While existing scholarship has been primarily descriptive or policy-oriented, we propose an agenda organized around theory-driven positive and normative questions. To this end, we have outlined two broad analytical perspectives on the global governance of AI: an empirical approach, aimed at conceptualizing and explaining global AI governance; and a normative approach, aimed at developing and applying ideals for appropriate global AI governance. Pursuing these empirical and normative approaches can help to guide future scholarship on the global governance of AI toward critical questions, core concepts, and promising theories. At the same time, exploring AI as a regulatory issue provides an opportunity to further develop these general analytical approaches as they confront the particularities of this important area of governance.

We conclude this article by highlighting the key takeaways from this research agenda for future scholarship on empirical and normative dimensions of the global governance of AI. First, research is required to identify where and how AI is becoming globally governed. Mapping and conceptualizing the emerging global governance of AI is a first necessary step. We argue that research may benefit from considering the variety of ways in which new regulation may come about, from the reinterpretation of existing rules and the extension of prevailing sectoral governance to the negotiation of entirely new frameworks. In addition, we suggest that scholarship may benefit from considering how global AI governance may be conceptualized in terms of key analytical dimensions, such as horizontal–vertical, centralized–decentralized, and formal–informal.

Second, research is necessary to explain why AI is becoming globally governed in particular ways. Having mapped global AI governance, we need to account for the factors that drive and shape these regulatory processes and arrangements. We argue that political science and IR offer a variety of theoretical tools that can help to explain the global governance of AI. In particular, we highlight the promise of theories privileging the role of power, interests, ideas, regime complexes, and non-state actors, but also recognize that research fields such as science and technology studies and political economy can yield additional theoretical insights.

Third, research is needed to identify what normative ideals global AI governance ought to meet. Moving from positive to normative issues, a first critical question pertains to the ideals that should guide the design of appropriate global AI governance. We argue that normative theory provides the tools necessary to engage with this question. While normative theory can suggest several potential principles, we believe that it may be especially fruitful to start from the ideals of democracy and justice, which are foundational and recurrent concerns in discussions about political governing arrangements. In addition, we suggest that these two ideals are relevant both for the procedures by which AI regulation is adopted and for the outcomes of such regulation.

Fourth, research is required to evaluate how well global AI governance lives up to these normative ideals. Once appropriate normative ideals have been selected, we can assess to what extent and how existing arrangements conform to these principles. We argue that previous research on democracy and justice in global governance offers a model in this respect. A critical component of such research is the integration of normative and empirical research: normative research for elucidating how normative ideals would be expressed in practice, and empirical research for analyzing data on whether actual arrangements live up to those ideals.

In all, the research agenda that we outline should be of interest to multiple audiences. For students of political science and IR, it offers an opportunity to apply and refine concepts and theories in a novel area of global governance of extensive future importance. For scholars of AI, it provides an opportunity to understand how political actors and considerations shape the conditions under which AI applications may be developed and used. For policymakers, it presents an opportunity to learn about evolving regulatory practices and gaps, interests shaping emerging arrangements, and trade-offs to be confronted in future efforts to govern AI at the global level.

A previous version of this article was presented at the Global and Regional Governance workshop at Stockholm University. We are grateful to Tim Bartley, Niklas Bremberg, Lisa Dellmuth, Felicitas Fritzsche, Faradj Koliev, Rickard Söder, Carl Vikberg, Johanna von Bahr, and three anonymous reviewers for ISR for insightful comments and suggestions. The research for this article was funded by the WASP-HS program of the Marianne and Marcus Wallenberg Foundation (Grant no. MMW 2020.0044).

We use “global governance” to refer to regulatory processes beyond the nation-state, whether on a global or regional level. While states and IOs often are central to these regulatory processes, global governance also involves various types of non-state actors ( Rosenau 1999 ).

Abbott, Kenneth W., and Duncan Snidal. 2000. "Hard and Soft Law in International Governance." International Organization 54 (3): 421–56.

Acemoglu, Daron, and Pascual Restrepo. 2020. "The Wrong Kind of AI? Artificial Intelligence and the Future of Labour Demand." Cambridge Journal of Regions, Economy and Society 13 (1): 25–35.

Acharya, Amitav, and Alastair Iain Johnston. 2007. "Conclusion: Institutional Features, Cooperation Effects, and the Agenda for Further Research on Comparative Regionalism." In Crafting Cooperation: Regional International Institutions in Comparative Perspective, edited by Amitav Acharya and Alastair Iain Johnston, 244–78. Cambridge: Cambridge University Press.

Alter, Karen J., and Sophie Meunier. 2009. "The Politics of International Regime Complexity." Perspectives on Politics 7 (1): 13–24.

Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2016. "Machine Bias." ProPublica, May 23. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (last accessed August 25, 2023).

Barry Brian . 1991 . “ Humanity and Justice in Global Perspective .” In Liberty and Justice , edited by Barry Brian . Oxford : Clarendon .

Berk Richard A . 2021 . “ Artificial Intelligence, Predictive Policing, and Risk Assessment for Law Enforcement .” Annual Review of Criminology . 4 ( 1 ): 209 – 37 .

Bostrom Nick , Dafoe Allan , and Flynn Carrick . 2020 . “ Public Policy and Superintelligent AI: A Vector Field Approach .” In Ethics of Artificial Intelligence , edited by Liao S. Matthew , 293 – 326 . Oxford : Oxford University Press .

Buchanan Allen , and Keohane Robert O. . 2006 . “ The Legitimacy of Global Governance Institutions .” Ethics & International Affairs . 20 (4) : 405 – 37 .

Buchanan Allen . 2013 . The Heart of Human Rights . Oxford : Oxford University Press .

Burrell Jenna . 2016 . “ How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms .” Big Data & Society . 3 ( 1 ): 1 – 12 . https://doi.org/10.1177/2053951715622512 .

Burri Thomas . 2017 . “ International Law and Artificial Intelligence .” In German Yearbook of International Law , vol. 60 , 91 – 108 . Berlin : Duncker and Humblot .

Butcher James , and Beridze Irakli . 2019 . “ What is the State of Artificial Intelligence Governance Globally? ” The RUSI Journal . 164 ( 5-6 ): 88 – 96 .

Büthe Tim , Djeffal Christian , Lütge Christoph , Maasen Sabine , and von Ingersleben-Seip Nora . 2022 . “ Governing AI—Attempting to Herd Cats? Introduction to the Special Issue on the Governance of Artificial Intelligence .” Journal of European Public Policy . 29 ( 11 ): 1721 – 52 .

Campbell Thomas A . 2019 . Artificial Intelligence: An Overview of State Initiatives . Evergreen, CO : FutureGrasp .

Caney Simon . 2005 . “ Cosmopolitan Justice, Responsibility, and Global Climate Change .” Leiden Journal of International Law . 18 ( 4 ): 747 – 75 .

Caney Simon . 2014 . “ Two Kinds of Climate Justice: Avoiding Harm and Sharing Burdens .” Journal of Political Philosophy . 22 ( 2 ): 125 – 49 .

Chavannes Esther , Klonowska Klaudia , and Sweijs Tim . 2020 . Governing Autonomous Weapon Systems: Expanding the Solution Space, From Scoping to Applying . The Hague : The Hague Center for Strategic Studies .

Checkel Jeffrey T . 2005 . “ International Institutions and Socialization in Europe: Introduction and Framework .” International organization . 59 ( 4 ): 801 – 26 .

Christiano Thomas . 2008 . The Constitution of Equality . Oxford : Oxford University Press .

Christiano Thomas . 2021 . “ Algorithms, Manipulation, and Democracy .” Canadian Journal of Philosophy . 52 ( 1 ): 109 – 124 . https://doi.org/10.1017/can.2021.29 .

Cihon Peter , Maas Matthijs M. , and Kemp Luke . 2020a . “ Fragmentation and the Future: Investigating Architectures for International AI Governance .” Global Policy . 11 ( 5 ): 545 – 56 .

Cihon Peter , Maas Matthijs M. , and Kemp Luke . 2020b . “ Should Artificial Intelligence Governance Be Centralised? Design Lessons from History .” In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society , 228 – 34 . New York, NY: ACM .

Council of Europe . 2023 . “ AI Initiatives .” Accessed June 16, 2023, coe.int .

Dafoe Allan . 2018 . AI Governance: A Research Agenda . Oxford: Governance of AI Program , Future of Humanity Institute, University of Oxford . www.fhi.ox.ac.uk/govaiagenda .

Dahl Robert . 1999 . “ Can International Organizations Be Democratic: A Skeptic's View .” In Democracy's Edges , edited by Shapiro Ian , Hacker-Córdon Casiano , 19 – 36 . Cambridge : Cambridge University Press .

Dellmuth Lisa M. , and Tallberg Jonas . 2017 . “ Advocacy Strategies in Global Governance: Inside versus Outside Lobbying .” Political Studies . 65 ( 3 ): 705 – 23 .

Ding Jeffrey . 2018 . Deciphering China's AI Dream: The Context, Components, Capabilities and Consequences of China's Strategy to Lead the World in AI . Oxford: Centre for the Governance of AI , Future of Humanity Institute, University of Oxford .

Dreher Axel , and Lang Valentin . 2019 . “ The Political Economy of International Organizations .” In The Oxford Handbook of Public Choice , Volume 2, edited by Congleton Roger O. , Grofman Bernhard , Voigt Stefan . Oxford : Oxford University Press .

Dreher Axel , Lang Valentin , Rosendorff B. Peter , and Vreeland James R. . 2022 . “ Bilateral or Multilateral? International Financial Flows and the Dirty Work-Hypothesis .” The Journal of Politics . 84 ( 4 ): 1932 – 1946 .

Drezner Daniel W . 2019 . “ Technological Change and International Relations .” International Relations . 33 ( 2 ): 286 – 303 .

Dryzek John . 2011 . “ Global Democratization: Soup, Society, or System? ” Ethics & International Affairs , 25 ( 2 ): 211 – 234 .

Ebers Martin . 2022 . “ Explainable AI in the European Union: An Overview of the Current Legal Framework(s) .” In Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence , edited by Colonna Lianne , Greenstein Stanley . Stockholm : The Swedish Law and Informatics Institute, Stockholm University .

Eilstrup-Sangiovanni Mette , and Westerwinter Oliver . 2021 . “ The Global Governance Complexity Cube: Varieties of Institutional Complexity in Global Governance .” Review of International Organizations . 17 (2): 233 – 262 .

Erman Eva , and Furendal Markus . 2022a . “ The Global Governance of Artificial Intelligence: Some Normative Concerns .” Moral Philosophy & Politics . 9 (2): 267−291. https://www.degruyter.com/document/doi/10.1515/mopp-2020-0046/html .

Erman Eva , and Furendal Markus . 2022b . “ Artificial Intelligence and the Political Legitimacy of Global Governance .” Political Studies . https://journals.sagepub.com/doi/full/10.1177/00323217221126665 .

Erman Eva . 2020 . “ A Function-Sensitive Approach to the Political Legitimacy of Global Governance .” British Journal of Political Science . 50 ( 3 ): 1001 – 24 .

Fazelpour Sina , and Danks David . 2021 . “ Algorithmic Bias: Senses, Sources, Solutions .” Philosophy Compass . 16 ( 8 ): e12760.

Finnemore Martha , and Sikkink Kathryn . 1998 . “ International Norm Dynamics and Political Change .” International Organization . 52 ( 4 ): 887 – 917 .

Floridi Luciano , Cowls Josh , Beltrametti Monica , Chatila Raja , Chazerand Patrice , Dignum Virginia , Luetge Christoph et al.  2018 . “ AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations .” Minds and Machines . 28 ( 4 ): 689 – 707 .

Frey Carl Benedikt . 2019 . The Technology Trap: Capital, Labor, and Power in the Age of Automation . Princeton, NJ : Princeton University Press .

Frieden Jeffry , Lake David A. , and Lawrence Broz J. 2017 . International Political Economy: Perspectives on Global Power and Wealth . Sixth Edition. New York, NY : W.W. Norton .

Future of Life Institute . 2023 . “ Pause Giant AI Experiments: An Open Letter .” Accessed June 13, 2023. https://futureoflife.org/open-letter/pause-giant-ai-experiments/ .

Gabriel Iason . 2022 . “ Toward a Theory of Justice for Artificial Intelligence .” Daedalus . 151 ( 2 ): 218 – 31 .

Gilpin Robert . 1987 . The Political Economy of International Relations . Princeton, NJ : Princeton University Press .

Google . 2022 . “ Artificial Intelligence at Google: Our Principles .” Internet (last accessed August 25, 2023): https://ai.google/principles/ .

Greenstein Stanley . 2022 . “ Liability in the Era of Artificial Intelligence .” In Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence , edited by Colonna Lianne , Greenstein Stanley . Stockholm: The Swedish Law and Informatics Institute, Stockholm University .

Gruber Lloyd . 2000 . Ruling the World . Princeton, NJ : Princeton University Press .

Haas Peter . 1992 . “ Introduction: Epistemic Communities and International Policy Coordination .” International Organization . 46 ( 1 ): 1 – 36 .

Haftel Yoram Z. , and Lenz Tobias . 2021 . “ Measuring Institutional Overlap in Global Governance .” Review of International Organizations . 17(2) : 323 – 347 .

Hagendorff Thilo . 2020 . “ The Ethics of AI Ethics: an Evaluation of Guidelines .” Minds and Machines . 30 ( 1 ): 99 – 120 .

Hanegraaff Marcel , Beyers Jan , and De Bruycker Iskander . 2016 . “ Balancing Inside and Outside Lobbying: The Political Strategy of Lobbyists at Global Diplomatic Conferences .” European Journal of Political Research . 55 ( 3 ): 568 – 88 .

Hanegraaff Marcel , Braun Caelesta , De Bièvre Dirk , and Beyers Jan . 2015 . “ The Domestic and Global Origins of Transnational Advocacy: Explaining Lobbying Presence During WTO Ministerial Conferences .” Comparative Political Studies . 48 : 1591 – 621 .

Hawkins Darren G. , Lake David A. , Nielson Daniel L. , Tierney Michael J. Eds. 2006 . Delegation and Agency in International Organizations . Cambridge : Cambridge University Press .

Heikkilä Melissa . 2022a . “ AI: Decoded. IoT Under Fire—Defining AI?—Meta's New AI Supercomputer .” Accessed June 5, 2022, https://www.politico.eu/newsletter/ai-decoded/iot-under-fire-defining-ai-metas-new-ai-supercomputer-2/ .

Heikkilä Melissa . 2022b . “ AI: Decoded. A Dutch Algorithm Scandal Serves a Warning to Europe—The AI Act Won't Save Us .” Accessed June 5, 2022, https://www.politico.eu/newsletter/ai-decoded/a-dutch-algorithm-scandal-serves-a-warning-to-europe-the-ai-act-wont-save-us-2/ .

Henriques-Gomes Luke . 2020 . “ Robodebt: Government Admits It Will Be Forced to Refund $550 m under Botched Scheme .” The Guardian . sec. Australia news . Internet (last accessed August 25, 2023): https://www.theguardian.com/australia-news/2020/mar/27/robodebt-government-admits-it-will-be-forced-to-refund-550m-under-botched-scheme .

Hernandez Joe . 2021 . “ A Military Drone With A Mind Of Its Own Was Used In Combat, U.N. Says .” National Public Radio . Internet (last accessed August 25, 2023): https://www.npr.org/2021/06/01/1002196245/a-u-n-report-suggests-libya-saw-the-first-battlefield-killing-by-an-autonomous-d .

High-Level Expert Group on Artificial Intelligence . 2019 . Ethics Guidelines for Trustworthy AI . Brussels: European Commission . https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai .

Horowitz Michael . 2018 . “ Artificial Intelligence, International Competition, and the Balance of Power .” Texas National Security Review . 1 ( 3 ): 37 – 57 .

Horowitz Michael C. , Allen Gregory C. , Kania Elsa B. , Scharre Paul . 2018 . Strategic Competition in an Era of Artificial Intelligence . Washington D.C. : Center for a New American Security .

Hu Krystal . 2023 . “ ChatGPT Sets Record for Fastest-Growing User Base—Analyst Note .” Reuters , February 2, sec. Technology . Accessed June 12, 2023, https://www.reuters.com/technology/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01/ .

Jasanoff Sheila . 2016 . The Ethics of Invention: Technology and the Human Future . New York : Norton .

Jensen Benjamin M. , Whyte Christopher , and Cuomo Scott . 2020 . “ Algorithms at War: the Promise, Peril, and Limits of Artificial Intelligence .” International Studies Review . 22 ( 3 ): 526 – 50 .

Jobin Anna , Ienca Marcello , and Vayena Effy . 2019 . “ The Global Landscape of AI Ethics Guidelines .” Nature Machine Intelligence . 1 ( 9 ): 389 – 99 .

Johnson J. 2019 . “ Artificial Intelligence & Future Warfare: Implications for International Security .” Defense & Security Analysis . 35 ( 2 ): 147 – 69 .

Jönsson Christer , and Tallberg Jonas . Forthcoming. “Opening up to Civil Society: Access, Participation, and Impact .” In Handbook on Governance in International Organizations , edited by Edgar Alistair . Cheltenham : Edward Elgar Publishing .

Kania E. B . 2017 . Battlefield Singularity: Artificial Intelligence, Military Revolution, and China's Future Military Power . Washington D.C. : CNAS .

Keohane Robert O . 1984 . After Hegemony . Princeton, NJ : Princeton University Press .

Keohane Robert O. , and Victor David G. . 2011 . “ The Regime Complex for Climate Change .” Perspectives on Politics . 9 ( 1 ): 7 – 23 .

König Pascal D. , and Wenzelburger Georg . 2022 . “ Between Technochauvinism and Human-Centrism: Can Algorithms Improve Decision-Making in Democratic Politics? ” European Political Science . 21 ( 1 ): 132 – 49 .

Koremenos Barbara , Lipson Charles , and Snidal Duncan . 2001 . “ The Rational Design of International Institutions .” International Organization . 55 ( 4 ): 761 – 99 .

Koremenos Barbara . 2016 . The Continent of International Law: Explaining Agreement Design . Cambridge : Cambridge University Press .

Korinek Anton , and Stiglitz Joseph E. . 2019 . “ Artificial Intelligence and Its Implications for Income Distribution and Unemployment .” In The Economics of Artificial Intelligence: An Agenda , edited by Agrawal A. , Gans J. , and Goldfarb A. . Chicago, IL : University of Chicago Press .

Koven Levit Janet . 2007 . “ Bottom-Up International Lawmaking: Reflections on the New Haven School of International Law .” Yale Journal of International Law . 32 : 393 – 420 .

Krasner Stephen D . 1991 . “ Global Communications and National Power: Life on the Pareto Frontier .” World Politics . 43 ( 3 ): 336 – 66 .

Kunz Martina , and hÉigeartaigh Seán Ó . 2020 . “ Artificial Intelligence and Robotization .” In The Oxford Handbook on the International Law of Global Security , edited by Geiss Robin , Melzer Nils . Oxford : Oxford University Press .

Lake David. A . 2013 . “ Theory is Dead, Long Live Theory: The End of the Great Debates and the rise of eclecticism in International Relations .” European Journal of International Relations . 19 ( 3 ): 567 – 87 .

Leung Jade . 2019 . “ Who Will Govern Artificial Intelligence? Learning from the History of Strategic Politics in Emerging Technologies .” Doctoral dissertation . Oxford : University of Oxford .

Livingston Steven , and Risse Mathias . 2019 . “ The Future Impact of Artificial Intelligence on Humans and Human Rights .” Ethics & International Affairs . 33 ( 2 ): 141 – 58 .

Maas Matthijs M . 2019a . “ How Viable is International Arms Control for Military Artificial Intelligence? Three Lessons from Nuclear Weapons .” Contemporary Security Policy . 40 ( 3 ): 285 – 311 .

Maas Matthijs M . 2019b . “ Innovation-proof Global Governance for Military Artificial Intelligence? How I Learned to Stop Worrying, and Love the Bot ,” Journal of International Humanitarian Legal Studies . 10 ( 1 ): 129 – 57 .

Maas Matthijs M . 2021 . Artificial Intelligence Governance under Change: Foundations, Facets, Frameworks . PhD dissertation . Copenhagen: University of Copenhagen .

Martin Lisa L . 1992 . “ Interests, Power, and Multilateralism .” International Organization . 46 ( 4 ): 765 – 92 .

Martin Lisa L. , and Simmons Beth A. . 2012 . “ International Organizations and Institutions .” In Handbook of International Relations , edited by Carlsnaes Walter , Risse Thomas , Simmons Beth A. . London : SAGE .

McCarthy John , Minsky Marvin L. , Rochester Nathaniel , and Shannon Claude E . 1955 . “ A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence .” AI Magazine . 27 ( 4 ): 12 – 14 (reprint) .

Mearsheimer John J . 1994 . “ The False Promise of International Institutions .” International Security . 19 ( 3 ): 5 – 49 .

Metz Cade . 2022 . “ Lawsuit Takes Aim at the Way A.I. Is Built .” The New York Times , November 23. Accessed June 21, 2023, https://www.nytimes.com/2022/11/23/technology/copilot-microsoft-ai-lawsuit.html .

Misuraca Gianluca , and van Noordt Colin . 2022 . “ Artificial Intelligence for the Public Sector: Results of Landscaping the Use of AI in Government Across the European Union .” Government Information Quarterly . 101714 . https://doi.org/10.1016/j.giq.2022.101714 .

Müller Vincent C . 2020 . “ Ethics of Artificial Intelligence and Robotics .” In Stanford Encyclopedia of Philosophy , edited by Zalta Edward N. Internet (last accessed August 25, 2023): https://plato.stanford.edu/archives/fall2020/entries/ethics-ai/ .

Niklas Jedrzej , and Dencik Lina . 2021 . “ What Rights Matter? Examining the Place of Social Rights in the EU's Artificial Intelligence Policy Debate .” Internet Policy Review . 10 ( 3 ): 1 – 29 .

OECD . 2021 . “ OECD AI Policy Observatory .” Accessed February 17, 2022. https://oecd.ai .

O'Neil Cathy . 2017 . Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy . UK : Penguin Books .

Orakhelashvili Alexander . 2019 . Akehurst's Modern Introduction to International Law , Eighth Edition . London : Routledge .

Payne Kenneth . 2021 . I, Warbot: The Dawn of Artificially Intelligent Conflict . Oxford : Oxford University Press .

Payne Kenneth . 2018 . “ Artificial Intelligence: A Revolution in Strategic Affairs? ” Survival . 60 ( 5 ): 7 – 32 .

Petman Jarna . 2017 . Autonomous Weapons Systems and International Humanitarian Law: ‘Out of the Loop’ . Helsinki : The Eric Castren Institute of International Law and Human Rights .

Powers Thomas M. , and Ganascia Jean-Gabriel . 2020 . “ The Ethics of the Ethics of AI .” In The Oxford Handbook of Ethics of AI , edited by Dubber Markus D. , Pasquale Frank , Das Sunit , 25 – 51 . Oxford : Oxford University Press .

Rademacher Timo . 2019 . “ Artificial Intelligence and Law Enforcement .” In Regulating Artificial Intelligence , edited by Wischmeyer Thomas , Rademacher Timo , 225 – 54 . Cham : Springer .

Radu Roxana . 2021 . “ Steering the Governance of Artificial Intelligence: National Strategies in Perspective .” Policy and Society . 40 ( 2 ): 178 – 93 .

Raustiala Kal , and Victor David G. . 2004 . “ The Regime Complex for Plant Genetic Resources .” International Organization . 58 ( 2 ): 277 – 309 .

Rességuier Anaïs , and Rodrigues Rowena . 2020 . “ AI Ethics Should Not Remain Toothless! A Call to Bring Back the Teeth of Ethics .” Big Data & Society . 7 ( 2 ). https://doi.org/10.1177/2053951720942541 .

Risse Thomas . 2012 . “ Transnational Actors and World Politics .” In Handbook of International Relations , 2nd ed., edited by Carlsnaes Walter , Risse Thomas , Simmons Beth A. . London : Sage .

Roach Steven C. , and Eckert Amy , eds. 2020 . Moral Responsibility in Twenty-First-Century Warfare: Just War Theory and the Ethical Challenges of Autonomous Weapons Systems . Albany, NY : State University of New York .

Roberts Huw , Cowls Josh , Morley Jessica , Taddeo Mariarosaria , Wang Vincent , and Floridi Luciano . 2021 . “ The Chinese Approach to Artificial Intelligence: An Analysis of Policy, Ethics, and Regulation .” AI & Society . 36 ( 1 ): 59 – 77 .

Rosenau James N . 1999 . “ Toward an Ontology for Global Governance .” In Approaches to Global Governance Theory , edited by Hewson Martin , Sinclair Timothy J. , 287 – 301 . Albany, NY : SUNY Press .

Rosert Elvira , and Sauer Frank . 2018 . Perspectives for Regulating Lethal Autonomous Weapons at the CCW: A Comparative Analysis of Blinding Lasers, Landmines, and LAWS . Paper prepared for the workshop “New Technologies of Warfare: Implications of Autonomous Weapons Systems for International Relations,” 5th EISA European Workshops in International Studies , Groningen , 6-9 June 2018 . Internet (last accessed August 25, 2023): https://www.academia.edu/36768452/Perspectives_for_Regulating_Lethal_Autonomous_Weapons_at_the_CCW_A_Comparative_Analysis_of_Blinding_Lasers_Landmines_and_LAWS

Rosert Elvira , and Sauer Frank . 2019 . “ Prohibiting Autonomous Weapons: Put Human Dignity First .” Global Policy . 10 ( 3 ): 370 – 5 .

Russell Stuart J. , and Norvig Peter . 2020 . Artificial Intelligence: A Modern Approach . Boston, MA : Pearson .

Ruzicka Jan . 2018 . “ Behind the Veil of Good Intentions: Power Analysis of the Nuclear Non-proliferation Regime .” International Politics . 55 ( 3 ): 369 – 85 .

Saif Hassan , Dickinson Thomas , Kastler Leon , Fernandez Miriam , and Alani Harith . 2017 . “ A Semantic Graph-Based Approach for Radicalisation Detection on Social Media .” ESWC 2017: The Semantic Web—Proceedings, Part 1 , 571 – 87 . Cham : Springer .

Schiff Daniel , Biddle Justin , Borenstein Jason , and Laas Kelly . 2020 . “ What’s Next for AI Ethics, Policy, and Governance? A Global Overview .” In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society . New York, NY : ACM .

Schmitt Lewin . 2021 . “ Mapping Global AI Governance: A Nascent Regime in a Fragmented Landscape .” AI and Ethics . 2 ( 2 ): 303 – 314 .

Scholte Jan Aart . ed. 2011 . Building Global Democracy? Civil Society and Accountable Global Governance . Cambridge : Cambridge University Press .

Schwitzgebel Eric , and Garza Mara . 2015 . “ A Defense of the Rights of Artificial Intelligences .” Midwest Studies In Philosophy . 39 ( 1 ): 98 – 119 .

Sparrow Robert . 2007 . “ Killer Robots .” Journal of Applied Philosophy . 24 ( 1 ): 62 – 77 .

Steffek Jens , and Nanz Patrizia . 2008 . “ Emergent Patterns of Civil Society Participation in Global and European Governance .” In Civil Society Participation in European and Global Governance , edited by Steffek Jens , Kissling Claudia , Nanz Patrizia , 1 – 29 . Basingstoke : Palgrave Macmillan .

Stone Randall. W . 2011 . Controlling Institutions: International Organizations and the Global Economy . Cambridge : Cambridge University Press .

Stop Killer Robots . 2023 . “ About Us .” Accessed June 13, 2023, https://www.stopkillerrobots.org/about-us/ .

Susser Daniel , Roessler Beate , Nissenbaum Helen . 2019 . “ Technology, Autonomy, and Manipulation .” Internet Policy Review . 8 ( 2 ). https://doi.org/10.14763/2019.2.1410 .

Taeihagh Araz . 2021 . “ Governance of Artificial Intelligence .” Policy and Society . 40 ( 2 ): 137 – 57 .

Tallberg Jonas , Sommerer Thomas , Squatrito Theresa , and Jönsson Christer . 2013 . The Opening Up of International Organizations . Cambridge : Cambridge University Press .

Tasioulas John . 2019 . “ First Steps Towards an Ethics of Robots and Artificial Intelligence .” The Journal of Practical Ethics . 7 ( 1 ): 61-95. https://doi.org/10.2139/ssrn.3172840 .

Thompson Nicholas , and Bremmer Ian . 2018. “ The AI Cold War that Threatens us all .” Wired, October 23. Internet (last accessed August 25, 2023): https://www.wired.com/story/ai-cold-war-china-coulddoom-us-all/ .

Trager Robert F. , and Luca Laura M. . 2022 . “ Killer Robots Are Here—And We Need to Regulate Them .” Foreign Policy, May 11 . Internet (last accessed August 25, 2023): https://foreignpolicy.com/2022/05/11/killer-robots-lethal-autonomous-weapons-systems-ukraine-libya-regulation/

Ubena John . 2022 . “ Can Artificial Intelligence Be Regulated? Lessons from Legislative Techniques .” In Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence , edited by Colonna Lianne , Greenstein Stanley . Stockholm : The Swedish Law and Informatics Institute, Stockholm University .

Uhre Andreas Nordang . 2014 . “ Exploring the Diversity of Transnational Actors in Global Environmental Governance .” Interest Groups & Advocacy . 3 ( 1 ): 59 – 78 .

Ulnicane Inga . 2021 . “ Artificial Intelligence in the European Union: Policy, Ethics and Regulation .” In The Routledge Handbook of European Integrations , edited by Hoerber Thomas , Weber Gabriel , Cabras Ignazio . London : Routledge .

Valentini Laura . 2013 . “ Justice, Disagreement and Democracy .” British Journal of Political Science . 43 ( 1 ): 177 – 99 .

Valentini Laura . 2012 . “ Assessing the Global Order: Justice, Legitimacy, or Political Justice? ” Critical Review of International Social and Political Philosophy . 15 ( 5 ): 593 – 612 .

Vredenburgh Kate . 2022 . “ Fairness .” In The Oxford Handbook of AI Governance , edited by Bullock Justin B. , Chen Yu-Che , Himmelreich Johannes , Hudson Valerie M. , Korinek Anton , Young Matthew M. , Zhang Baobao . Oxford : Oxford University Press .

Verdier Daniel . 2022 . “ Bargaining Strategies for Governance Complex Games .” The Review of International Organizations , 17 ( 2 ): 349 – 371 .

Wagner Ben . 2018 . “ Ethics as an Escape from Regulation: From ‘Ethics-Washing’ to ‘Ethics-Shopping’? ” In Being Profiled: Cogitas Ergo Sum. 10 Years of ‘Profiling the European Citizen’ , edited by Bayamiloglu Emre , Baraliuc Irina , Janssens Liisa , Hildebrandt Mireille . Amsterdam : Amsterdam University Press .

Wahlgren Peter . 2022 . “ How to Regulate AI?” In Nordic Yearbook of Law and Informatics 2020–2021: Law in the Era of Artificial Intelligence , edited by Colonna Lianne , Greenstein Stanley . Stockholm: The Swedish Law and Informatics Institute, Stockholm University .

Weale Albert . 1999 . Democracy . New York : St Martin's Press .

Winfield Alan F. , Michael Katina , Pitt Jeremy , and Evers Vanessa . 2019 . “ Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems .” Proceedings of the IEEE . 107 ( 3 ): 509 – 17 .

Zaidi Waqar , Dafoe Allan . 2021 . International Control of Powerful Technology: Lessons from the Baruch Plan for Nuclear Weapons . Working Paper 2021: 9 . Oxford : Centre for the Governance of AI .

Zhu J. 2022 . “ AI ethics with Chinese Characteristics? Concerns and preferred solutions in Chinese academia .” AI & Society . https://doi.org/10.1007/s00146-022-01578-w .

Zimmermann Annette , and Lee-Stronach Chad . 2022 . “ Proceed with Caution .” Canadian Journal of Philosophy . 52 ( 1 ): 6 – 25 .

  • Online ISSN 1468-2486
  • Print ISSN 1521-9488
  • Copyright © 2024 International Studies Association
  • About Oxford Academic
  • Publish journals with us
  • University press partners
  • What we publish
  • New features  
  • Open access
  • Institutional account management
  • Rights and permissions
  • Get help with access
  • Accessibility
  • Advertising
  • Media enquiries
  • Oxford University Press
  • Oxford Languages
  • University of Oxford

Oxford University Press is a department of the University of Oxford. It furthers the University's objective of excellence in research, scholarship, and education by publishing worldwide

  • Copyright © 2024 Oxford University Press
  • Cookie settings
  • Cookie policy
  • Privacy policy
  • Legal notice

This Feature Is Available To Subscribers Only

Sign In or Create an Account

This PDF is available to Subscribers Only

For full access to this pdf, sign in to an existing account, or purchase an annual subscription.

Cart

  • SUGGESTED TOPICS
  • The Magazine
  • Newsletters
  • Managing Yourself
  • Managing Teams
  • Work-life Balance
  • The Big Idea
  • Data & Visuals
  • Reading Lists
  • Case Selections
  • HBR Learning
  • Topic Feeds
  • Account Settings
  • Email Preferences

Research: Negotiating Is Unlikely to Jeopardize Your Job Offer

  • Einav Hart,
  • Julia Bear,
  • Zhiying (Bella) Ren

research ethics review article

A series of seven studies found that candidates have more power than they assume.

Job seekers worry about negotiating an offer for many reasons, including the worst-case scenario that the offer will be rescinded. Across a series of seven studies, researchers found that these fears are consistently exaggerated: Candidates think they are much more likely to jeopardize a deal than managers report they are. This fear can lead candidates to avoid negotiating altogether. The authors explore two reasons driving this fear and offer research-backed advice on how anxious candidates can approach job negotiations.

Imagine that you just received a job offer for a position you are excited about. Now what? You might consider negotiating for a higher salary, job flexibility, or other benefits , but you’re apprehensive. You can’t help thinking: What if I don’t get what I ask for? Or, in the worst-case scenario, what if the hiring manager decides to withdraw the offer?

research ethics review article

  • Einav Hart is an assistant professor of management at George Mason University’s Costello College of Business, and a visiting scholar at the Wharton School. Her research interests include conflict management, negotiations, and organizational behavior.
  • Julia Bear is a professor of organizational behavior at the College of Business at Stony Brook University (SUNY). Her research interests include the influence of gender on negotiation, as well as understanding gender gaps in organizations more broadly.
  • Zhiying (Bella) Ren is a doctoral student at the Wharton School of the University of Pennsylvania. Her research focuses on conversational dynamics in organizations and negotiations.

Partner Center

medRxiv

Impact of the use of cannabis as a medicine in pregnancy, on the unborn child: a systematic review and meta-analysis protocol

  • Find this author on Google Scholar
  • Find this author on PubMed
  • Search for this author on this site
  • For correspondence: [email protected]
  • Info/History
  • Preview PDF


