
Ethical Considerations in Research | Types & Examples

Published on October 18, 2021 by Pritha Bhandari. Revised on June 22, 2023.

Ethical considerations in research are a set of principles that guide your research designs and practices. Scientists and researchers must always adhere to a certain code of conduct when collecting data from people.

The goals of human research often include understanding real-life phenomena, studying effective treatments, investigating behaviors, and improving lives in other ways. What you decide to research and how you conduct that research involve key ethical considerations.

These considerations work to

  • protect the rights of research participants
  • enhance research validity
  • maintain scientific or academic integrity

Table of contents

  • Why do research ethics matter?
  • Getting ethical approval for your study
  • Types of ethical issues
  • Voluntary participation
  • Informed consent
  • Confidentiality
  • Potential for harm
  • Results communication
  • Examples of ethical failures
  • Other interesting articles
  • Frequently asked questions about research ethics

Why do research ethics matter?

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe for research subjects.

You’ll balance pursuing important research objectives with using ethical research methods and procedures. It’s always necessary to prevent permanent or excessive harm to participants, whether inadvertent or not.

Defying research ethics will also lower the credibility of your research because it’s hard for others to trust your data if your methods are morally questionable.

Even if a research idea is valuable to society, it doesn’t justify violating the human rights or dignity of your study participants.


Getting ethical approval for your study

Before you start any study involving data collection with people, you’ll submit your research proposal to an institutional review board (IRB).

An IRB is a committee that checks whether your research aims and research design are ethically acceptable and follow your institution’s code of conduct. They check that your research materials and procedures are up to code.

If successful, you’ll receive IRB approval, and you can begin collecting data according to the approved procedures. If you want to make any changes to your procedures or materials, you’ll need to submit a modification application to the IRB for approval.

If unsuccessful, you may be asked to re-submit with modifications or your research proposal may receive a rejection. To get IRB approval, it’s important to explicitly note how you’ll tackle each of the ethical issues that may arise in your study.

Types of ethical issues

There are several ethical issues you should always pay attention to in your research design, and these issues can overlap with each other.

You’ll usually outline ways you’ll deal with each issue in your research proposal if you plan to collect data from participants.

Voluntary participation

Voluntary participation means that all research subjects are free to choose to participate without any pressure or coercion.

All participants are able to withdraw from, or leave, the study at any point without feeling an obligation to continue. Your participants don’t need to provide a reason for leaving the study.

It’s important to make it clear to participants that there are no negative consequences or repercussions to their refusal to participate. After all, they’re taking the time to help you in the research process, so you should respect their decisions without trying to change their minds.

Voluntary participation is an ethical principle protected by international law and many scientific codes of conduct.

Take special care to ensure there’s no pressure on participants when you’re working with vulnerable groups of people who may find it hard to stop the study even when they want to.


Informed consent

Informed consent refers to a situation in which all potential participants receive and understand all the information they need to decide whether they want to participate. This includes information about the study’s benefits, risks, funding, and institutional approval.

You make sure to provide all potential participants with all the relevant information about

  • what the study is about
  • the risks and benefits of taking part
  • how long the study will take
  • your supervisor’s contact information and the institution’s approval number

Usually, you’ll provide participants with a text for them to read and ask them if they have any questions. If they agree to participate, they can sign or initial the consent form. Note that this may not be sufficient for informed consent when you work with particularly vulnerable groups of people.

If you’re collecting data from people with low literacy, make sure to verbally explain the consent form to them before they agree to participate.

For participants with very limited English proficiency, you should always translate the study materials or work with an interpreter so they have all the information in their first language.

In research with children, you’ll often need informed permission for their participation from their parents or guardians. Although children cannot give informed consent, it’s best to also ask for their assent (agreement) to participate, depending on their age and maturity level.

Anonymity

Anonymity means that you don’t know who the participants are and you can’t link any individual participant to their data.

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, and videos.

In many cases, it may be impossible to truly anonymize data collection. For example, data collected in person or by phone cannot be considered fully anonymous because some personal identifiers (demographic information or phone numbers) are impossible to hide.

You’ll also need to collect some identifying information if you give your participants the option to withdraw their data at a later stage.

Data pseudonymization is an alternative method where you replace identifying information about participants with pseudonymous, or fake, identifiers. The data can still be linked to participants but it’s harder to do so because you separate personal information from the study data.
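That separation of identifiers from study data can be sketched in a few lines of Python. This is only an illustration: the participant names, the `P-` code format, and the fields are invented for the example, and a real key table would be stored encrypted, access-controlled, and physically apart from the study data.

```python
import secrets

# Minimal sketch of pseudonymization: each participant name is replaced
# with a random code, and the name-to-code key table is kept separately
# from the study data. Names, fields, and code format are illustrative.

key_table = {}  # in practice: stored securely, apart from the study data

def pseudonymize(name):
    """Return a stable pseudonym for a participant, minting one on first use."""
    if name not in key_table:
        key_table[name] = "P-" + secrets.token_hex(4)
    return key_table[name]

# Repeated measures from the same participant stay linked through the
# pseudonym, but the study records themselves contain no names.
study_data = [
    {"participant": pseudonymize(name), "score": score}
    for name, score in [("Ada", 7), ("Ben", 5), ("Ada", 9)]
]

assert study_data[0]["participant"] == study_data[2]["participant"]
assert all(rec["participant"].startswith("P-") for rec in study_data)
```

Because the key table still links codes back to people, pseudonymized data are not anonymous; only destroying the key would break that link.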

Confidentiality

Confidentiality means that you know who the participants are, but you remove all identifying information from your report.

All participants have a right to privacy, so you should protect their personal data for as long as you store or use it. Even when you can’t collect data anonymously, you should secure confidentiality whenever you can.

Some research designs aren’t conducive to confidentiality, but it’s important to make all attempts and inform participants of the risks involved.

Potential for harm

As a researcher, you have to consider all possible sources of harm to participants. Harm can come in many different forms.

  • Psychological harm: Sensitive questions or tasks may trigger negative emotions such as shame or anxiety.
  • Social harm: Participation can involve social risks, public embarrassment, or stigma.
  • Physical harm: Pain or injury can result from the study procedures.
  • Legal harm: Reporting sensitive data could lead to legal risks or a breach of privacy.

It’s best to consider every possible source of harm in your study as well as concrete ways to mitigate them. Involve your supervisor to discuss steps for harm reduction.

Make sure to disclose all possible risks of harm to participants before the study to get informed consent. If there is a risk of harm, prepare to provide participants with resources or counseling or medical services if needed.

For example, if a survey includes sensitive questions that may bring up negative emotions, you inform participants about the sensitive nature of the survey and assure them that their responses will be confidential.

Results communication

The way you communicate your research results can sometimes involve ethical issues. Good science communication is honest, reliable, and credible. It’s best to make your results as transparent as possible.

Take steps to actively avoid plagiarism and research misconduct wherever possible.

Plagiarism

Plagiarism means submitting others’ works as your own. Although it can be unintentional, copying someone else’s work without proper credit amounts to stealing. It’s an ethical problem in research communication because you may benefit by harming other researchers.

Self-plagiarism is when you republish or re-submit parts of your own papers or reports without properly citing your original work.

This is problematic because you may benefit from presenting your ideas as new and original even though they’ve already been published elsewhere in the past. You may also be infringing on your previous publisher’s copyright, violating an ethical code, or wasting time and resources by doing so.

In extreme cases of self-plagiarism, entire datasets or papers are sometimes duplicated. These are major ethical violations because they can skew research findings if taken as original data.

For example, you might notice that two published studies have similar characteristics even though they are from different years. Their sample sizes, locations, treatments, and results are highly similar, and the studies share one author in common.

Research misconduct

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement about data analyses.

Research misconduct is a serious ethical issue because it can undermine academic integrity and institutional credibility. It leads to a waste of funding and resources that could have been used for alternative research.

In a notorious case of research misconduct, Andrew Wakefield and colleagues published a 1998 study claiming that the MMR vaccine caused autism. Later investigations revealed that they fabricated and manipulated their data to show a nonexistent link between vaccines and autism. Wakefield also neglected to disclose important conflicts of interest, and his medical license was taken away.

This fraudulent work sparked vaccine hesitancy among parents and caregivers. The rate of MMR vaccinations in children fell sharply, and measles outbreaks became more common due to a lack of herd immunity.

Examples of ethical failures

Research scandals with ethical failures are littered throughout history, but some took place not that long ago.

Some scientists in positions of power have historically mistreated or even abused research participants to investigate research problems at any cost. These participants were prisoners, patients under their care, or people who otherwise trusted them to treat them with dignity.

To demonstrate the importance of research ethics, we’ll briefly review two research studies that violated human rights in modern history.

The Nazi human experiments conducted on concentration camp prisoners during World War II were inhumane and resulted in trauma, permanent disabilities, or death in many cases.

After some Nazi doctors were put on trial for their crimes, the Nuremberg Code of research ethics for human experimentation was developed in 1947 to establish a new standard for human experimentation in medical research.

In the Tuskegee syphilis study, which began in 1932, researchers recruited Black men in Alabama with the promise of free medical care. In reality, the actual goal was to study the effects of the disease when left untreated, and the researchers never informed participants about their diagnoses or the research aims.

Although participants experienced severe health problems, including blindness and other complications, the researchers only pretended to provide medical care.

When treatment became possible in 1943, 11 years after the study began, none of the participants were offered it, despite their health conditions and high risk of death.

Ethical failures like these resulted in severe harm to participants, wasted resources, and lower trust in science and scientists. This is why all research institutions have strict ethical guidelines for performing research.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Measures of central tendency
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Thematic analysis
  • Cohort study
  • Peer review
  • Ethnography

Research bias

  • Implicit bias
  • Cognitive bias
  • Conformity bias
  • Hawthorne effect
  • Availability heuristic
  • Attrition bias
  • Social desirability bias

Frequently asked questions about research ethics

What are ethical considerations in research?

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others.

These considerations protect the rights of research participants, enhance research validity, and maintain scientific integrity.

Why do research ethics matter?

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

What is the difference between anonymity and confidentiality?

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations.

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.
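That aggregation step can be sketched in Python. The groups, scores, and field names below are invented for illustration; the point is simply that the report describes groups, never individuals.

```python
from statistics import mean

# Minimal sketch of aggregate reporting: individual responses are
# summarized per group, so the report never exposes any single
# participant's row. Groups and scores are illustrative.

responses = [
    {"group": "control", "score": 3},
    {"group": "control", "score": 5},
    {"group": "treatment", "score": 7},
    {"group": "treatment", "score": 8},
]

by_group = {}
for r in responses:
    by_group.setdefault(r["group"], []).append(r["score"])

report = {
    group: {"n": len(scores), "mean_score": mean(scores)}
    for group, scores in by_group.items()
}
print(report)  # each cell describes a group, never an individual
```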

What is research misconduct?

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Cite this Scribbr article


Bhandari, P. (2023, June 22). Ethical Considerations in Research | Types & Examples. Scribbr. Retrieved February 15, 2024, from https://www.scribbr.com/methodology/research-ethics/



National Institutes of Health (NIH) - Turning Discovery into Health


NIH Clinical Research Trials and You: Guiding Principles for Ethical Research

Pursuing Potential Research Participants’ Protections


“When people are invited to participate in research, there is a strong belief that it should be their choice based on their understanding of what the study is about, and what the risks and benefits of the study are,” said Dr. Christine Grady, chief of the NIH Clinical Center Department of Bioethics, to Clinical Center Radio in a podcast.

Clinical research advances the understanding of science and promotes human health. However, it is important to remember the individuals who volunteer to participate in research. There are precautions researchers can take – in the planning, implementation and follow-up of studies – to protect these participants in research. Ethical guidelines are established for clinical research to protect patient volunteers and to preserve the integrity of the science.

NIH Clinical Center researchers published seven main principles to guide the conduct of ethical research:

  • Social and clinical value
  • Scientific validity
  • Fair subject selection
  • Favorable risk-benefit ratio
  • Independent review
  • Informed consent
  • Respect for potential and enrolled subjects

Social and clinical value

Every research study is designed to answer a specific question. The answer should be important enough to justify asking people to accept some risk or inconvenience for others. In other words, answers to the research question should contribute to scientific understanding of health or improve our ways of preventing, treating, or caring for people with a given disease to justify exposing participants to the risk and burden of research.

Scientific validity

A study should be designed in a way that will get an understandable answer to the important research question. This includes considering whether the question asked is answerable, whether the research methods are valid and feasible, and whether the study is designed with accepted principles, clear methods, and reliable practices. Invalid research is unethical because it is a waste of resources and exposes people to risk for no purpose.

Fair subject selection

The primary basis for recruiting participants should be the scientific goals of the study — not vulnerability, privilege, or other unrelated factors. Participants who accept the risks of research should be in a position to enjoy its benefits. Specific groups of participants (for example, women or children) should not be excluded from research opportunities without a good scientific reason or a particular susceptibility to risk.

Favorable risk-benefit ratio

Uncertainty about the degree of risks and benefits associated with a clinical research study is inherent. Research risks may be trivial or serious, transient or long-term. Risks can be physical, psychological, economic, or social. Everything should be done to minimize the risks and inconvenience to research participants, to maximize the potential benefits, and to determine that the potential benefits are proportionate to, or outweigh, the risks.

Independent review

To minimize potential conflicts of interest and make sure a study is ethically acceptable before it starts, an independent review panel should review the proposal and ask important questions, including: Are those conducting the trial sufficiently free of bias? Is the study doing all it can to protect research participants? Has the trial been ethically designed and is the risk–benefit ratio favorable? The panel also monitors a study while it is ongoing.

Informed consent

Potential participants should make their own decision about whether they want to participate or continue participating in research. This is done through a process of informed consent in which individuals (1) are accurately informed of the purpose, methods, risks, benefits, and alternatives to the research, (2) understand this information and how it relates to their own clinical situation or interests, and (3) make a voluntary decision about whether to participate.

Respect for potential and enrolled participants

Individuals should be treated with respect from the time they are approached for possible participation — even if they refuse enrollment in a study — throughout their participation and after their participation ends. This includes:

  • respecting their privacy and keeping their private information confidential
  • respecting their right to change their mind, to decide that the research does not match their interests, and to withdraw without a penalty
  • informing them of new information that might emerge in the course of research, which might change their assessment of the risks and benefits of participating
  • monitoring their welfare and, if they experience adverse reactions, unexpected effects, or changes in clinical status, ensuring appropriate treatment and, when necessary, removal from the study
  • informing them about what was learned from the research

More information on these seven guiding principles and on bioethics in general

This page last reviewed on March 16, 2016



Understanding Scientific and Research Ethics


How to pass journal ethics checks to ensure a smooth submission and publication process

Reputable journals screen for ethics at submission—and inability to pass ethics checks is one of the most common reasons for rejection. Unfortunately, once a study has begun, it’s often too late to secure the requisite ethical reviews and clearances. Learn how to prepare for publication success by ensuring your study meets all ethical requirements before work begins.

The underlying principles of scientific and research ethics

Scientific and research ethics exist to safeguard human rights, ensure that we treat animals respectfully and humanely, and protect the natural environment.

The specific details may vary widely depending on the type of research you’re conducting, but there are clear themes running through all research and reporting ethical requirements:

  • Documented third-party oversight
  • Consent and anonymity
  • Full transparency

If you fulfill each of these broad requirements, your manuscript should sail through any journal’s ethics check.


If your research is 100% theoretical, you might be able to skip this one. But if you work with living organisms in any capacity—whether you’re administering a survey, collecting data from medical records, culturing cells, working with zebrafish, or counting plant species in a ring—oversight and approval by an ethics committee is a prerequisite for publication. This oversight can take many different forms:

  • For human studies and studies using human tissue or cells, obtain approval from your institutional review board (IRB). Register clinical trials with the World Health Organization (WHO) or International Committee of Medical Journal Editors (ICMJE).
  • For animal research, consult with your institutional animal care and use committee (IACUC). Note that there may be special requirements for non-human primates, cephalopods, and other specific species, as well as for wild animals.
  • For field studies, anthropology, and paleontology, the type of permission required will depend on many factors, like the location of the study, whether the site is publicly or privately owned, possible impacts on endangered or protected species, and local permit requirements.

TIP: You’re not exempt until your committee tells you so

Even if you think your study probably doesn’t require approval, submit it to the review board anyway. Many journals won’t consider retrospective approvals. Obtaining formal approval or an exemption up front is worth it to ensure your research is eligible for publication in the future.

TIP: Keep your committee records close

Clearly label your IRB/IACUC paperwork, permit numbers, and any participant permission forms (including blank copies), and keep them in a safe place. You will need them when you submit to a journal. Providing these details proactively as part of your initial submission can minimize delays and get your manuscript through journal checks and into the hands of reviewers sooner.

Consent & anonymity

Obtaining consent from human subjects

You may not conduct research on human beings unless the subjects understand what you are doing and agree to be a part of your study. If you work with human subjects, you must obtain informed written consent from the participants or their legal guardians. 

There are many circumstances where extra care may be required in order to obtain consent. The more vulnerable the population you are working with, the stricter these guidelines will be. For example, your IRB may have special requirements for working with minors, the elderly, or developmentally delayed participants. Remember that these rules may vary from country to country. Providing a link to the relevant legal reference in your area can help speed the screening and approval process.

TIP: What if you are working with a population where reading and writing aren’t common?

Alternatives to written consent (such as verbal consent or a thumbprint) are acceptable in some cases, but consent still has to be clearly documented. To ensure eligibility for publication, be sure to:

  • Get IRB approval for obtaining verbal rather than written consent
  • Be prepared to explain why written consent could not be obtained
  • Keep a copy of the script you used to obtain this consent, and record when consent was obtained for your own records

Consent and reporting for human tissue and cell lines

Consent from the participant or their next-of-kin is also required for the use of human tissue and cell lines. This includes discarded tissue, for example, the by-products of surgery.

When working with cell lines transparency and good record keeping are essential. Here are some basic guidelines to bear in mind:

  • When working with established cell lines, cite the published article where the cell line was first described.
  • If you’re using repository or commercial cell lines, explain exactly which ones, and provide the catalog or repository number.
  • If you received a cell line from a colleague, rather than directly from a repository or company, be sure to mention it. Explain who gifted the cells and when.
  • For a new cell line obtained from a colleague, there may not be a published article to cite yet, but the work to generate the cell line must meet the usual requirements of consent—even if it was carried out by another research group. You’ll need to provide a copy of your colleagues’ IRB approval and details about the consent procedures in order to publish the work.

Finally, you’re obliged to keep your human subjects anonymous and to protect any identifying information in photos and raw data. Remove all names, birth dates, detailed addresses, or job information from files you plan to share. Blur faces and tattoos in any images. Details such as geography (city/country), gender, age, or profession may be shared at a generalized level and in aggregate. Read more about standards for de-identifying datasets in The BMJ .
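A first pass at this kind of de-identification can be sketched as below. The field names are hypothetical, and real de-identification (especially for free text and images) requires more than dropping columns, so treat this as an illustration of the idea rather than a complete tool.

```python
# Minimal sketch: drop direct identifiers and generalize a
# quasi-identifier (exact age -> 10-year band) before sharing records.
# All field names here are illustrative, not a standard.

DIRECT_IDENTIFIERS = {"name", "birth_date", "address", "email", "phone", "job_title"}

def deidentify(record):
    """Return a copy of a participant record with direct identifiers
    removed and age generalized to an aggregate band."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "age" in cleaned:  # generalize: 34 -> "30-39"
        decade = (cleaned["age"] // 10) * 10
        cleaned["age"] = f"{decade}-{decade + 9}"
    return cleaned

record = {
    "name": "A. Participant",
    "email": "a@example.org",
    "age": 34,
    "country": "Kenya",
    "response": 4,
}
print(deidentify(record))  # {'age': '30-39', 'country': 'Kenya', 'response': 4}
```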

TIP: Anonymity can be important in field work too

Be careful about revealing geographic data in fieldwork. You don’t want to tip poachers off to the location of the endangered elephant population you studied, or expose petroglyphs to vandalism.

Full Transparency

No matter the discipline, transparent reporting of methods, results, data, software and code is essential to ethical research practice. Transparency is also key to the future reproducibility of your work.

When you submit your study to a journal, you’ll be asked to provide a variety of statements certifying that you’ve obtained the appropriate permissions and clearances, and explaining how you conducted the work. You may also be asked to provide supporting documentation, including field records and raw data. Provide as much detail as you can at this stage. Clear and complete disclosure statements will minimize back-and-forth with the journal, helping your submission to clear ethics checks and move on to the assessment stage sooner.

TIP: Save that data

As you work, be sure to clearly label and organize your data files in a way that will make sense to you later. However close you are to the work while conducting your study, remember that two years could easily pass between capturing your data and publishing an article reporting the results. You don’t want to be stuck piecing together confusing records in order to create figures and data files for repositories.

Read our full guide to preparing data for submission.

Keep in mind that scientific and research ethics are always evolving. As laws change and as we learn more about influence, implicit bias and animal sentience, the scientific community continues to strive to elevate our research practice.

A checklist to ensure you’re ethics-check ready

Before you begin your research

Obtain approval from your IRB, IACUC or other approving body

Obtain written informed consent from human participants, guardians or next-of-kin

Obtain permits or permission from property owners, or confirm that permits are not required

Label and save all of your records

As you work

Adhere strictly to the protocols approved by your committee

Clearly label your data, and store it in a way that will make sense to your future self

As you write, submit and deposit your results

Be ready to cite specific approval organizations, permit numbers, cell lines, and other details in your ethics statement and in the methods section of your manuscript

Anonymize all participant data (including human and in some cases animal or geographic data)

If a figure does include identifying information (e.g., a participant’s face), obtain special consent


Introduction: What is Research Ethics?

Research Ethics is defined here to be the ethics of the planning, conduct, and reporting of research. This introduction covers what research ethics is, its ethical distinctions, approaches to teaching research ethics, and other resources on this topic.


What is Research Ethics?

Research Ethics is defined here to be the ethics of the planning, conduct, and reporting of research. It is clear that research ethics should include:

  • Protections of human and animal subjects

However, not all researchers use human or animal subjects, nor are the ethical dimensions of research confined solely to protections for research subjects. Other ethical challenges are rooted in many dimensions of research, including the:

  • Collection, use, and interpretation of research data
  • Methods for reporting and reviewing research plans or findings
  • Relationships among researchers with one another
  • Relationships between researchers and those that will be affected by their research
  • Means for responding to misunderstandings, disputes, or misconduct
  • Options for promoting ethical conduct in research

The domain of research ethics is intended to include nothing less than the fostering of research that protects the interests of the public, the subjects of research, and the researchers themselves.

Ethical Distinctions

In discussing or teaching research ethics, it is important to keep some basic distinctions in mind.

  • It is important not to confuse moral claims about how people ought to behave with descriptive claims about how they in fact do behave. From the fact that gift authorship or signing off on un-reviewed data may be "common practice" in some contexts, it doesn't follow that they are morally or professionally justified. Nor is morality to be confused with the moral beliefs or ethical codes that a given group or society holds (how some group thinks people should live). A belief in segregation is not morally justified simply because it is widely held by a group of people or given society. Philosophers term this distinction between prescriptive and descriptive claims the 'is-ought distinction.'  
  • A second important distinction is that between morality and the law. The law may or may not conform to the demands of ethics (Kagan, 1998). To take a contemporary example: many believe that the law prohibiting federally funded stem cell research is objectionable on moral (as well as scientific) grounds, i.e., that such research can save lives and prevent much human misery. History is full of examples of bad laws, that is laws now regarded as morally unjustifiable, e.g., the laws of apartheid, laws prohibiting women from voting or inter-racial couples from marrying.  
  • It is also helpful to distinguish between two different levels of discussion (or two different kinds of ethical questions): first-order or "ground-level" questions and second-order questions.  
  • First-order moral questions concern what we should do. Such questions may be very general or quite specific. One might ask whether the tradition of 'senior' authorship should be defended and preserved or, more generally, what are the principles that should go into deciding the issue of 'senior' authorship. Such questions and the substantive proposals regarding how to answer them belong to the domain of what moral philosophers call 'normative ethics.'  
  • Second-order moral questions concern the nature and purpose of morality itself. When someone claims that falsifying data is wrong, what exactly is the standing of this claim? What exactly does the word 'wrong' mean in the conduct of scientific research? And what are we doing when we make claims about right and wrong, scientific integrity and research misconduct? These second-order questions are quite different from the ground-level questions about how to conduct one's private or professional life raised above. They concern the nature of morality rather than its content, i.e., what acts are required, permitted or prohibited. This is the domain of what moral philosophers call 'metaethics' (Kagan, 1998).

Ethical Approaches

The approaches discussed below each provide moral principles and ways of thinking about the responsibilities, duties and obligations of moral life. Individually and jointly, they can provide practical guidance in ethical decision-making.

  • One of the most influential and familiar approaches to ethics is deontological ethics, associated with Immanuel Kant (1724-1804). Deontological ethics holds certain acts to be right or wrong in themselves, e.g., promise breaking or lying. So, for example, in the context of research, fraud, plagiarism and misrepresentation are regarded as morally wrong in themselves, not simply because they (tend to) have bad consequences. The deontological approach is generally grounded in a single fundamental principle: act as you would wish others to act towards you, or always treat persons as an end, never merely as a means to an end.
  • From such central principles are derived rules or guidelines for what is permitted, required and prohibited. Objections to principle-based or deontological ethics include the difficulty of applying highly general principles to specific cases, e.g.: Does treating persons as ends rule out physician-assisted suicide, or require it? Deontological ethics is generally contrasted to consequentialist ethics (Honderich, 1995).  
  • According to consequentialist approaches, the rightness or wrongness of an action depends solely on its consequences. One should act in such a way as to bring about the best state of affairs, where the best state of affairs may be understood in various ways, e.g., as the greatest happiness for the greatest number of people, maximizing pleasure and minimizing pain or maximizing the satisfaction of preferences. A theory such as Utilitarianism (with its roots in the work of Jeremy Bentham and John Stuart Mill) is generally taken as the paradigm example of consequentialism. Objections to consequentialist ethics tend to focus on its willingness to regard individual rights and values as "negotiable." So, for example, most people would regard murder as wrong independently of the fact that killing one person might allow several others to be saved (the infamous sacrifice of an ailing patient to provide organs for several other needy patients). Similarly, widespread moral opinion holds certain values important (integrity, justice) not only because they generally lead to good outcomes, but in and of themselves.
  • Virtue ethics focuses on moral character rather than action and behavior considered in isolation. Central to this approach is the question what ought we (as individuals, as scientists, as physicians) to be rather than simply what we ought to do. The emphasis here is on inner states, that is, moral dispositions and habits such as courage or a developed sense of personal integrity. Virtue ethics can be a useful approach in the context of RCR and professional ethics, emphasizing the importance of moral virtues such as compassion, honesty, and respect. This approach has also a great deal to offer in discussions of bioethical issues where a traditional emphasis on rights and abstract principles frequently results in polarized, stalled discussions (e.g., abortion debates contrasting the rights of the mother against the rights of the fetus).  
  • The term 'an ethics of care' grows out of the work of Carol Gilligan, whose empirical work in moral psychology claimed to discover a "different voice," a mode of moral thinking distinct from principle-based moral thinking (e.g., the theories of Kant and Mill). An ethics of care stresses compassion and empathetic understanding, virtues Gilligan associated with traditional care-giving roles, especially those of women.  
  • This approach differs from traditional moral theories in two important ways. First, it assumes that it is the connections between persons, e.g., lab teams, colleagues, parents and children, student and mentor, not merely the rights and obligations of discrete individuals that matter. The moral world, on this view, is best seen not as the interaction of discrete individuals, each with his or her own interests and rights, but as an interrelated web of obligations and commitment. We interact, much of the time, not as private individuals, but as members of families, couples, institutions, research groups, a given profession and so on. Second, these human relationships, including relationships of dependency, play a crucial role on this account in determining what our moral obligations and responsibilities are. So, for example, individuals have special responsibilities to care for their children, students, patients, and research subjects.  
  • An ethics of care is thus particularly useful in discussing human and animal subjects research, issues of informed consent, and the treatment of vulnerable populations such as children, the infirm or the ill.  
  • The case study approach begins from real or hypothetical cases. Its objective is to identify the intuitively plausible principles that should be taken into account in resolving the issues at hand. The case study approach then proceeds to critically evaluate those principles. In discussing whistle-blowing, for example, a good starting point is with recent cases of research misconduct, seeking to identify and evaluate principles such as a commitment to the integrity of science, protecting privacy, or avoiding false or unsubstantiated charges. In the context of RCR instruction, case studies provide one of the most interesting and effective approaches to developing sensitivity to ethical issues and to honing ethical decision-making skills.  
  • Strictly speaking, casuistry is more properly understood as a method for doing ethics rather than as itself an ethical theory. However, casuistry is not wholly unconnected to ethical theory. The need for a basis upon which to evaluate competing principles, e.g., the importance of the well-being of an individual patient vs. a concern for just allocation of scarce medical resources, makes ethical theory relevant even with case study approaches.  
  • Applied ethics is a branch of normative ethics. It deals with practical questions particularly in relation to the professions. Perhaps the best known area of applied ethics is bioethics, which deals with ethical questions arising in medicine and the biological sciences, e.g., questions concerning the application of new areas of technology (stem cells, cloning, genetic screening, nanotechnology, etc.), end of life issues, organ transplants, and just distribution of healthcare. Training in responsible conduct of research or "research ethics" is merely one among various forms of professional ethics that have come to prominence since the 1960s. Worth noting, however, is that concern with professional ethics is not new, as ancient codes such as the Hippocratic Oath and guild standards attest (Singer, 1986).
References

  • Adams D, Pimple KD (2005): Research Misconduct and Crime: Lessons from Criminal Science on Preventing Misconduct and Promoting Integrity. Accountability in Research 12(3):225-240.
  • Anderson MS, Horn AS, Risbey KR, Ronning EA, De Vries R, Martinson BC (2007): What Do Mentoring and Training in the Responsible Conduct of Research Have To Do with Scientists' Misbehavior? Findings from a National Survey of NIH-Funded Scientists. Academic Medicine 82(9):853-860.
  • Bulger RE, Heitman E (2007): Expanding Responsible Conduct of Research Instruction across the University. Academic Medicine. 82(9):876-878.
  • Kalichman MW (2006): Ethics and Science: A 0.1% solution. Issues in Science and Technology 23:34-36.
  • Kalichman MW (2007): Responding to Challenges in Educating for the Responsible Conduct of Research, Academic Medicine. 82(9):870-875.
  • Kalichman MW, Plemmons DK (2007): Reported Goals for Responsible Conduct of Research Courses. Academic Medicine. 82(9):846-852.
  • Kalichman MW (2009): Evidence-based research ethics. The American Journal of Bioethics 9(6&7): 85-87.
  • Pimple KD (2002): Six Domains of Research Ethics: A Heuristic Framework for the Responsible Conduct of Research. Science and Engineering Ethics 8(2):191-205.
  • Steneck NH (2006): Fostering Integrity in Research: Definitions, Current Knowledge, and Future Directions. Science and Engineering Ethics 12:53-74.
  • Steneck NH, Bulger RE (2007): The History, Purpose, and Future of Instruction in the Responsible Conduct of Research. Academic Medicine. 82(9):829-834.
  • Vasgird DR (2007): Prevention over Cure: The Administrative Rationale for Education in the Responsible Conduct of Research. Academic Medicine. 82(9):835-837.
  • Aristotle. The Nicomachean Ethics.
  • Beauchamp TL, Childress JF (2001): Principles of Biomedical Ethics, 5th edition, NY: Oxford University Press.
  • Bentham, J (1781): An Introduction to the Principles of Morals and Legislation.
  • Gilligan C (1993): In a Different Voice: Psychological Theory and Women's Development. Cambridge: Harvard University Press.
  • Glover J (1977): Causing Death and Saving Lives. Penguin Books.
  • Honderich T, ed. (1995): The Oxford Companion to Philosophy, Oxford and New York: Oxford University Press.
  • Kagan S (1998): Normative Ethics. Westview Press.
  • Kant I (1785): Groundwork of the Metaphysics of Morals.
  • Kant I (1788): Critique of Practical Reason.
  • Kant I (1797): The Metaphysics of Morals.
  • Kant I (1797): On a Supposed Right to Lie from Benevolent Motives.
  • Kuhse H, Singer P (1999): Bioethics: An Anthology. Blackwell Publishers.
  • Mill JS (1861): Utilitarianism.
  • Rachels J (1999): The Elements of Moral Philosophy, 3rd edition, Boston: McGraw-Hill.
  • Regan T (1993): Matters of Life and Death: New Introductory Essays in Moral Philosophy, 3rd edition. New York: McGraw-Hill.
  • Singer P (1993): Practical Ethics, 2nd ed. Cambridge University Press.

The Resources for Research Ethics Education site was originally developed and maintained by Dr. Michael Kalichman, Director of the Research Ethics Program at the University of California San Diego. The site was transferred to the Online Ethics Center in 2021 with the permission of the author.


This material is based upon work supported by the National Science Foundation under Award No. 2055332. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

Public Health Notes

Research Ethics: Definition, Principles and Advantages

October 13, 2020 | Kusum Wagle | Epidemiology

What is Research Ethics?

  • Ethics are the set of rules that govern our expectations of our own and others’ behavior.
  • Research ethics are the set of ethical guidelines that guides us on how scientific research should be conducted and disseminated.
  • Research ethics govern the standards of conduct for scientific researchers; they are the guidelines for conducting research responsibly.
  • Research that involves human subjects or participants raises distinctive and complex ethical, legal, social and administrative concerns.
  • Research ethics is specifically concerned with the analysis of ethical issues that arise when people are involved as participants in a study.
  • Research ethics committee/Institutional Review Board (IRB) reviews whether the research is ethical enough or not to protect the rights, dignity and welfare of the respondents.

Objectives of Research Ethics:

  • The first and most comprehensive objective: to protect human participants, their dignity, rights and welfare.
  • The second objective: to ensure that research is conducted in a way that serves the interests of individuals, groups and/or society as a whole.
  • The third objective: to examine specific research activities and projects for their ethical soundness, looking at issues such as risk management, protection of privacy and the process of informed consent.

Principles of Research Ethics:

Broadly categorized, there are five main principles of research ethics:

Minimizing the risk of harm

It is necessary to minimize any sort of harm to the participants. There are several forms of harm that participants can be exposed to:

  • Physical harm to participants.
  • Psychological distress and embarrassment.
  • Social disadvantage.
  • Violation of participants’ confidentiality and privacy.

In order to minimize the risk of harm, the researcher/data collector should:

  • Obtain informed consent from participants.
  • Protect the anonymity and confidentiality of participants.
  • Avoid deceptive practices when designing the research.
  • Provide participants with the right to withdraw.


Obtaining informed consent

One of the fundamentals of research ethics is the notion of informed consent.

Informed consent means that a person knowingly, voluntarily and intelligently gives consent to participate in research.

Informed consent means that the participants should be well-informed about the:

  • Introduction and objective of the research
  • Purpose of the discussion
  • Anticipated advantages, benefits/harm from the research (if any)
  • Use of research
  • Their role in research
  • Methods which will be used to protect anonymity and confidentiality of the participant
  • Freedom to decline to answer any question or to withdraw from the research
  • Whom to contact if the participant needs additional information about the research


Protecting anonymity and confidentiality

Protecting the anonymity and confidentiality of research participants is another practical component of research ethics.

Protecting anonymity: keeping participants anonymous by not revealing their name, caste or any other information that may reveal their identity.

Maintaining confidentiality: ensuring that information given by a participant is shared with no one outside the research team, and is kept secure from other people.
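As a minimal sketch of how these two protections can be put into practice when handling study data, the snippet below replaces direct identifiers with random codes and keeps the linkage key in a separate structure that can be stored apart from the de-identified data set. The `pseudonymize` function and the `name`/`score` fields are illustrative assumptions, not part of any standard library.

```python
import secrets

def pseudonymize(records, id_field="name"):
    """Replace direct identifiers with random codes.

    Illustrative sketch: returns the de-identified records plus a
    separate linkage key (identifier -> code) that should be stored
    securely, away from the data itself.
    """
    key = {}      # linkage key: real identifier -> random code
    cleaned = []
    for rec in records:
        identifier = rec[id_field]
        if identifier not in key:
            # A random code (rather than a plain hash of the name)
            # prevents re-identification by guessing and hashing names.
            key[identifier] = secrets.token_hex(4)
        rec = dict(rec)                 # copy; leave caller's data intact
        rec[id_field] = key[identifier]
        cleaned.append(rec)
    return cleaned, key

# Hypothetical example data for two responses from the same participant.
data = [{"name": "A. Participant", "score": 42},
        {"name": "A. Participant", "score": 37}]
deidentified, linkage_key = pseudonymize(data)
```

The same person maps to the same code, so responses can still be linked for analysis, while names no longer appear anywhere in the working data set.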


Avoiding deceptive practices

  • The researcher should avoid all deceptive and misleading practices that might misinform the respondent.
  • This includes avoiding activities such as communicating wrong messages, giving false assurances or providing false information.


Right to withdraw

  • Participants must have the right to withdraw at any point of the research.
  • When a respondent decides to withdraw, they should not be pressured or coerced in any way to continue.

Apart from the above-mentioned principles, other ethical aspects that must be considered while doing research are:

Protection of vulnerable groups of people:

  • Vulnerability is a characteristic of people who are unable to protect their own rights and well-being. Vulnerable groups include captive populations (prisoners, institutionalized people, students, etc.), mentally ill persons, elderly people, children, the critically ill or dying, the poor, people with learning disabilities, and those who are sedated or unconscious.
  • Their inclusion in research calls for extra safeguards and sensitivity from the researcher, because they may be unable to give informed consent and are at greater risk of being betrayed, exposed or coerced into participating.

Skills of the researcher:

  • Researchers should have the basic skills and knowledge needed for the specific study, and be conscious of the limits of their personal competence in research.
  • Any lack of knowledge in the area under research must be clearly stated.
  • Inexperienced researchers should work under qualified supervision, and their work should be reviewed by an ethics committee.

Advantages of Research Ethics:

  • Research ethics promote the aims of research, such as knowledge, truth, and the avoidance of error.
  • They increase trust between the researcher and the respondent.
  • Adhering to ethical principles protects the dignity, rights and welfare of research participants.
  • Researchers can be held accountable and answerable for their actions.
  • Ethics promote social and moral values.
  • Ethical standards uphold the values that are vital to collaborative work, such as trust, accountability, mutual respect, and fairness.
  • Ethical norms in research also help to build public support for research: people are more likely to trust a research project if they can trust the quality and integrity of the research.

Limitations of Research Ethics:

For subjects:

  • Risks to physical integrity, including those linked with experimental drugs and procedures and with other interventions used in the study (e.g., procedures used to monitor research participants, such as blood sampling, X-rays or lumbar punctures).
  • Psychological risks: for example, a questionnaire may pose a risk if it touches on traumatic events or experiences that are especially distressing.
  • Social, legal and economic risks: for example, if personal information collected during a study is unintentionally released, participants might face a threat of discrimination and stigmatization.
  • Certain ethnic or indigenous groups may suffer discrimination, stigmatization or other burdens as a result of research, particularly if members of those groups are identified as having a greater-than-usual risk of contracting a specific disease.
  • The research may also affect the existing health system: for example, human and financial resources devoted to research may divert attention from other pressing health care needs in the community.

How can we ensure ethics at different steps of research?

The following process helps to ensure ethics at different steps of research:

  • Collect the facts and discuss intellectual property openly
  • Define the ethical issues
  • Identify the affected parties (stakeholders)
  • Identify the consequences
  • Identify the obligations (principles, rights, justice)
  • Consider your character and integrity
  • Think creatively about potential actions
  • Respect privacy and confidentiality
  • Decide on the appropriate ethical action and be prepared to deal with opposing points of view.




Ethical Considerations In Psychology Research

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, Ph.D., is a qualified psychology teacher with over 18 years experience of working in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Ethics refers to the correct rules of conduct necessary when carrying out research. We have a moral responsibility to protect research participants from harm.

However important the issue under investigation, psychologists must remember that they have a duty to respect the rights and dignity of research participants. This means that they must abide by certain moral principles and rules of conduct.

What are Ethical Guidelines?

In Britain, ethical guidelines for research are published by the British Psychological Society, and in America, by the American Psychological Association. The purpose of these codes of conduct is to protect research participants, the reputation of psychology, and psychologists themselves.

Moral issues rarely yield a simple, unambiguous, right or wrong answer. It is, therefore, often a matter of judgment whether the research is justified or not.

For example, it might be that a study causes psychological or physical discomfort to participants; maybe they suffer pain or perhaps even come to serious harm.

On the other hand, the investigation could lead to discoveries that benefit the participants themselves or even have the potential to increase the sum of human happiness.

Rosenthal and Rosnow (1984) also discuss the potential costs of failing to carry out certain research. Who is to weigh up these costs and benefits? Who is to judge whether the ends justify the means?

Finally, if you are ever in doubt as to whether research is ethical or not, it is worthwhile remembering that if there is a conflict of interest between the participants and the researcher, it is the interests of the subjects that should take priority.

Studies must now undergo an extensive review by an institutional review board (US) or ethics committee (UK) before they are implemented. All UK research requires ethical approval by one or more of the following:

  • Department Ethics Committee (DEC) : for most routine research.
  • Institutional Ethics Committee (IEC) : for non-routine research.
  • External Ethics Committee (EEC) : for research that is externally regulated (e.g., NHS research).

Committees review proposals to assess if the potential benefits of the research are justifiable in light of the possible risk of physical or psychological harm.

These committees may request researchers make changes to the study’s design or procedure or, in extreme cases, deny approval of the study altogether.

The British Psychological Society (BPS) and American Psychological Association (APA) have issued a code of ethics in psychology that provides guidelines for conducting research.  Some of the more important ethical issues are as follows:

Informed Consent

Before the study begins, the researcher must outline to the participants what the research is about and then ask for their consent (i.e., permission) to participate.

An adult (18 years+) with the capacity to understand what participation involves can provide consent. Parents/legal guardians of minors can also provide consent to allow their children to participate in a study.

Whenever possible, investigators should obtain the consent of participants. In practice, this means it is not sufficient to get potential participants to say “Yes.”

They also need to know what it is that they agree to. In other words, the psychologist should, so far as is practicable, explain what is involved in advance and obtain the informed consent of participants.

Informed consent must be informed, voluntary, and rational. Participants must be given relevant details to make an informed decision, including the purpose, procedures, risks, and benefits. Consent must be given voluntarily without undue coercion. And participants must have the capacity to rationally weigh the decision.

Components of informed consent include clearly explaining the risks and expected benefits, addressing potential therapeutic misconceptions about experimental treatments, allowing participants to ask questions, and describing methods to minimize risks like emotional distress.

Investigators should tailor the consent language and process appropriately for the study population. Obtaining meaningful informed consent is an ethical imperative for human subjects research.

The voluntary nature of participation should not be compromised through coercion or undue influence. Inducements should be fair and not excessive/inappropriate.

However, it is not always possible to gain informed consent.  Where the researcher can’t ask the actual participants, a similar group of people can be asked how they would feel about participating.

If they think it would be OK, then it can be assumed that the real participants will also find it acceptable. This is known as presumptive consent.

However, a problem with this method is that there might be a mismatch between how people think they would feel/behave and how they actually feel and behave during a study.

In order for consent to be ‘informed,’ consent forms may need to be accompanied by an information sheet for participants, setting out information about the proposed study (in lay terms), along with details about the investigators and how they can be contacted.

Special considerations exist when obtaining consent from vulnerable populations with decisional impairments, such as psychiatric patients, intellectually disabled persons, and children/adolescents. Capacity can vary widely so should be assessed individually, but interventions to improve comprehension may help. Legally authorized representatives usually must provide consent for children.

Participants must be given information relating to the following:

  • A statement that participation is voluntary and that refusal to participate will not result in any consequences or any loss of benefits that the person is otherwise entitled to receive.
  • Purpose of the research.
  • All foreseeable risks and discomforts to the participant (if there are any). These include not only physical injury but also possible psychological harm.
  • Procedures involved in the research.
  • Benefits of the research to society and possibly to the individual human subject.
  • Length of time the subject is expected to participate.
  • Person to contact for answers to questions or in the event of injury or emergency.
  • Subjects’ right to confidentiality and the right to withdraw from the study at any time without any consequences.

Debriefing

Debriefing after a study involves informing participants about the purpose, providing an opportunity to ask questions, and addressing any harm from participation. Debriefing serves an educational function and allows researchers to correct misconceptions. It is an ethical imperative.

After the research is over, the participant should be able to discuss the procedure and the findings with the psychologist. They must be given a general idea of what the researcher was investigating and why, and their part in the research should be explained.

Participants must be told if they have been deceived and given reasons why. They must be asked if they have any questions, which should be answered honestly and as fully as possible.

Debriefing should occur as soon as possible and be as full as possible; experimenters should take reasonable steps to ensure that participants understand debriefing.

“The purpose of debriefing is to remove any misconceptions and anxieties that the participants have about the research and to leave them with a sense of dignity, knowledge, and a perception of time not wasted” (Harris, 1998).

The debriefing aims to provide information and help the participant leave the experimental situation in a similar frame of mind as when he/she entered it (Aronson, 1988).

Exceptions may exist if debriefing seriously compromises study validity or causes harm itself, like negative emotions in children. Consultation with an institutional review board guides exceptions.

Debriefing indicates investigators’ commitment to participant welfare. Harms may not be raised in the debriefing itself, so responsibility continues after data collection. Following up demonstrates respect and protects persons in human subjects research.

Protection of Participants

Researchers must ensure that those participating in research will not be caused distress. They must be protected from physical and mental harm. This means you must not embarrass, frighten, offend or harm participants.

Normally, the risk of harm must be no greater than in ordinary life, i.e., participants should not be exposed to risks greater than or additional to those encountered in their normal lifestyles.

The researcher must also ensure that if vulnerable groups are to be used (elderly, disabled, children, etc.), they must receive special care. For example, if studying children, ensure their participation is brief as they get tired easily and have a limited attention span.

Researchers are not always accurately able to predict the risks of taking part in a study, and in some cases, a therapeutic debriefing may be necessary if participants have become disturbed during the research (as happened to some participants in Zimbardo’s prisoners/guards study).

Deception

Deception research involves purposely misleading participants or withholding information that could influence their participation decision. This method is controversial because it limits informed consent and autonomy, but can provide otherwise unobtainable valuable knowledge.

Types of deception include (i) deliberate misleading, e.g. using confederates, staged manipulations in field settings, deceptive instructions; (ii) deception by omission, e.g., failure to disclose full information about the study, or creating ambiguity.

The researcher should avoid deceiving participants about the nature of the research unless there is no alternative – and even then, this would need to be judged acceptable by an independent expert. However, some types of research cannot be carried out without at least some element of deception.

For example, in Milgram’s study of obedience, the participants thought they were giving electric shocks to a learner when they answered a question wrongly. In reality, no shocks were given, and the learners were confederates of Milgram.

This is sometimes necessary to avoid demand characteristics (i.e., the clues in an experiment that lead participants to think they know what the researcher is looking for).

Another common example is when a stooge or confederate of the experimenter is used (this was the case in both the experiments carried out by Asch).

According to ethics codes, deception must have strong scientific justification, and non-deceptive alternatives should not be feasible. Deception that causes significant harm is prohibited. Investigators should carefully weigh whether deception is necessary and ethical for their research.

However, participants must be deceived as little as possible, and any deception must not cause distress.  Researchers can determine whether participants are likely distressed when deception is disclosed by consulting culturally relevant groups.

Participants should immediately be informed of the deception without compromising the study’s integrity. Reactions to learning of deception can range from understanding to anger. Debriefing should explain the scientific rationale and social benefits to minimize negative reactions.

If the participant is likely to object or be distressed once they discover the true nature of the research at debriefing, then the study is unacceptable.

If you have gained participants’ informed consent by deception, then they will have agreed to take part without actually knowing what they were consenting to.  The true nature of the research should be revealed at the earliest possible opportunity or at least during debriefing.

Some researchers argue that deception can never be justified and object to this practice as it (i) violates an individual’s right to choose to participate; (ii) is a questionable basis on which to build a discipline; and (iii) leads to distrust of psychology in the community.


Confidentiality

Protecting participant confidentiality is an ethical imperative that demonstrates respect, ensures honest participation, and prevents harms like embarrassment or legal issues. Methods like data encryption, coding systems, and secure storage should match the research methodology.

Participants and the data gained from them must be kept anonymous unless they give their full consent. No names must be used in a lab report.

Researchers must clearly describe to participants the limits of confidentiality and methods to protect privacy. With internet research, threats exist like third-party data access; security measures like encryption should be explained. For non-internet research, other protections should be noted too, like coding systems and restricted data access.

High-profile data breaches have eroded public trust. Methods that minimize identifiable information can further guard confidentiality. For example, researchers can consider whether birthdates are necessary or just ages.

Generally, reducing personal details collected and limiting accessibility safeguards participants. Following strong confidentiality protections demonstrates respect for persons in human subjects research.

What do we do if we discover something that should be disclosed (e.g., a criminal act)? Researchers have no legal obligation to disclose criminal acts and must determine the most important consideration: their duty to the participant vs. their duty to the wider community.

Ultimately, decisions to disclose information must be set in the context of the research aims.

Withdrawal from an Investigation

Participants should be able to leave a study anytime if they feel uncomfortable. They should also be allowed to withdraw their data. They should be told at the start of the study that they have the right to withdraw.

They should not have pressure placed upon them to continue if they do not want to (a guideline flouted in Milgram’s research).

Participants may feel they shouldn’t withdraw as this may ‘spoil’ the study. Many participants are paid or receive course credits; they may worry they won’t get this if they withdraw.

Even at the end of the study, the participant has a final opportunity to withdraw the data they have provided for the research.

Ethical Issues in Psychology & Socially Sensitive Research

There has been an assumption over the years by many psychologists that, provided they follow the BPS or APA guidelines when using human participants, there are no ethical concerns with their research: all participants leave in a similar state of mind to how they turned up, no one has been deceived or humiliated, everyone has been given a debrief, and no one has had their confidentiality breached.

But consider the following examples:

a) Caughy et al. (1994) found that middle-class children in daycare at an early age generally score less on cognitive tests than children from similar families reared in the home.

Assuming all guidelines were followed, neither the parents nor the children participating would have been unduly affected by this research. Nobody would have been deceived, consent would have been obtained, and no harm would have been caused.

However, consider the wider implications of this study when the results are published, particularly for parents of middle-class infants who are considering placing their young children in daycare or those who recently have!

b)  IQ tests administered to black Americans show that they typically score 15 points below the average white score.

When black Americans are given these tests, they presumably complete them willingly and are not harmed as individuals. However, when published, findings of this sort serve to reinforce racial stereotypes and are used to discriminate against the black population in the job market, etc.

Sieber and Stanley (1988), the main names associated with socially sensitive research (SSR), outline four groups that may be affected by psychological research. It is the first of these groups that we are most concerned with:
  • Members of the social group being studied, such as racial or ethnic group. For example, early research on IQ was used to discriminate against US Blacks.
  • Friends and relatives of those participating in the study, particularly in case studies, where individuals may become famous or infamous. Cases that spring to mind would include Genie’s mother.
  • The research team. There are examples of researchers being intimidated because of the line of research they are in.
  • The institution in which the research is conducted.
Sieber and Stanley also suggest there are four main ethical concerns when conducting SSR:
  • The research question or hypothesis.
  • The treatment of individual participants.
  • The institutional context.
  • How the findings of the research are interpreted and applied.

Ethical Guidelines For Carrying Out SSR

Sieber and Stanley suggest the following ethical guidelines for carrying out SSR. There is some overlap between these and research on human participants in general.

Privacy: This refers to people rather than data. Asking people questions of a personal nature (e.g., about sexuality) could offend.

Confidentiality: This refers to data. Information (e.g., about H.I.V. status) leaked to others may affect the participant’s life.

Sound & valid methodology: This is even more vital when the research topic is socially sensitive. Academics can detect flaws in methods, but the lay public and the media often don’t.

When research findings are publicized, people are likely to consider them fact, and policies may be based on them. Examples are Bowlby’s maternal deprivation studies and intelligence testing.

Deception: Causing the wider public to believe something that isn’t true through the findings you report (e.g., that parents are responsible for how their children turn out).

Informed consent : Participants should be made aware of how participating in the research may affect them.

Justice & equitable treatment: Examples of unjust treatment are (i) publicizing an idea that creates prejudice against a group, and (ii) withholding a treatment you believe is beneficial from some participants so that you can use them as controls.

Scientific freedom: Science should not be censored, but there should be some monitoring of sensitive research. The researcher should weigh their responsibilities against their rights to do the research.

Ownership of data: When research findings could be used to make social policies that affect people’s lives, should they be publicly accessible? Sometimes, a party commissions research with its own interests in mind (e.g., an industry, an advertising agency, a political party, or the military).

Some people argue that scientists should be compelled to disclose their results so that other scientists can re-analyze them. If this had happened in Burt’s day, there might not have been such widespread belief in the genetic transmission of intelligence. George Miller (Miller’s Magic 7) famously argued that we should give psychology away.

The values of social scientists: Psychologists can be divided into two main groups: those who advocate a humanistic approach (individuals are important and worthy of study, quality of life is important, intuition is useful) and those advocating a scientific approach (rigorous methodology, objective data).

The researcher’s values may conflict with those of the participant/institution. For example, if someone with a scientific approach was evaluating a counseling technique based on a humanistic approach, they would judge it on criteria that those giving & receiving the therapy may not consider important.

Cost/benefit analysis: It is unethical if the costs outweigh the potential/actual benefits. However, it isn’t easy to assess costs & benefits accurately, & the participants themselves rarely benefit from research.

Sieber & Stanley advise that researchers should not avoid researching socially sensitive issues. Scientists have a responsibility to society to find useful knowledge.

  • They need to take more care over consent, debriefing, etc. when the issue is sensitive.
  • They should be aware of how their findings may be interpreted & used by others.
  • They should make explicit the assumptions underlying their research so that the public can consider whether they agree with these.
  • They should make the limitations of their research explicit (e.g., ‘the study was only carried out on white middle-class American male students,’ ‘the study is based on questionnaire data, which may be inaccurate,’ etc.).
  • They should be careful how they communicate with the media and policymakers.
  • They should be aware of the balance between their obligations to participants and those to society (e.g. if the participant tells them something which they feel they should tell the police/social services).
  • They should be aware of their own values and biases and those of the participants.

Arguments for SSR

  • Psychologists have devised methods to resolve the issues raised.
  • SSR is the most scrutinized research in psychology. Ethical committees reject more SSR than any other form of research.
  • By gaining a better understanding of issues such as gender, race, and sexuality, we are able to gain greater acceptance and reduce prejudice.
  • SSR has been of benefit to society, for example, research on eyewitness testimony (EWT). This has made us aware that EWT can be flawed and should not be used without corroboration. It has also made us aware that the EWT of children is every bit as reliable as that of adults.
  • Most research is still on white middle-class Americans (around 90% of the research quoted in texts!). SSR is helping to redress the balance and make us more aware of other cultures and outlooks.

Arguments against SSR

  • Flawed research has been used to dictate social policy and put certain groups at a disadvantage.
  • Research has been used to discriminate against groups in society, such as the sterilization of people in the USA between 1910 and 1920 because they were of low intelligence, criminal, or suffered from psychological illness.
  • The guidelines used by psychologists to control SSR lack power and, as a result, are unable to prevent indefensible research from being carried out.

American Psychological Association. (2002). American Psychological Association ethical principles of psychologists and code of conduct. www.apa.org/ethics/code2002.html

Baumrind, D. (1964). Some thoughts on ethics of research: After reading Milgram’s “Behavioral study of obedience.” American Psychologist, 19(6), 421.

Caughy, M. O. B., DiPietro, J. A., & Strobino, D. M. (1994). Day‐care participation as a protective factor in the cognitive development of low‐income children. Child Development, 65(2), 457–471.

Harris, B. (1988). Key words: A history of debriefing in social psychology. In J. Morawski (Ed.), The rise of experimentation in American psychology (pp. 188-212). New York: Oxford University Press.

Rosenthal, R., & Rosnow, R. L. (1984). Applying Hamlet’s question to the ethical conduct of research: A conceptual addendum. American Psychologist, 39(5) , 561.

Sieber, J. E., & Stanley, B. (1988). Ethical and professional dimensions of socially sensitive research. American Psychologist, 43(1), 49.

The British Psychological Society. (2010). Code of Human Research Ethics. www.bps.org.uk/sites/default/files/documents/code_of_human_research_ethics.pdf

Further Information

  • MIT Psychology Ethics Lecture Slides

BPS Documents

  • Code of Ethics and Conduct (2018)
  • Good Practice Guidelines for the Conduct of Psychological Research within the NHS
  • Guidelines for Psychologists Working with Animals
  • Guidelines for ethical practice in psychological research online

APA Documents

APA Ethical Principles of Psychologists and Code of Conduct



How the Classics Changed Research Ethics

Some of history’s most controversial psychology studies helped drive extensive protections for human research participants. Some say those reforms went too far.


Photo above: In 1971, APS Fellow Philip Zimbardo halted his classic prison simulation at Stanford after volunteer “guards” became abusive to the “prisoners,” famously leading one prisoner into a fit of sobbing. Photo credit:   PrisonExp.org

Nearly 60 years have passed since Stanley Milgram’s infamous “shock box” study sparked an international focus on ethics in psychological research. Countless historians and psychology instructors assert that Milgram’s experiments—along with studies like the Robbers Cave and Stanford prison experiments—could never occur today; ethics gatekeepers would swiftly bar such studies from proceeding, recognizing the potential harms to the participants. 

But the reforms that followed some of the 20th century’s most alarming biomedical and behavioral studies have overreached, many social and behavioral scientists complain. Studies that pose no peril to participants confront the same standards as experimental drug treatments or surgeries, they contend. The institutional review boards (IRBs) charged with protecting research participants fail to understand minimal risk, they say. Researchers complain they waste time addressing IRB concerns that have nothing to do with participant safety. 

Several factors contribute to this conflict, ethicists say. Researchers and IRBs operate in a climate of misunderstanding, confusing regulations, and a systemic lack of ethics training, said APS Fellow Celia Fisher, a Fordham University professor and research ethicist, in an interview with the Observer . 

“In my view, IRBs are trying to do their best and investigators are trying to do their best,” Fisher said. “It’s more that we really have to enhance communication and training on both sides.” 

‘Sins’ from the past  

Modern human-subjects protections date back to the 1947 Nuremberg Code, the response to Nazi medical experiments on concentration-camp internees. Those ethical principles, which no nation or organization has officially accepted as law or official ethics guidelines, emphasized that a study’s benefits should outweigh the risks and that human subjects should be fully informed about the research and participate voluntarily.  

See the 2014 Observer cover story by APS Fellow Carol A. Tavris, “ Teaching Contentious Classics ,” for more about these controversial studies and how to discuss them with students.

But the discovery of U.S.-government-sponsored research abuses, including the Tuskegee syphilis experiment on African American men and radiation experiments on humans, accelerated regulatory initiatives. The abuses investigators uncovered in the 1970s, 80s, and 90s—decades after the experiments had occurred—heightened policymakers’ concerns “about what else might still be going on,” George Mason University historian Zachary M. Schrag explained in an interview. These concerns generated restrictions not only on biomedical research but on social and behavioral studies that pose a minute risk of harm.  

“The sins of researchers from the 1940s led to new regulations in the 1990s, even though it was not at all clear that those kinds of activities were still going on in any way,” said Schrag, who chronicled the rise of IRBs in his book  Ethical Imperialism: Institutional Review Boards and the Social Sciences, 1965–2009.  

Accompanying the medical research scandals were controversial psychological studies that provided fodder for textbooks, historical tomes, and movies.  

  • In the early 1950s, social psychologist Muzafer Sherif and his colleagues used a Boy Scout camp called Robbers Cave to study intergroup hostility. They randomly assigned preadolescent boys to one of two groups and concocted a series of competitive activities that quickly sparked conflict. They later set up a situation that compelled the boys to overcome their differences and work together. The study provided insights into prejudice and conflict resolution but generated criticism because the children weren’t told they were part of an experiment. 
  • In 1961, Milgram began his studies on obedience to authority by directing participants to administer increasing levels of electric shock to another person (a confederate). To Milgram’s surprise, more than 65% of the participants delivered the full voltage of shock (which unbeknownst to them was fake), even though many were distressed about doing so. Milgram was widely criticized for the manipulation and deception he employed to carry out his experiments. 
  • In 1971, APS Fellow Philip Zimbardo halted his classic prison simulation at Stanford after volunteer “guards” became abusive to the “prisoners,” famously leading one prisoner into a fit of sobbing. 

Western policymakers created a variety of safeguards in the wake of these psychological studies and other medical research. Among them was the Declaration of Helsinki, an ethical guide for human-subjects research developed by the Europe-based World Medical Association. The U.S. Congress passed the National Research Act of 1974, which created a commission to oversee participant protections in biomedical and behavioral research. And in the 90s, federal agencies adopted the Federal Policy for the Protection of Human Subjects (better known as the Common Rule), a code of ethics applied to any government-funded research. IRBs review studies through the lens of the Common Rule. After that, social science research, including studies in social psychology, anthropology, sociology, and political science, began facing widespread institutional review (Schrag, 2010).  

Sailing Through Review

Psychological scientists and other researchers who have served on institutional review boards provide these tips to help researchers get their studies reviewed swiftly.  

  • Determine whether your study qualifies for minimal-risk exemption from review. Online tools are even in development to help researchers self-determine exempt status (Ben-Shahar, 2019; Schneider & McCutcheon, 2018). 
  • If you’re not clear about your exemption, research the regulations to understand how they apply to your planned study. Show you’ve done your homework and have developed a protocol that is safe for your participants.  
  • Consult with stakeholders. Look for advocacy groups and representatives from the population you plan to study. Ask them what they regard as fair compensation for participation. Get their feedback about your questionnaires and consent forms to make sure they’re understandable. These steps help you better show your IRB that the population you’re studying will find the protections adequate (Fisher, 2022). 
  • Speak to IRB members or staff before submitting the protocol. Ask them their specific concerns about your study, and get guidance on writing up the protocol to address those concerns. Also ask them about expected turnaround times so you can plan your submission in time to meet any deadlines associated with your study (e.g., grant application deadlines).  

Ben-Shahar, O. (2019, December 2). Reforming the IRB in experimental fashion. The Regulatory Review . University of Pennsylvania. https://www.theregreview.org/2019/12/02/ben-shahar-reforming-irb-experimental-fashion/  

Fisher, C. B. (2022). Decoding the ethics code: A practical guide for psychologists (5th ed.). Sage Publications. 

Schneider, S. L. & McCutcheon, J. A. (2018).  Proof of concept: Use of a wizard for self-determination of IRB exempt status . Federal Demonstration Partnership.   http://thefdp.org/default/assets/File/Documents/wizard_pilot_final_rpt.pdf  

Social scientists have long contended that the Common Rule was largely designed to protect participants in biomedical experiments—where scientists face the risk of inducing physical harm on subjects—but fits poorly with the other disciplines that fall within its reach.

“It’s not like the IRBs are trying to hinder research. It’s just that regulations continue to be written in the medical model without any specificity for social science research,” Fisher explained. 

The Common Rule was updated in 2018 to ease the level of institutional review for low-risk research techniques (e.g., surveys, educational tests, interviews) that are frequent tools in social and behavioral studies. A special committee of the National Research Council (NRC), chaired by APS Past President Susan Fiske, recommended many of those modifications. Fisher was involved in the NRC committee, along with APS Fellows Richard Nisbett (University of Michigan) and Felice J. Levine (American Educational Research Association), and clinical psychologist Melissa Abraham of Harvard University. But the Common Rule reforms have yet to fully expedite much of the research, partly because the review boards remain confused about exempt categories, Fisher said.  

Interference or support?  

That regulatory confusion has generated sour sentiments toward IRBs. For decades, many social and behavioral scientists have complained that IRBs effectively impede scientific progress through arbitrary questions and objections. 

In a Perspectives on Psychological Science  paper they co-authored, APS Fellows Stephen Ceci of Cornell University and Maggie Bruck of Johns Hopkins University discussed an IRB rejection of their plans for a study with 6- to 10-year-old participants. Ceci and Bruck planned to show the children videos depicting a fictional police officer engaging in suggestive questioning of a child.  

“The IRB refused to approve the proposal because it was deemed unethical to show children public servants in a negative light,” they wrote, adding that the IRB held firm on its rejection despite government funders already having approved the study protocol (Ceci & Bruck, 2009).   

Other scientists have complained the IRBs exceed their Common Rule authority by requiring review of studies that are not government funded. In 2011, psychological scientist Jin Li sued Brown University in federal court for barring her from using data she collected in a privately funded study on educational testing. Brown’s IRB objected to the fact that she paid her participants different amounts of compensation based on need. (A year later, the university settled the case with Li.) 

In addition, IRBs often hover over minor aspects of a study that have no genuine relation to participant welfare, Ceci said in an email interview.  

“You can have IRB approval and later decide to make a nominal change to the protocol (a frequent one is to add a new assistant to the project or to increase the sample size),” he wrote. “It can take over a month to get approval. In the meantime, nothing can move forward and the students sit around waiting.” 

Not all researchers view institutional review as a roadblock. Psychological scientist Nathaniel Herr, who runs American University’s Interpersonal Emotion Lab and has served on the school’s IRB, says the board effectively collaborated with researchers to ensure the study designs were safe and that participant privacy was appropriately protected. 

“If the IRB that I operated on saw an issue, they shared suggestions we could make to overcome that issue,” Herr said. “It was about making the research go forward. I never saw a project get shut down. It might have required a significant change, but it was often about confidentiality and it’s something that helps everybody feel better about the fact we weren’t abusing our privilege as researchers. I really believe it [the review process] makes the projects better.” 

Some universities—including Fordham University, Yale University, and The University of Chicago—even have social and behavioral research IRBs whose members include experts optimally equipped to judge the safety of a psychological study, Fisher noted. 

Training gaps  

Institutional review is beset by a lack of ethics training in research programs, Fisher believes. While students in professional psychology programs take accreditation-required ethics courses in their doctoral programs, psychologists in other fields have no such requirement. In these programs, ethics training is often limited to an online program that provides, at best, a perfunctory overview of federal regulations. 

“It gives you the fundamental information, but it has nothing to do with our real-world deliberations about protecting participants,” she said. 

Additionally, harm to a participant is difficult to predict. As sociologist Martin Tolich of the University of Otago in New Zealand wrote, the Stanford prison study had been IRB-approved. 

“Prediction of harm with any certainty is not necessarily possible, and should not be the aim of ethics review,” he argued. “A more measured goal is the minimization of risk, not its eradication” (Tolich, 2014). 

Fisher notes that scientists aren’t trained to recognize and respond to adverse events when they occur during a study. 

“To be trained in research ethics requires not just knowing you have to obtain informed consent,” she said. “It’s being able to apply ethical reasoning to each unique situation. If you don’t have the training to do that, then of course you’re just following the IRB rules, which are very impersonal and really out of sync with the true nature of what we’re doing.” 

Researchers also raise concerns that, in many cases, the regulatory process harms vulnerable populations rather than safeguards them. Fisher and psychological scientist Brian Mustanski of the University of Illinois at Chicago wrote in 2016, for example, that the review panels may be hindering HIV prevention strategies by requiring researchers to get parental consent before including gay and bisexual adolescents in their studies. Under that requirement, youth who are not out to their families get excluded. Boards apply those restrictions even in states permitting minors to get HIV testing and preventive medication without parental permission—and even though federal rules allow IRBs to waive parental consent in research settings (Mustanski & Fisher, 2016). 

IRBs also place counterproductive safety limits on suicide and self-harm research, watching for any sign that a participant might need to be removed from a clinical study and hospitalized. 

“The problem is we know that hospitalization is not the panacea,” Fisher said. “It stops suicidality for the moment, but actually the highest-risk period is 3 months after the first hospitalization for a suicide attempt. Some of the IRBs fail to consider that a non-hospitalization intervention that’s being tested is just as safe as hospitalization. It’s a difficult problem, and I don’t blame them. But if we have to take people out of a study as soon as they reach a certain level of suicidality, then we’ll never find effective treatment.” 

Communication gaps  

Supporters of the institutional review process say researchers tend to approach the IRB process too defensively, overlooking the board’s good intentions.  

“Obtaining clarification or requesting further materials serve to verify that protections are in place,” a team of institutional reviewers wrote in an editorial for  Psi Chi Journal of Psychological Research . “If researchers assume that IRBs are collaborators in the research process, then these requests can be seen as prompts rather than as admonitions” (Domenech Rodriguez et al., 2017). 

Fisher agrees that researchers’ attitudes play a considerable role in the conflicts that arise over ethics review. She recommends researchers develop each protocol with review-board questions in mind (see sidebar). 

“For many researchers, there’s a disdain for IRBs,” she said. “IRBs are trying their hardest. They don’t want to reject research. It’s just that they’re not informed. And sometimes if behavioral scientists or social scientists are disdainful of their IRBs, they’re not communicating with them.” 

Some researchers are building evidence to help IRBs understand the level of risk associated with certain types of psychological studies.  

  • In a study involving more than 500 undergraduate students, for example, psychological scientists at the University of New Mexico found that the participants were less upset than expected by questionnaires about sex, trauma, and other sensitive topics. This finding, the researchers reported in Psychological Science, challenges the usual IRB assumption about the stress that surveys on sex and trauma might inflict on participants (Yeater et al., 2012). 
  • A study involving undergraduate women indicated that participants who had experienced child abuse, although more likely than their peers to report distress from recalling the past as part of a study, were also more likely to say that their involvement in the research helped them gain insight into themselves, and to express hope that it would help others (Decker et al., 2011). 
  • A multidisciplinary team, including APS Fellow R. Michael Furr of Wake Forest University, found that adolescent psychiatric patients showed a drop in suicide ideation after being questioned regularly about their suicidal thoughts over the course of 2 years. This countered concerns that asking about suicidal ideation would trigger an increase in such thinking (Mathias et al., 2012). 
  • A meta-analysis of more than 70 participant samples—totaling nearly 74,000 individuals—indicated that people may experience only moderate distress when discussing past traumas in research studies, and that they generally find their participation to be a positive experience, according to the findings (Jaffe et al., 2015). 

The takeaways  

So, are the historians correct? Would any of these classic experiments survive IRB scrutiny today? 

Reexaminations of those studies make the question arguably moot. Recent revelations about some of these studies suggest that scientific integrity concerns may taint the legacy of those findings as much as their impact on participants did (Le Texier, 2019; Resnick, 2018; Perry, 2018).  

Also, not every aspect of the controversial classics is taboo in today’s regulatory environment. Scientists have won IRB approval to conceptually replicate both the Milgram and Stanford prison experiments (Burger, 2009; Reicher & Haslam, 2006). They simply modified the protocols to avert any potential harm to the participants. (Scholars, including Zimbardo himself, have questioned the robustness of those replication findings [Elms, 2009; Miller, 2009; Zimbardo, 2006].) 

Many scholars believe there are clear and valuable lessons from the classic experiments. Milgram’s work, for instance, can inject clarity into pressing societal issues such as political polarization and police brutality. Ethics training and monitoring simply need to include those lessons learned, they say. 

“We should absolutely be talking about what Milgram did right, what he did wrong,” Schrag said. “We can talk about what we can learn from that experience and how we might answer important questions while respecting the rights of volunteers who participate in psychological experiments.”  



References

Burger, J. M. (2009). Replicating Milgram: Would people still obey today? American Psychologist, 64(1), 1–11. https://doi.org/10.1037/a0010932

Ceci, S. J., & Bruck, M. (2009). Do IRBs pass the minimal harm test? Perspectives on Psychological Science, 4(1), 28–29. https://doi.org/10.1111/j.1745-6924.2009.01084.x

Decker, S. E., Naugle, A. E., Carter-Visscher, R., Bell, K., & Seifer, A. (2011). Ethical issues in research on sensitive topics: Participants’ experiences of stress and benefit. Journal of Empirical Research on Human Research Ethics, 6(3), 55–64. https://doi.org/10.1525/jer.2011.6.3.55

Domenech Rodriguez, M. M., Corralejo, S. M., Vouvalis, N., & Mirly, A. K. (2017). Institutional review board: Ally not adversary. Psi Chi Journal of Psychological Research, 22(2), 76–84. https://doi.org/10.24839/2325-7342.JN22.2.76

Elms, A. C. (2009). Obedience lite. American Psychologist, 64(1), 32–36. https://doi.org/10.1037/a0014473

Fisher, C. B., True, G., Alexander, L., & Fried, A. L. (2009). Measures of mentoring, department climate, and graduate student preparedness in the responsible conduct of psychological research. Ethics & Behavior, 19(3), 227–252. https://doi.org/10.1080/10508420902886726

Jaffe, A. E., DiLillo, D., Hoffman, L., Haikalis, M., & Dykstra, R. E. (2015). Does it hurt to ask? A meta-analysis of participant reactions to trauma research. Clinical Psychology Review, 40, 40–56. https://doi.org/10.1016/j.cpr.2015.05.004

Le Texier, T. (2019). Debunking the Stanford Prison Experiment. American Psychologist, 74(7), 823–839. https://doi.org/10.1037/amp0000401

Mathias, C. W., Furr, R. M., Sheftall, A. H., Hill-Kapturczak, N., Crum, P., & Dougherty, D. M. (2012). What’s the harm in asking about suicide ideation? Suicide and Life-Threatening Behavior, 42(3), 341–351. https://doi.org/10.1111/j.1943-278X.2012.0095.x

Miller, A. G. (2009). Reflections on “Replicating Milgram” (Burger, 2009). American Psychologist, 64(1), 20–27. https://doi.org/10.1037/a0014407

Mustanski, B., & Fisher, C. B. (2016). HIV rates are increasing in gay/bisexual teens: IRB barriers to research must be resolved to bend the curve. American Journal of Preventive Medicine, 51(2), 249–252. https://doi.org/10.1016/j.amepre.2016.02.026

Perry, G. (2018). The lost boys: Inside Muzafer Sherif’s Robbers Cave experiment. Scribe Publications.

Reicher, S., & Haslam, S. A. (2006). Rethinking the psychology of tyranny: The BBC prison study. British Journal of Social Psychology, 45, 1–40. https://doi.org/10.1348/014466605X48998

Resnick, B. (2018, June 13). The Stanford prison experiment was massively influential. We just learned it was a fraud. Vox. https://www.vox.com/2018/6/13/17449118/stanford-prison-experiment-fraud-psychology-replication

Schrag, Z. M. (2010). Ethical imperialism: Institutional review boards and the social sciences, 1965–2009. Johns Hopkins University Press.

Tolich, M. (2014). What can Milgram and Zimbardo teach ethics committees and qualitative researchers about minimal harm? Research Ethics, 10(2), 86–96. https://doi.org/10.1177/1747016114523771

Yeater, E., Miller, G., Rinehart, J., & Nason, E. (2012). Trauma and sex surveys meet minimal risk standards: Implications for institutional review boards. Psychological Science, 23(7), 780–787. https://doi.org/10.1177/0956797611435131

Zimbardo, P. G. (2006). On rethinking the psychology of tyranny: The BBC prison study. British Journal of Social Psychology, 45, 47–53. https://doi.org/10.1348/014466605X81720


About the Author

Scott Sleek is a freelance writer in Silver Spring, Maryland, and the former director of news and information at APS.


Ethical AI governance: mapping a research ecosystem

  • Original Research
  • Open access
  • Published: 14 February 2024


  • Simon Knight (ORCID: orcid.org/0000-0002-8709-5780)
  • Antonette Shibani
  • Nicole Vincent

How do we assess the positive and negative impacts of research about, or research that employs, artificial intelligence (AI), and how adequate are existing research governance frameworks for these ends? That concern has seen significant recent attention, with various calls for change and a plethora of emerging guideline documents across sectors. However, it is not clear what kinds of issues are expressed in research ethics with or on AI at present, nor how resources are drawn on in this process to support the navigation of ethical issues. Research Ethics Committees (RECs) have a well-established history in ethics governance, but there have been concerns about their capacity to adequately govern AI research. However, no study to date has examined the ways that AI-related projects engage with the ethics ecosystem, or its adequacy for this context. This paper analysed a single institution’s ethics applications for research related to AI, applying a socio-material lens to their analysis. Our novel methodology provides an approach to understanding ethics ecosystems across institutions. Our results suggest that existing REC models can effectively support consideration of ethical issues in AI research; we thus propose that any new materials should be embedded in this existing, well-established ecosystem.


1 Introduction

How do we assess the positive and negative impacts of research about, or research that employs, artificial intelligence (AI)? This is a pressing question, with ambiguities around the roles of researchers, governance bodies, and those who will use and/or be impacted by technologies, across both academic and industry contexts. While significant work has been undertaken to describe the ethical challenges of AI, and to develop guidance and principles for practice, there remains concern regarding the governance of AI research, the gap between principles and practice, and the participation of stakeholders in deciding how AI may be used about, with, for, and on them.

This paper engages with this challenge through the analysis of existing research governance material, by investigating the resources that researchers and research ethics committees (RECs) draw upon in articulating and navigating ethical issues arising out of AI-related research. Resources, such as formalised ethical principles and articulated processes, inscribe knowledge. As such, they act as reflections of knowledge and practices, while also shaping that practice through their conceptual and normative (or regulatory) impact on actors. Beyond this, knowing what resources researchers and RECs actually draw upon can provide important insights into existing knowledge and practices, as well as shape future practice. Such knowledge can also help ascertain whether the hype of AI ethics deserves the attention it’s getting, by illuminating whether—and, if so, how—the numerous published AI Ethics principles are actually used, which in turn will help to appraise the utility of such publications. In doing this we aim to contribute to understanding the ethical issues that AI-related research gives rise to and how learning about these might be (or could be) taking place. Through our socio-material analysis of materials relating to AI from a single institution’s ethics committee process, we address this concern, exploring how these materials provide a lens onto and reflection of the ethical concerns of AI research.

2 Literature review

2.1 Ethical principles and practices

To foster ethical action in the developing areas around the use of AI and data, a wide range of guidance documents and sets of principles have been developed. A recent review identified 84 sets of AI ethics guidelines globally, with 11 themes among them [ 45 ], while another review of 36 principles documents found consensus across eight common themes [ 32 ]. A third review, covering only research studies on ethical principles, identified 27 such studies with 22 principles [ 49 ]. Finally, a fourth review of public, private, and non-governmental organization (NGO) documents providing AI guidance identified 112 such documents [ 75 ]. Notably, this last review identified significant differences in the focus of documents produced by different stakeholders, and in their production, with NGOs and public organisations covering more topics and being more likely to have used participatory approaches in their development. Nevertheless, across these reviews, the identified principles overlap substantially with the classic Belmont principles.

Moreover, there have been various calls to move from a focus on developing AI ethics principles, to instantiating them in practice and organisational structures to support practical ethics [ 50 , 60 , 72 , 89 , 91 ]. These calls emphasise the significance of micro-ethics or ‘ethics-in-action’, and a shift from procedural to situational ethics [ 33 , 38 , 43 , 51 , 64 ]. This shift, particularly as expressed by [ 38 ], reflects both that when we apply procedural ethics we are engaged in practices, and that this process of translation is not mechanical and requires interpretation. This is a concern of recent AI work, reflecting that ethics is fundamentally imbued with action, and with ongoing interactions in design processes, in ways hard to capture in procedural ethics. In addition, recent calls have highlighted the importance of analysing the ethical issues of technologies in terms of both immediate or direct impacts (hard impacts) and long-term or indirect impacts (soft impacts) that may affect people’s lives [ 80 ].

2.2 The role of research ethics committees

Beyond the myriad of AI ethics principles, the role of governance structures in the oversight of novel applications of AI has received attention, beyond the governance pages of companies and universities, in popular media coverage (e.g. [ 13 , 42 , 52 , 53 ]). These structures—in the form of Research Ethics Committees (RECs) and Institutional Review Boards (IRBs)—play a crucial role in university research internationally, with mounting pressure to create similar bodies in companies, and a recognition of the challenges such bodies face. In this paper, we will use REC as a general term that includes medical research ethics committees (MRECs), human research ethics committees (HRECs), and IRBs, except where explicitly stated otherwise. 

RECs are typically composed of multi-disciplinary research experts, alongside non-research members—in some systems, including lay people—who oversee and review research that involves human participants. Researchers who wish to undertake such work typically submit an application that explains what the research will involve and how it will address key ethical principles, including the Belmont principles of respect for persons (or autonomy), beneficence, and justice [ 66 ]. The role of RECs is to ensure these principles are instantiated in research that is approved, and to provide feedback to researchers [ 44 ]. RECs do this by assessing materials submitted by researchers. However, owing to the formalised nature of this work, tensions have emerged regarding the bureaucratisation of research and control by RECs, with perceptions that RECs are particularly suited to work in a bio-medical model [ 5 ], although some of these concerns may relate to local—changeable—practices rather than underlying theoretical issues [ 40 ]. 

2.2.1 International context of RECs

Significantly, the requirements and remit of RECs vary internationally. While research ethics systems share much common history, a number of publications have investigated similarities and variations across international ethics standards and committees [ 37 , 44 , 88 ] and common emerging themes—including that of data and AI [ 88 ].

However, there are also more or less nuanced differences in expression and execution of REC processes internationally. A particularly salient example, given that US experiences are often universalised, is found in the United States IRB guidance, which explicitly directs members as follows:

§ 46.111 Criteria for IRB approval of research : (2) Risks to subjects are reasonable in relation to anticipated benefits, if any, to subjects, and the importance of the knowledge that may reasonably be expected to result. In evaluating risks and benefits, the IRB should consider only those risks and benefits that may result from the research (as distinguished from risks and benefits of therapies subjects would receive even if not participating in the research). The IRB should not consider possible long-range effects of applying knowledge gained in the research (e.g. the possible effects of the research on public policy) as among those research risks that fall within the purview of its responsibility. [ 82 ].

In considering this quote as exemplifying the direction given in the wider document, and in comparison to international documents, we can note that IRBs receive unclear advice regarding their role in assessing merit and integrity (a key principle in the Australian model [ 70 ]), and that they must navigate the directions to (1) balance risks and benefits, with those to (2) “not consider” long-range effects [ 9 ]. Moreover, common across RECs is that they provide ‘point-in-time’ governance, but monitor and evaluate largely via periodic self-report of applicants or a complaints-based system for participants [ 20 ]. This lack of long-view consideration may create an ethical debt, through which technologies are developed without adequate consideration of long-term impacts [ 68 ].

This specific statement has thus been highlighted as a key feature in considering the adequacy of the US IRB model for AI research [ 25 ]. Importantly, we should be cautious in universalising models of ethics, given cultural and contextual variation in values and practices, and in capacity and procedures for ethics governance, such that universal expectations of REC review may exclude researchers from countries where no such review is available [ 55 , 90 ]. Further complications stem from disciplinary differences, e.g. [ 18 ], and from conflicts between REC guidelines and the ethical norms of communities with whom research is conducted [ 21 ]. Crucially, differences in disciplinary, cultural, and other contextual features must be recognised. In places, this recognition and negotiation are a crucial part of ethical practice, because we should expect values to be contextual. In other contexts, present variation may rest on a lack of clear standards or articulated norms to which we might work; making this explicit is important. 

2.2.2 AI and data as challenges to research ethics committees

Even outside the context of AI, concerns have been raised about what we know regarding researchers’ level of understanding of research ethics [ 10 ], and correspondingly, regarding expectations around interaction between committee members and disciplinary experts and their respective expertise [ 26 ]. However, as Hickey et al., [ 40 ] highlight, although there are various criticisms of ethics review boards, disentangling issues of practice in particular institutional committees, from more fundamental concerns with their underpinning principles, is challenging. They suggest that the criticisms of RECs can be addressed through fostering positive learning-oriented ethical review processes, adopting practices of open communication, outreach to research communities, and engagement with disciplinary expertise [ 40 ]. These suggestions are echoed by Brown et al.’s [ 17 ] specific analysis of education researchers’ views in the UK, finding that although there were concerns regarding understanding of the specific methods and issues in educational research, many respondents had positive interactions with their RECs [ 17 ]. Nevertheless, a lingering concern is that RECs act as ‘moral bureaucracies’ via a managerial audit approach to ethics, that is likely to incentivise ‘safe’ practice, and reduce productive rich dialogue regarding ethics, particularly in the context of technology [ 61 ].

Indeed, this sentiment is echoed in discussions of the strengths and limitations of RECs specifically focused on AI and data research [ 29 , 30 , 31 ]. Here, two key concerns regard the nature of research conducted entirely outside university contexts (in which REC systems are mandatory and established) or in collaboration with industry partners; and research involving secondary uses of publicly available datasets [ 35 ]. Given these concerns, and particularly the potential for unanticipated and unintended consequences, there have been calls to adjust RECs’ understanding of research and data [ 28 ] and of privacy protection in the context of re-identification (e.g. [ 15 ]), and moves in some UK RECs to give greater attention to data and AI [ 39 ]. An alternative approach has also been piloted in which researchers wishing to access participating funders’ grants undergo a separate review by an ‘Ethics and Society Review (ESR) board’ [ 11 ]. This ESR is constituted of interdisciplinary researchers who review author statements regarding possible impacts and risk mitigation and provide feedback, with positive initial evaluation of the program. Relatedly, specific guidance has been produced to support industry organisations in establishing ethics bodies [ 74 ]. 

While such concerns have been highlighted across a number of studies and media articles, in a seminal study that involved interviewing REC members, some of the interviewed participants rejected the need for yet further guidelines, instead calling for “implementable procedures to assess big data projects effectively” ([ 31 ], 136). As the authors highlight, differences in requests for specific guidance may lead to differing outcomes from different committees, with potential for negative impact on “researchers’ trust in the oversight system, data sharing practices, and research collaborations” ([ 31 ], 138). Crucially, a lack of expertise and experience in assessing big data and AI-related projects was also explicitly recognised by the interviewed REC members [ 31 ], findings supported by further European and US research [ 81 , 86 ]. 

2.3 Learning for AI research ethics

Despite apparent gaps in REC systems, as the Future of Privacy Forum report ‘Designing an Artificial Intelligence Research Review Committee’ [ 46 ] sets out, in developing models to adequately address AI research, we can learn from the significant work undertaken in human and animal research, and biosafety committees. In similar work proposing developments in RECs, a 2021 collaboration between the UK’s Ada Lovelace Institute, University of Exeter, and Alan Turing Institute, investigated ‘Supporting AI research ethics committees: Exploring solutions to the unique ethical risks that are emerging in association with data science and AI research’ [ 1 ], and the associated ‘Looking before we leap’ project [ 84 ]. Their report [ 2 ] highlighted six challenges for research ethics committees:

  • lack of resources and training
  • mismatch in principles designed for researcher–subject relationships applied to researcher–data subject relationships
  • lack of established norms regarding principles for use specifically in AI and data research
  • cross-institutional (and cross-sector) research, which can lead to research being assessed by multiple committees
  • challenges of assessing unexpected impacts
  • transparency with respect to corporate research ethics groups or involvement of corporate entities in research activities

These challenges, and the context described in the preceding sections, led these researchers to recommend some key foci for RECs in considering AI research (which we revisit in the Conclusions), synthesised to indicate their strong parallels in Table 1. 

2.3.1 The need for learning in RECs

These sets of concerns and recommendations are intertwined, each contributing to, and being contributed to by, the others in the list. A lynchpin of these recommendations is learning. This focus involves understanding how RECs, researchers, and stakeholders learn about AI, its impacts, the systems into which it is deployed, and the ethical concerns of those systems. Research is required to understand the processes of this learning, and how learning about AI, ethical development and thinking, and systems thinking come together. RECs should have continuous training for staff regarding ethical review processes and their importance, with ongoing development. This applies in university contexts, but notably: “Many corporate RECs we spoke with also place an emphasis on continued skills and training, including providing basic ‘ethical training’ for staff of all levels.” ([ 2 ], 37). 

In a powerful move highlighting the significance of learning through editorial policy, the Journal of Empirical Research on Human Research Ethics includes in its manuscript template an Educational Implications section, intended to discuss the ‘key concepts’ from the article to support teaching to different stakeholders, including research and REC communities, as well as students, and external stakeholders such as participants and the general public [ 47 ]. In Ferretti et al.’s analysis—published in that journal—this ‘Educational Implications’ section notes the significance of: “knowledge exchange and a more productive engagement among the various factors involved in big data research. These include and are not limited to RECs, researchers, research institutions, private enterprises, citizen science groups, and the public” ([ 31 ], 139), highlighting that this responsibility involves developing skills around both the technology (AI) and ethical processes and values. As they also highlight, the range of actors for whom there are implications for learning extends to “informing society about issues related to big data and the use of AI in research. Starting with this democratic engagement, the general public can clarify their expectations regarding research with big data and thus inform the decisions of other actors involved.” ([ 31 ], 140). 

2.3.2 Beyond principles

The importance of learning regarding the application of ethical concepts to AI research has been highlighted. However, as noted in the introduction to this article, while a significant body of work has engaged in developing guidelines and principles for ethical AI with an aim to disseminate and educate a variety of audiences, the operationalisation of these principles into organisational structures, practices, and professional reflection has received less attention [ 50 , 60 , 72 , 89 , 91 ]. As Resseguier et al. put it: “this identified gap in AI ethics finds its root in the very nature of the currently dominant approach to AI ethics, i.e. a view on ethics that considers it as a softer version of the law. [They] point to the need to complement this approach […and…] call for a shift of attention in AI ethics: away from high-level abstract principles to concrete practice, context and social, political, environmental materialities.” ([ 72 ], 3). 

Awareness of these principles and guidelines is important, and has a positive impact on the intention to consider ethical issues, by providing orienting devices for stakeholders to think with. Specifically, [ 22 ] surveyed more than 1,000 managers in the US, randomising the presentation of four different groupings of AI regulations and asking about ethics in AI and their intent to adopt AI. They found “that information about AI regulation increases manager perception of the importance of safety, privacy, bias/discrimination, and transparency issues related to AI. However, there is a tradeoff; regulation information reduces manager intent to adopt AI technologies.” ([ 22 ], 1). Similar reflections were provided by Miller and Coldicutt [ 59 ], who polled UK ‘tech workers’ (n = 1010), finding that 81% of those who worked on AI tools (n = 155/192) “would like more opportunities to assess the potential impacts” (p. 10). Thus, principles can be useful tools insofar as they offer orienting devices to think with. However, learning to engage in ethical practice goes beyond principles in addressing at least four key concerns: 

How do we learn to operationalise principles in context: Principles provide useful anchors, but we must learn how to work with them in particular contexts and with particular people, noting that ethical boundaries may change over time and location. As Resseguier et al. put it: “ethics must entail a sharp attention to specific situations and relations, accounting for the different levels of the personal, the interpersonal, the organisational, up to broader social, political, and environmental configurations” ([ 72 ], 10).

How do we learn to navigate tensions between principles: Classic dilemmas include freedom vs equality, or free speech vs privacy, and there is significant literature on this topic.

How do we learn for a substantive, ongoing ethics over procedural ethics: There are questions around (1) how we probe why a tool is being implemented, and (2) whether reinforcement of existing systems closes off opportunities for work that develops futures worth wanting. Focussing on ‘doing things ethically’ can lead to abstracted models of action that fail to interrogate the underlying aims in, for example, developing particular tools, including how they intersect with existing power relations [ 6 , 72 ]. As work on Data Feminism highlights [ 24 ], current approaches to AI ethics are inadequate for addressing structurally entrenched inequality and the material reality of AI development. As [ 48 ] note, a focus on ethics in the technical design of systems misses significant concerns (including in their proposal for an ethical ‘Algorithmic System for Turning the Elderly into High-Nutrient Slurry’).

How do we assess the indirect, long-range, or soft impacts of our work: Principles used in research ethics have typically focused on risks to participants, and on relatively direct and immediate impacts more broadly (sometimes excluding risks, focussing only on possible benefits). These direct impacts may be relatively predictable, perhaps through a hypothesised pattern of causation and modelling of their likelihood of occurrence. However, many technology systems have broader and longer-term impacts, in the ways they may reorganise social relations and re-shape normative assumptions regarding human value and values.

2.3.3 Resources for learning AI ethics for research

Where, then, should we look for resources to support this learning? In their work surveying 54 and interviewing six AI practitioners, Morley et al. highlight that “the AI ethics community is not yet meeting the needs of AI practitioners” ([ 62 ], 6), with more practitioners saying further resources would be useful than saying that what already exists is adequate, across a range of resource types (from principles to design guidelines and ‘best practice examples’) [ 62 ]. Where lessons are drawn from other parts of the community, the historical parallels—for example, the sharing of security flaws in software as a defensive practice—may not carry over into AI [ 76 ]. Various resources exist, including worked cases [ 3 , 67 ], a helpful example of how one might complete a REC form [ 23 ], and emerging reviews of materials such as those in the ‘Responsible AI Pattern Catalogue’ [ 54 ]. Understanding how these resources are being, or could be, mobilised, including via the crucial role of RECs, is thus important. 

Indeed, this is a challenge across fields: understanding how researchers develop and express research ethics [ 10 ]. Based on their review of papers discussing ethical issues in research, Beauchemin et al. [ 10 ] highlight a dominance of descriptive ethics, with relatively little use of established definitions or reflection, leading them to call for a greater focus in research outputs on articulating the ethical concepts used [ 10 ]. This is particularly salient given that RECs may draw directly on literature (or expect applicants to include relevant disciplinary literature) regarding ethical issues. However, if discussion of ethical issues is unusual in academia’s primary mode of communication—research outputs—where should researchers and RECs look in seeking to increase “sensitivity to ethical issues [and consider] how empirical data may be relevant to various ethical principles and problems” ([ 27 ], 16)? 

In their specific analysis of AI in mental health initiatives, Gooding and Kariotis highlight that ethical and legal issues tend not to appear in the peer-reviewed literature, even if they may have been considered in the REC process [36]. Perhaps as importantly, they also flag that most publications in the space report on pilot work, thus obscuring the potential long-range impacts of the research [36]. Even more concerningly, one analysis of 227 publications on health technology (from an initial pool of over 14,000 returned) indicated that approximately half made no reference to ethical principles at all [79]. In a review of software engineering journals, Badampudi et al. [8] report that roughly half of the papers discuss one of consent, confidentiality, or anonymity, but only 6 of the 95 reviewed discuss all three [8].

Here we see how the roles of advisory bodies, formal RECs, publication processes, and guidelines and principles come together in an ethics ecosystem [73]. In this ecosystem, “individuals (researchers), organisations (research institutions and the various committees within) and external bodies (publishing houses, funding bodies, professional associations and the governance policies they produce)” ([73], 317) participate in developing understanding and evaluation of ethical behaviour, through their roles in the research process. Footnote 5 Moreover, we see how different components of a system come together to act on ethical thinking, and to provide resources for that thinking. Adopting this view, Chi et al. [19] analysed AI ethics documentation regarding diversity and inclusion within three large AI infrastructure companies. This expansion of “the range of documents past high-level corporate principles sheds light on how firms translate principles into action and provides greater clarity about the problems and solutions they hope to address through AI ethics work.” [19]. Through this analysis, they highlight that diversity and inclusion initiatives within these companies are configured to an “engineering logic”: while the companies claim AI ethics expertise, they act as “ethics allocators”, pushing decisions regarding the impacts of tools downstream to customers [19].

Importantly for this paper, they highlight a key claim: on the one hand, the various sorts of documents and material resources organisations produce and draw on are reflections of (or ‘containers’ for) value statements; on the other hand, these resources also shape this discourse through the materials they provide and the particular kinds of narrative they encourage and recognise as genuinely ethical. In this way “they are a kind of agent, educating clients, the public, and the broader field, articulating and defending values, developing scripts for ethical action that allocate work and responsibility to internal and external actors, and constructing the knowledge and expertise AI ethics work requires.” ([19], 2).

3.1 The materiality of research ethics

Despite the significant body of work drawing attention to the ethical impacts of AI, alongside corresponding guidelines and principles, relatively little is known regarding the resources drawn on and produced through the workings of actors within the research ethics ecosystem (including researchers, those impacted by the research including participants, and ethics committee members and secretariat).

These diverse components of ethics ecosystems, including the ethics process itself, are forms of knowledge which, as Freeman and Sturdy [34] put it, are inscribed, embodied, and enacted. Footnote 6 Footnote 7 Inscribed in different kinds of artefacts that encode knowledge, including ethical principles and templates, that are made available for use across contexts. Embodied within individuals who bring this knowledge to bear in their actions, often in implicit ways. And enacted, in the sense that new knowledge emerges from interactions, and is available for use in, particular contexts. An example offered by Freeman and Sturdy is helpful: “When a committee convenes, embodied and inscribed knowledge is brought into the room in the form of what each of its individual members knows [embodied], whether through education or experience, and in what has been recorded in the minutes of previous meetings and in the documents prescribing the committee’s remit and procedural rules. But the committee’s knowledge is not limited to what is brought into the meeting. In the course of discussion, the committee may generate new knowledge: new ideas and insights, new aims, and new rules for how to fulfil them [enacted]” ([34], 12). How this learning transfers beyond the members of the committee is an important question to ask.

Because many resources that inscribe knowledge in an ethics process are used explicitly as objects by multiple agents—researchers, REC members, stakeholders, compliance organisations, to name a few—these resources act as boundary objects [4, 16, 78]. In this way, resources such as formal policies, standardised forms, principles, and learning resources are materials that inscribe knowledge in order to span across contexts and actors. Simultaneously, they are interpreted and reinterpreted in context; thus their meaning is not only held in the resource itself, but in the way its knowledge is mobilised and negotiated (or, enacted). Resources such as REC application forms and the materials to which they refer thus provide textual lenses: both reflections of stances in their own right, and instruments that shape discourse [63].

3.2 Mapping research ethics

In Australia, the primary research ethics document—with legal standing in national research governance systems—is the National Statement on Ethical Conduct in Human Research [ 65 ] (henceforth “National Statement” or just “Statement”). While research ethics naturally extends beyond this document, and the document is grounded in histories of practice, culture, and artefacts, here we will treat it as the first order ethical document. From this document, we can see second order materials arising through the ethics process, at varying steps removed from the National Statement:

Institutions develop ethics forms, intended to support researchers in instantiating responses to the Statement’s key principles.

Researchers then complete these forms, with reference to the Statement, and other soft (e.g. disciplinary guidance documents) and hard (e.g. privacy legislation) policy.

Completed forms are evaluated by RECs, and their evaluations are articulated (using National Statement concepts) with the intent that researchers will respond to them.

Researchers then conduct research, within the scope of their REC approval, and ongoing commitment to ethical practice (which may be further informed by disciplinary and cultural norms).

Journal editors and reviewers, funders, and RECs, will then review submissions at different levels of granularity regarding the completed work, for internal or external reporting/dissemination. These should include reference to the REC process (at minimum reporting that one was completed), and any issues arising.

Therefore, to map the material ethics ecosystem we conducted a review of:

Resources available institutionally, to be drawn on by the REC process.

Where these resources are taken up in research practice, through a systematic search of our internal REC application database, and external publication databases.

And within these materials, an analysis of the REC application detail, supplemented by semi-structured interviews of the respective researchers, and further review of published outputs.

While previous research [19] has mapped documents from multiple organisations to analyse expressions of values and ethics, we instead focus on a single organisation. In that prior work, documents were coded as representing different functions regarding ethics: (1) pedagogic tools; (2) product documentation; (3) legal/policy; (4) general communications. As our focus is on understanding networks of resources linked to specific projects internally, and how this analysis can help us understand the socio-material context of the work reflected, we develop an approach informed by Chi et al. [19]. Specifically, our analysis maps out the following materials:

Pedagogical tools—specifically guidelines, courses, and scholarly outputs such as reviews of ethics strategies, of participant preferences, etc.

Process resources—these include materials such as ethics proformas in document or web-form format, for example focussing on data protection, and REC application details

Legal and policy instruments—these include statements of principles, the Australian National Statement on Research Ethics [ 65 ], and legal instruments such as relevant privacy legislation

General communications—these include any available communications from the institution referring to relevant issues, made available through general (rather than ethics targeting) channels

Discursive resources—these include REC consultation, and stakeholder consultation (or other forms of input, such as codesign)

Reflection on practice, including any expression regarding previous experience (e.g. provided in REC applications), or experiences of relevance within the project (e.g. in publications, or public reflections).
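The six categories above can be expressed as a simple coding scheme. The following is a minimal sketch, purely illustrative: the category names come from the list above, but the example items and the `categorise` helper are hypothetical placeholders, not the study's actual coded data.

```python
# Illustrative coding scheme for the six material-resource categories.
# Example items in each category are hypothetical placeholders.
RESOURCE_CATEGORIES = {
    "pedagogical_tools": ["ethics guidelines", "training course", "review article"],
    "process_resources": ["data-protection proforma", "REC application form"],
    "legal_policy": ["National Statement", "privacy legislation"],
    "general_communications": ["institutional announcement"],
    "discursive_resources": ["REC consultation", "stakeholder codesign"],
    "reflection_on_practice": ["prior-experience statement"],
}

def categorise(material: str, keyword_map: dict[str, list[str]]) -> list[str]:
    """Return every category whose example labels appear in the material name
    (case-insensitive substring match)."""
    label = material.lower()
    return [
        cat for cat, examples in keyword_map.items()
        if any(ex.lower() in label for ex in examples)
    ]
```

A material may of course fall under more than one category; in the study's analysis, such assignments were made interpretively rather than by keyword matching, so this mapping is only a structural illustration of the scheme.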

As described above, analysis of these material resources frames these resources as providing an expression of, or lens onto, the conceptual space that shapes and is shaped by ethical discourse.

3.3 Interviews

Interviews were conducted based on invitations to researchers from a higher educational institution who had been identified as submitting relevant applications in our search process (described below).

A semi-structured interview schedule was developed, to understand perspectives of the researcher stakeholders regarding their use (or otherwise) of ethical frameworks in their research on data and Artificial Intelligence (AI). Participants were invited to speak about their organisational contexts—which, for some, crossed university and industry settings—and any practical challenges in use of AI and advanced technologies and approaches to address these ethically. Interview questions (Table  2 ) were developed to probe the dimensions described above, regarding:

Developing approaches to ethical concepts and principles

Learning to navigate tensions and challenges

Procedural and substantive ethics: Process (and adequacy) of REC in mediation

Challenges in AI research and soft impacts

The questions were designed to leave open the discussion of principles used and any ways these were identified and navigated by participants, and to allow for discussion of the range of pedagogic, discursive, reflective, legal, and other resources used alongside the formal REC process and any others followed.

Interviews were scheduled for 30–60 min duration, conducted via online video conferencing. They followed a consent process in which we requested access to key REC materials related to projects of relevance (described below); these materials thus acted as an anchor for the interviews, serving as a preliminary stimulus and material artefact. We also provided reference to other principles in advance via the introduction to the interview, including the National Statement, Australia’s Artificial Intelligence Ethics Framework [7], and the Human Rights and Technology Issues Paper: UTS Submission [85]. As a semi-structured protocol, not all questions were asked of all participants, although all themes were introduced in all interviews. The initial questions often naturally led to further discussion of ethical issues and the role of the REC, and the interview protocol served as a guide to steer these conversations (even where the questions were not explicitly used).

The interviews were conducted by a single researcher, who also implemented the first analysis of the interview and REC material data. The interviews were professionally transcribed, and these transcriptions were selectively coded alongside other research texts (submitted REC applications and files) drawing on approaches to discourse and document analysis [ 14 , 71 ].

In reporting, the transcription convention used is that […] indicates words were removed (where these are not relevant to the key issue), and [unclear] indicates words that were inaudible or unclear. A non-verbatim transcription is used, with non-linguistic features (gesture, and fillers such as um, er, etc.) not transcribed.

The work was internally funded through a faculty seed grant. REC material may fall under the intellectual property of the institution or be considered internal material for the purposes of evaluation and quality improvement. However, because of the research intent of the work, and the inclusion of semi-structured interviews, a REC application was submitted (ETH216658) and data sharing agreement put in place, building on an earlier application (ETH205567) regarding use of ethical frameworks based on responses to a public consultation on AI ethics. This updated application provided approval for:

a search to be conducted on the REC database for keywords across titles and summaries, with results provided to the authors;

the authors screening these as described above;

the authors contacting lead researchers on relevant projects, to seek their consent to access their full REC materials, and invite them to interview (these were treated as separate consents);

the authors liaising with the REC secretariat to provide consents (where given) for sharing the REC material for the stated purposes; and

using the REC materials to inform the interview discussions, where those occurred.

Separately, we also sought references to REC approval in published works (as above). The reporting here is not intended to identify specific authors or their work, and we have sought in our aggregation and excerpts of interviews and other material to maintain confidence and reduce risk of re-identification.

The reporting here is also not an evaluative reflection of any work noted, at an institutional or individual level. Our analysis is limited to the data available to us, selected through a particular search strategy at a single institution. We have no reason to suppose that this data is particularly unusual, but nor do we make claims about generalisability either at our institution or more broadly. Rather, our interest is in how the process of conducting such analysis may inform understanding of ethics processes, and how our specific study may provide broader insights.

4 Data—instantiations of AI ethics resources in use

4.1 Mapping ethics resources

In our first step, we sought to map the institutional ethics ecosystem, using the model described above and drawing on the visual representation in Chi et al. (see [19], 4). That foundational work analysed multiple technology companies and their expression of ethics and values with respect to diversity and inclusion. A helpful step in their representation was to (1) colour code documents according to department or product space within the organisations, and (2) draw connections to explicitly highlight how documents referred to each other. Neither is appropriate in our case. That is because (1) the documents we are drawing on are all within the research governance space, with the exception of “general communications”, and (2) the documents are highly interconnected in their present form (again, with the exception of general communications). Figure 1 below indicates the set of resources returned through searches of both internal and external sites. In addition to these resources, many other materials may be drawn on by individuals or groups to which we do not have access, or drawn from external sources. Our intent here is not to suggest this resource set is exhaustive either of the set of resources within the institutional ecosystem, or—clearly—of the set of relevant resources in the wider ethics ecosystem.

figure 1

Mapping the institutional ethics ecosystem

4.2 Search strategy and output

A term-based search was conducted on all REC applications, using the centralised system through which all such applications are submitted. This system allows for searching over the text-field submissions, which comprise most of the application, barring attachments, which typically consist of items such as: consent forms; participant information sheets; budgets; organisational approval letters; data collection materials of various sorts, such as survey instruments and interview protocols; elaborated answers to text fields, such as rationales for particular approaches, study design diagrams, etc. At the point of the initial search, the RECs received 600–700 applications per year across the full REC panels and faculty-level delegation.

The initial search was conducted in October 2021, for applications dating from 2015 (when the system was launched). A follow-up search was conducted in September 2022. Applications on which any of the co-authors were an investigator, or involved in the research in a non-investigator role (e.g. advisor, participant, student-of), were excluded. In some cases, no researcher was still at the institution, and these applications were excluded. Some researchers had multiple studies identified; in one case, two submissions were discussed in interview; in others, the researchers either declined or did not reply to an earlier invitation, and thus any later applications were also excluded. Results of this search and screening are summarised in Fig. 2.

figure 2

Search strategy for AI Studies via REC
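The search-and-screening steps described above can be sketched as a minimal filter over application records. This is a hypothetical illustration only: the record fields, search terms, and co-author placeholders are assumptions, since the actual REC system fields and term list are not reproduced here.

```python
# Minimal sketch of the term-based search and screening step.
# Field names, TERMS, and CO_AUTHORS are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Application:
    app_id: str
    title: str
    summary: str
    investigators: list[str]
    year: int

TERMS = ["artificial intelligence", "machine learning", "ai "]  # illustrative only
CO_AUTHORS = {"author_a", "author_b"}  # placeholders for the study's co-authors

def matches_terms(app: Application) -> bool:
    """True if any search term appears in the searchable text fields."""
    text = f"{app.title} {app.summary}".lower()
    return any(term in text for term in TERMS)

def screen(apps: list[Application]) -> list[Application]:
    """Keep term-matching applications from 2015 onward, excluding any
    involving the co-authors (mirroring the exclusions described above)."""
    return [
        a for a in apps
        if a.year >= 2015
        and matches_terms(a)
        and not CO_AUTHORS.intersection(a.investigators)
    ]
```

Note that the later manual screening steps (researchers no longer at the institution, declined or unanswered invitations) cannot be automated in this way and are omitted from the sketch.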

4.3 Research outputs

To complement our search of REC applications, we conducted a bibliographic search of the Web of Science (WoS) core collection (Fig.  3 ), which provides comparable coverage to Scopus as an indexed article collection [ 12 , 77 ]. This approach was intended to (1) act as a check on further applications that may have been missed in the internal-system search; and (2) provide further insight regarding the expression of ethics by researchers, through analysis of reflections of ethics in their published works. Footnote 8

figure 3

Search strategy for AI Studies via publications

We also conducted a search for obtained REC numbers (e.g. searching for “REC-15000”) in Google Scholar, to supplement the materials in the REC process, though this did not identify any further material.

Finally, we also conducted preliminary searches of the institutional repositories (an Open Access self-hosted repository, and via the Dimensions database, with which we have an institutional arrangement), using a full-text search for the same terms. These searches were not systematically reviewed due to significant overlap with the WoS search, which had yielded data saturation.

5 Results and analysis of ethics materials and interviews

The materials retrieved were analysed and drawn upon to identify and invite interviewees. From the n = 11 applications shared, the set of resources drawn on explicitly within the application, or via the interview data, were mapped using the framework described above. As indicated in Fig. 4, there is significant overlap between the resources available in the ethics ecosystem (Fig. 1) and those drawn on in practice (Fig. 4), notably:

The National Statement featured as a central principles document

The REC process itself was explicitly noted as drawn on in ethical consideration

The Australian Privacy Principles (APPs) and generic ‘university policy’ provided some policy context

Discursive resources via colleagues (peers, supervisors, or other senior colleagues) and other stakeholders were mentioned as a key resource

figure 4

Mapping Aspects of the institutional ethics ecosystem drawn on in practice. *The interviews of course provide a clear indication of reflection on practice. Here we are specifically interested in examples of resources that are designed to promote reflection, or/and instances where materials (including the interview data) refer explicitly to a prior occurrence of reflection, such as learning from previous experience on a similar project

However, as our interview data indicates, the depth of use of these resources is unclear in places. The pedagogical tools and general communications referred to were targeted at the specifics of the projects, and thus differed significantly from those available via the institutional ecosystem. Moreover, although some resources in this internal ecosystem were relevant to AI, these were not drawn upon at all, and the AI ethics principles, despite being clearly highly relevant, were mentioned only once.

Table 3 sets out key responses from the pool of interviewees, mapped against the four key concerns in learning to engage in ethical practice (see p.8). The five researchers interviewed are identified (R1–R5), and the topics of their research projects were:

Transcript 1: Understanding (through self-report methods) organisational practices for data projects, using both qualitative and quantitative methodologies (such as path analysis).

Transcript 2: Interviewing developers of an AI system to understand how their design practice avoided bias.

Transcript 3: Developing and deploying a system at a field site, including secondary analysis of data captured on site (with removal of any data that could identify people on site, prior to receipt by the researchers).

Transcript 4: Effective delivery of data science initiatives in a specific sector.

Transcript 5: How organisations manage and use their AI technologies.

Interviewees 3 and 4 were building AI tools via their research (others may have been doing so in other capacities). This may suggest that in the process of REC submission, while information regarding methods is elicited, this elicitation does not capture the range of relevant approaches adopted.

Developing approaches to ethical concepts and principles : The participants were invited to consider nationally relevant ethics principles, alongside which they noted national privacy principles, and the European General Data Protection Regulation (GDPR) in passing. Participants referred to self-reflection in contrasting ways, with regard to a trigger for seeking out an ethics framework (R3), and the idea that “My ethical framework is myself, and that’s good enough, I think.” (R1).

Learning to navigate tensions and challenges: Participants referred to challenges in operationalisation of principles not only into practice but into other material forms, for example saying “so this is what the documents say, and how are we going to transfer it into our ethics applications” (R3). This went alongside a sense that outside of the university context, ethics is not a consideration in research and development, with one researcher who contacted external researchers reporting that “they don’t have ethics or they don’t really care about the ethics around this.” (R3).

In discussing the published outputs of their research, R3 notes that a core concern of their methodology was to ensure that the site of their research could not be re-identified from images contained in these outputs. For instance, R3 described their response to a person who, at a conference presentation, asked about their collection process and ethics: “I told them, if you want more details, this is the ethics number. […] So, you can contact us. Because [they were] very interested. [They] wanted to do something similar. And [they were] interested about the ethics and the data collection process around it.” (R3). Another (R2) notes their research was informed by a well-known case of “AI failure”, with part of the work investigating how designers seek to avoid these kinds of biases: “my idea was, are there processes that we can put in place to prevent that from happening” (R2). In both cases, we see how knowledge is inscribed in resources made externally available for shared learning.

Tensions between data quality and ethical considerations were a recurring theme, as were reflections on whether standardised processes could help navigate such tensions. For instance, R3, who required images of a physical space, but not the people in it, noted that “one of the main questions that was raised was do we have the consent of those [people] to be appearing in the video. […] if I had set up the cameras by myself, then I would have [inadvertently] captured the [people]”. To navigate this challenge, secondary data (alternative images) were provided and filtered to ensure people were not visible, but this meant that the images obtained were not captured from a position the researcher would have chosen. Two researchers (R3 and R4) noted the challenge that high-quality imaging increasingly makes it harder to de-identify subjects by filming from a distance, because bystanders may still be recognisable even if they were not the intended target of analysis. R4 commented: “Then you say, hang on, I got 25 projects trying to do the same. What can we standardise? What’s the guiding principles? What are the governance frameworks?”. However, as R1 observed, a challenge for such standardisation, and broader concerns about consent, is that particular research methods used may require bespoke (i.e. not standardised) approaches: “An interesting experience I made recently is that people don’t understand my analysis, not even academics, and that might make it a bit complicated in terms of, maybe, ethics as well.” (R1).

Procedural and substantive ethics: Process (and adequacy) of REC in mediation: Participants reflected on tensions between procedural and wider ethical considerations, including features of the REC process and requirements around such things as data privacy. R5 observed: “it’s stipulated by the university what you need to do and how you need to keep your data […] So, it’s not really an ethical decision. It’s more like there’s rules to follow. So, I don’t need to make any ethical decision”, later also noting that “there’s a difference between following the law and an ethical decision” (R5).

Nevertheless, participants recognised that established REC processes supported ethical reflection by encouraging them to “think about things a bit differently” (R1) and “stop and think a moment” (R2), and even suggested: “I think it would be wise if more organisations would have an AI ethics committee to stop and think before they build the AI because there are so many problems around this area. And many organisations don’t stop and think. They just do, and as a result we have a lot of biased AI and a lot of problems. So, I think the concept of having an AI ethics committee can be very, very valuable and we should actually move that from the university to the more corporate world as well.” (R2).

The importance of fine-grained contextual factors, and not merely relying on generic procedural ethics, was also noted. For example, that consent practices must be adapted to specific contexts, moving beyond basic procedural requirements. R3 notes: “We have to establish that dialogue with them. They’re not into reading consent forms, user agreements. So, we have to do the face to face dialogue and to explain to them. And some of them didn’t even realise what machine learning, AI, deep learning means. For them, it’s like they think we’re doing something robotics when we talk about AI. So, it’s very understanding. Different people have different perceptions. […] So, it goes beyond documented consent forms and user agreements. You have to have these dialogues, verbal communication. I think that’s very, very important in AI research. ”.

Perhaps unsurprisingly, a technical framing of ethical concerns was another theme among our participants’ responses—for instance, seeking to employ technical approaches to explainability or bias to proceduralise ethics—and using terms like “explainable”, “ethical”, and “responsible” interchangeably.

Challenges in AI research and soft impacts: Although the REC ethics process was generally seen as rigorous, and our participants viewed their own research as posing relatively low risks, concurrently they also observed that AI more generally might raise more and different kinds of issues, which the REC process is “not reflective of, how should I say, the ethical implications for artificial intelligence for the whole of society.” (R1).

Two examples of the relatively uncontroversial nature of many current uses of AI were that human-in-the-loop systems are often used to mediate AI’s decisions, and that the purpose for which AI is used is often relatively tame. As R5 put it: “not to be dramatic or controversial. So, I guess that’s an ethics thing that they are tapering their AI. They’re not making the extreme” (R5). At the same time, though, they also recognised broader ethical concerns regarding responsibility to the conduct of science and the public’s trust in science: “I guess that people feel aggrieved or unfairly dealt with. So, I guess if that feeling swelled, there would be less people who’d want to take part in my research if there was that feeling that it was unsafe, unsecure. And then I wouldn’t be able to conduct my research. Or if it grew wider, then no one would conduct any research sort of thing if there’s such mistrust there.” (R5).

Participants also commented on issues in AI research around soft impacts and commercialisation, including that the ethical use of data and ethical use of AI raise different issues (R4), and their sense that there is a gap in ethical research and development outside of universities: “ outside, people are doing whatever they want ” (R1), and “ AI is not necessarily localised and AI is borderless, and organisations would need to apply to all these different regulations when building the AI, which doesn’t really help in the process. ” (R3).

6 Discussion

Demands are emerging to put into place governance structures for AI research across sectors, inspired by existing research ethics governance models. In light of the findings of this research, we point to key issues and reflections in Table 4. As the table indicates, findings are largely consistent with prior work. The researchers were generally positive about the REC process as a means to support their reflection and provide oversight, while noting concerns regarding oversight of cross-sector work and long-range impacts. The implication, then, is that in considering the ethics ecosystem (Fig. 5), and how it draws on resources (Fig. 4), attention should be paid to how ethics governance and reflection can be inscribed so as to cross institutional and temporal boundaries, in order to foster ethical reflection and action across all research (and in this context, all research involving AI) (Table 4).

figure 5

Elaborated research ethics ecosystem

Grounded in the findings reflected in Table 4 , we propose a broader updated ethics ecosystem (Fig.  5 ) that builds on the governance recommendations reviewed (Table  1 ), Samuel et al.’s [ 73 ] ethics ecosystem model, and the features of it described in Sect. 3.2, highlighting the kinds of resources, and their role in learning, borne out in this research.

7 Conclusion

Rising awareness of AI has prompted increasing demands for its ethical governance and a plethora of ethical AI guidelines. RECs have a well-established history in ethics governance, but there have been concerns about their capacity to adequately govern AI research. However, no study to date has examined the ways that AI-related projects engage with the ethics ecosystem, or its adequacy for this context. This project is based on a single institution, on projects identified via a particular search strategy, and notably only on those that undertook a REC application. These contingencies present limitations, although we have no particular reason to believe that the results are particular to our institution (where AI is a strategic focus). Moreover, the model developed for analysing these applications presents a novel approach to understanding and assessing an ethics ecosystem, a contribution with broad application across both university and industry RECs.

Our results suggest that, despite calls for new structures, existing REC models can effectively support consideration of ethical issues in AI research. REC principles and processes were drawn on and referred to by our participants, and, in the Australian REC context at least, are embedded in a lineage of work on research ethics that is continuing to develop. Thus, where new materials are required, we propose that they should be embedded in this existing, well-established ecosystem, rather than creating novel governance mechanisms tailored specifically to AI.

Gaps were identified in the resources drawn on, and were noted by participants in the interviews. Participants expressed uncertainty about some practices, and noted that long-range impacts and issues such as secondary use of data may not be effectively addressed in existing guidance. However, it is not clear that these issues are addressed in AI ethics guidelines either; indeed, only one participant referred specifically to use of AI ethics principles, with multiple participants raising concerns that outside the research ethics context (the context from which these new guidelines have largely emerged) practices were more varied, and less rigorous.

One upshot of our study's findings is that the development of new AI principles may not be an optimal strategy for addressing ethical issues related to AI. Indeed, it is far from clear that the proliferation of AI-targeted principles has helped in practice. The results indicate that shared artefacts of practice, such as ethics applications and published articles referencing ethics, provide one (socio-material) lens into the practical usage of principles in context. These resources may be used to support learning by individual and organisational stakeholders. In tandem, organisations seeking to engage with ethics and AI should look to the well-established structures of RECs to build on this lineage. RECs themselves may develop further and support uptake in new contexts by evaluating how their communities (REC members, researchers, the public, etc.) learn about ethical issues, and where within institutional governance structures the kinds of issues specific to emerging technologies are addressed and updated in an ongoing way.

Data availability

Due to the nature of the research, and the legal and ethical restrictions on sharing of internal materials, supporting data is not available.

In this work we use the term 'AI' in a broad sense to refer to techniques that include both symbolic reasoning (e.g. expert systems) and statistical reasoning (i.e. the wide range of techniques often collectively referred to as "machine learning" (ML)), as well as hybrid techniques that employ an ensemble of statistical and symbolic reasoning techniques, targeting tasks that would otherwise require human intelligence, following the early Dartmouth workshop definition of AI [57].

We limit discussion here to human research ethics, although similar systems exist in the context of the ethics of research involving animals.

The nature of 'human subjects' and personal data is contested in the context of big data research, which often draws on publicly available datasets [58]. For this reason, alternatives to consent have been explored (e.g. [69]). Indeed, disagreements and perceptions of varying practices across researchers, and across academic- and industry-located research, exist across the Belmont principles with respect to use of online data in computer science [87].

Similar findings were reported in two further projects. The large EU SIENNA project surveyed REC members regarding specific technologies including AI and robotics, finding no consistent resources in use, with some respondents indicating that existing guidance sufficed and others seeking further targeted support [81]. A survey of US IRB committees, with respondents from 63 distinct institutions, similarly found mixed responses both to what should be required of researchers and to questions regarding IRB capability to assess proposals involving data and AI [86].

The research ethics ecosystem can of course also be connected to other research institution policies, including data and privacy regulation (and committees relating to these), and the broader structures and regulation for responsible AI beyond research contexts and the relevant material resources and their design characteristics [ 54 ].

A similar framing is provided by [83] in an analysis of research ethics.

As an aside regarding the social nature of research: the lead author attended a workshop run by these authors as part of a large EU project just as they entered postgraduate research (12 years ago). The benefits of academic meetings are often slow and diffuse, a point which is salient in considering immediate and long-range impacts and knowledge infrastructure.

While this approach was intended to augment our internal search, it may underreport relevant material given that (1) WoS provides an incomplete archive of all scholarly works; and (2) WoS search is based on article metadata (including title, abstract, and keywords), not full text. However, in contrast to more complete indexes such as Google Scholar [56], WoS provides more advanced search functionality, including full Boolean search and search over metadata fields. This is particularly significant when searching for terms such as "REC" or "Research Ethics Committee", where their discussion may be incidental. A limitation of this approach is that it requires 'ethics' to be explicitly mentioned; however, in our context this maximises the chances of retrieving publications with a substantive discussion of ethics.

See p.9 discussion which indicates that levels of reporting in publications are low.

Ada Lovelace Institute. 2021. ‘Supporting AI Research Ethics Committees’. 2021. https://www.adalovelaceinstitute.org/project/ai-research-ethics-committees/ .

Ada Lovelace Institute. 2022a. ‘Looking before We Leap Expanding Ethical Review Processes for AI and Data Science Research’. Ada Lovelace Institute. https://www.adalovelaceinstitute.org/wp-content/uploads/2022/12/Ada-Lovelace-Institute-Looking-before-we-leap-Dec-2022.pdf .

Ada Lovelace Institute. 2022b. ‘Looking before We Leap Expanding Ethical Review Processes for AI and Data Science Research Case Studies’. Ada Lovelace Institute. https://www.adalovelaceinstitute.org/wp-content/uploads/2022/12/Ada-Lovelace-Institute-Looking-before-we-leap-Case-studies-Dec-2022.pdf .

Akkerman, S., Bakker, A.: Boundary crossing and boundary objects. Rev. Educ. Res. 81 (2), 132–169 (2011). https://doi.org/10.3102/0034654311404435

Allen, G.: Getting beyond form filling: the role of institutional governance in human research ethics. J. Acad. Ethics 6 (2), 105–116 (2008). https://doi.org/10.1007/s10805-008-9057-9

Attard-Frost, B., De los Ríos, A., and Walters, D.R. 2022. 'The Ethics of AI Business Practices: A Review of 47 AI Ethics Guidelines'. AI and Ethics, April. https://doi.org/10.1007/s43681-022-00156-6

Australian Government: Department of Industry, Science, Energy and Resources. 2019. ‘Artificial Intelligence: Australia’s Ethics Framework and Consultation’. 2019. https://webarchive.nla.gov.au/awa/20200921003335/https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/ .

Badampudi, Deepika, Farnaz Fotrousi, Bruno Cartaxo, and Muhammad Usman. 2022. ‘Reporting Consent, Anonymity and Confidentiality Procedures Adopted in Empirical Studies Using Human Participants’. E-Informatica Softw. Eng. J. 16 (1): 220109. https://doi.org/10.37190/e-Inf220109 .

Barke, R.: Balancing uncertain risks and benefits in human subjects research. Sci. Technol. Hum. Values 34 (3), 337–364 (2009). https://doi.org/10.1177/0162243908328760

Beauchemin, É., Côté, L.P., Drolet, M.-J., Williams-Jones, B.: Conceptualising ethical issues in the conduct of research: results from a critical and systematic literature review. J. Acad. Ethics 20 (3), 335–358 (2022). https://doi.org/10.1007/s10805-021-09411-7

Bernstein, M.S., Levi, M., Magnus, D., Rajala, B.A., Satz, D., Waeiss, C.: Ethics and society review: ethics reflection as a precondition to research funding. Proc. Natl. Acad. Sci. U.S.A. 118 (52), e2117261118 (2021). https://doi.org/10.1073/pnas.2117261118

Birkle, C., Pendlebury, D.A., Schnell, J., Adams, J.: Web of science as a data source for research on scientific and scholarly activity. Quantitative Sci. Stud. 1 (1), 363–376 (2020). https://doi.org/10.1162/qss_a_00018

Blackman, R. 2022. ‘Why You Need an AI Ethics Committee’. Harvard Business Review , 1 July 2022. https://hbr.org/2022/07/why-you-need-an-ai-ethics-committee .

Bondarouk, T., and Huub, R. 2004. ‘Discourse Analysis: Making Complex Methodology Simple’. In ECIS 2004 Proceedings . https://ris.utwente.nl/ws/portalfiles/portal/5405415/ECIS2004-1.pdf .

Bosch, N., Say Chan, A., Davis, J. L., Gutiérrez, R., He, J., Karahalios, K., Koyejo, S. et al. 2022. ‘Artificial Intelligence and Social Responsibility: The Roles of the University’. A white paper by University of Illinois Urbana-Champaign. https://cra.org/ccc/wp-content/uploads/sites/2/2022/11/Symposium-on-Artificial-Intelligence-and-Social-Responsibility-.pdf .

Bowker, G.C., Star, L.S.: Sorting things out: classification and its consequences. MIT Press, Cambridge, MA (1999)

Brown, C., Spiro, J., Quinton, S.: The role of research ethics committees: friend or foe in educational research? an exploratory study. Br. Edu. Res. J. 46 (4), 747–769 (2020). https://doi.org/10.1002/berj.3654

Carniel, J., Hickey, A., Southey, K., Brömdal, A., Crowley-Cyr, L., Eacersall, D., Farmer, W., Gehrmann, R., Machin, T., Pillay Y. 2022. ‘The ethics review and the humanities and social sciences: disciplinary distinctions in ethics review processes’. Research Ethics , December, 17470161221147202. https://doi.org/10.1177/17470161221147202 .

Chi, Nicole, Emma Lurie, and Deirdre K. Mulligan. 2021. ‘Reconfiguring Diversity and Inclusion for AI Ethics’. In Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society , 447–57. Virtual Event USA: ACM. https://doi.org/10.1145/3461702.3462622 .

Coleman, C.H., Bouësseau, M.-C.: How do we know that research ethics committees are really working? The neglected role of outcomes assessment in research ethics review. BMC Med. Ethics 9 (1), 1–7 (2008). https://doi.org/10.1186/1472-6939-9-6

Cross, J.E., Pickering, K., Hickey, M.: Community-based participatory research, ethics, and institutional review boards: untying a gordian knot. Crit. Sociol. 41 (7–8), 1007–1026 (2015). https://doi.org/10.1177/0896920513512696

Cuellar, M.-F., Larsen, B., Lee, Y.S., Webb, M.: Does information about AI regulation change manager evaluation of ethical concerns and intent to adopt AI? Journal of Law Economics & Organization, April. (2022). https://doi.org/10.1093/jleo/ewac004

Delft University. 2022. ‘Delft University of Technology HUMAN RESEARCH ETHICS COMPLETING THE HREC CHECKLIST (Version January 2022)’. 2022. https://d2k0ddhflgrk1i.cloudfront.net/TUDelft/Over_TU_Delft/Strategie/Integriteitsbeleid/Research%20ethics/2_CHC-completing%20the%20HREC%20checklist_2022.pdf .

D’ignazio, C., and L. F. Klein. 2020. Data Feminism . MIT press.

Doerr, M., Meeder, S.: Big health data research and group harm: the scope of IRB review. Ethics Hum. Res. 44 (4), 34–38 (2022). https://doi.org/10.1002/eahr.500130

Drolet, M.-J., Rose-Derouin, E., Leblanc, J.-C., Ruest, M., Williams-Jones, B.: Ethical issues in research: perceptions of researchers, research ethics board members and research ethics experts. J. Acad. Ethics August. (2022). https://doi.org/10.1007/s10805-022-09455-3

DuBois, J.M., Volpe, R.L., Rangel, E.K.: Hidden empirical research ethics: a review of three health journals from 2005 through 2006. J. Empir. Res. Hum. Res. Ethics 3 (3), 7–18 (2008). https://doi.org/10.1525/jer.2008.3.3.7

Eto, T. 2022. ‘Conducting an Effective IRB Review of Artificial Intelligence Human Subjects Research (AI HSR)’. Technology In Human Subjects Research. https://techinhsr.com/wp-content/uploads/2022/08/AI-HSR-WHITE-PAPER-TechInHSR-08.2022-1.pdf .

Ferretti, A. 2021. ‘Ethics and Governance of Big Data in Health Research and Digital Health Applications’. Doctoral Thesis, ETH Zurich. https://doi.org/10.3929/ethz-b-000489154 .

Ferretti, A., Ienca, M., Sheehan, M., Blasimme, A., Dove, E.S., Farsides, B., Friesen, P., Kahn, J., Karlen, W., Kleist, P.: Ethics review of big data research: What should stay and what should be reformed? BMC Med. Ethics 22 (1), 1–13 (2021). https://doi.org/10.1186/s12910-021-00616-4

Ferretti, A., Ienca, M., Velarde, M.R., Hurst, S., Vayena, E.: The challenges of big data for research ethics committees: a qualitative swiss study. J. Empir. Res. Hum. Res. Ethics 17 (1–2), 129–143 (2022). https://doi.org/10.1177/15562646211053538

Fjeld, J., Achten, N., Hilligoss, H., Nagy, A., Srikumar, M., 2020. ‘Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI’. SSRN Scholarly Paper. Rochester, NY. https://doi.org/10.2139/ssrn.3518482 .

Frauenberger, C., Rauhala, M., Fitzpatrick, G.: In-action ethics. Interact. Comput. 29 (2), 220–236 (2017). https://doi.org/10.1093/iwc/iww024

Freeman, R., and Steve Sturdy. 2014. ‘Introduction: Knowledge in Policy—Embodied, Inscribed, Enacted’. In Knowledge in Policy .

Friesen, P., Douglas-Jones, R., Marks, M., Pierce, R., Fletcher, K., Mishra, A., Lorimer, J., Véliz, C., Hallowell, N., Graham, M.: Governing AI-driven health research: are IRBs up to the task? Ethics Hum. Res. 43 (2), 35–42 (2021). https://doi.org/10.1002/eahr.500085

Gooding, P., Kariotis, T.: Ethics and law in research on algorithmic and data-driven technology in mental health care: scoping review. Jmir Mental Health 8 (6), e24668 (2021). https://doi.org/10.2196/24668

Goodyear-Smith, F., Lobb, B., Davies, G., Nachson, I., Seelau, S.M.: International variation in ethics committee requirements: comparisons across five westernised nations. BMC Med. Ethics 3 (1), 1–8 (2002). https://doi.org/10.1186/1472-6939-3-2

Guillemin, M., Gillam, L.: Ethics, reflexivity, and “ethically important moments” in research. Qual. Inq. 10 (2), 261–280 (2004). https://doi.org/10.1177/1077800403262360

Health Research Authority. 2022. ‘Improving Our Review of Research Using AI and Data-Driven Technologies’. Health Research Authority. 2022. https://www.hra.nhs.uk/planning-and-improving-research/research-planning/how-were-supporting-data-driven-technology/sddr/improving-our-review-research-using-ai-and-data-driven-technologies/ .

Hickey, A., Davis, S., Farmer, W., Dawidowicz, J., Moloney, C., Lamont-Mills, A., Carniel, J., et al.: Beyond criticism of ethics review boards: strategies for engaging research communities and enhancing ethical review processes. J. Acad. Ethics 20 (4), 549–567 (2022). https://doi.org/10.1007/s10805-021-09430-4

Hine, C.: Evaluating the prospects for university-based ethical governance in artificial intelligence and data-driven innovation. Res. Ethics 17 (4), 464–479 (2021). https://doi.org/10.1177/17470161211022790

Hutson, M. 2021. ‘Who Should Stop Unethical A.I.?’ The New Yorker , 15 February 2021. https://www.newyorker.com/tech/annals-of-technology/who-should-stop-unethical-ai .

‘IEEE Ethics In Action in Autonomous and Intelligent Systems | IEEE SA’. n.d. Ethics In Action | Ethically Aligned Design. Accessed 10 January 2023. https://ethicsinaction.ieee.org/ .

Israel, M. 2015. ‘Regulating Ethics’. In Research Ethics and Integrity for Social Scientists: Beyond Regulatory Compliance . 1 Oliver’s Yard, 55 City Road, London EC1Y 1SP United Kingdom: SAGE Publications Ltd. https://doi.org/10.4135/9781473910096 .

Jobin, A., Ienca, M., Vayena, E.: The global landscape of AI ethics guidelines. Nat Mach Intell 1 (9), 389–399 (2019). https://doi.org/10.1038/s42256-019-0088-2

Jordan, S. R. 2019. ‘Designing an artificial intelligence research review committee’. Future of Privacy Forum.

Journal of Empirical Research on Human Ethics. n.d. ‘Journal of Empirical Research on Human Ethics: Manuscript Preparation’. Accessed 13 January 2023. https://journals.sagepub.com/pb-assets/cmscontent/JRE/JERPrep.pdf .

Keyes, O., Hutson, J., Durbin, M. 2019. ‘A Mulching proposal: analysing and improving an algorithmic system for turning the elderly into high-nutrient slurry’. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems , 1–11. Glasgow Scotland Uk: ACM. https://doi.org/10.1145/3290607.3310433 .

Khan, A.A., Badshah, S., Liang, P., Waseem, M., Khan, B., Ahmad, A., Fahmideh, M., Niazi, M., Azeem Akbar, M. 2022. ‘Ethics of AI: A Systematic Literature Review of Principles and Challenges’. In The International Conference on Evaluation and Assessment in Software Engineering 2022 , 383–92. EASE ’22. Gothenburg Sweden: ACM. https://doi.org/10.1145/3530019.3531329 .

Kitto, K., Knight, S.: Practical ethics for building learning analytics. Br. J. Edu. Technol. 50 (6), 2855–2870 (2019). https://doi.org/10.1111/bjet.12868

Knight, S., Shibani, A., Shum, S.B.: A reflective design case of practical ethics in learning analytics. Br. J. Edu. Technol. (2023). https://doi.org/10.1111/bjet.13323

Leetaru, K. 2016. ‘Are Research Ethics Obsolete In The Era Of Big Data?’ Forbes. 2016. https://www.forbes.com/sites/kalevleetaru/2016/06/17/are-research-ethics-obsolete-in-the-era-of-big-data/ .

Leetaru, K. 2017. ‘AI “Gaydar” And How The Future Of AI Will Be Exempt From Ethical Review’. Forbes. 2017. https://www.forbes.com/sites/kalevleetaru/2017/09/16/ai-gaydar-and-how-the-future-of-ai-will-be-exempt-from-ethical-review/ .

Lu, Q., Zhu, L., Xu, X., Whittle, J., Zowghi, D. , Jacquet, A. 2022. ‘Responsible AI Pattern Catalogue: A Multivocal Literature Review’. arXiv. https://doi.org/10.48550/arXiv.2209.04963 .

Macdonald, H. 2014. ‘Transnational Excursions: The Ethics of Northern Anthropological Investigations Going South’. Ethical Quandaries in Social Research , December. https://www.academia.edu/11524903/Transnational_excursions_The_ethics_of_northern_anthropological_investigations_going_south .

Martín-Martín, A., Thelwall, M., Orduna-Malea, E., López-Cózar, E.D.: Google scholar, microsoft academic, scopus, dimensions, web of science, and opencitations’ COCI: a multidisciplinary comparison of coverage via citations. Scientometrics 126 (1), 871–906 (2021). https://doi.org/10.1007/s11192-020-03690-4

McCarthy, J., Minsky, M.L., Rochester, N., Shannon, C.E.: A proposal for the dartmouth summer research project on artificial intelligence, August 31, 1955. AI Mag. 27 (4), 12–12 (1955)

Metcalf, J., Crawford, K.: Where are human subjects in big data research? The emerging ethics divide. Big Data Soc. 3 (1), 2053951716650211 (2016). https://doi.org/10.1177/2053951716650211

Miller, C, Coldicutt, R. 2019. ‘People, Power and Technology: The Tech Workers’ View’. doteveryone. https://doteveryone.org.uk/report/workersview/ .

Mittelstadt, B.: Principles alone cannot guarantee ethical AI. Nat. Mach. Intell. 1 (11), 501–507 (2019). https://doi.org/10.1038/s42256-019-0114-4

Molina, J.L., Borgatti, S.P.: Moral Bureaucracies and Social Network Research. Soc. Netw. 67 , 13–19 (2021). https://doi.org/10.1016/j.socnet.2019.11.001

Morley, J., Kinsey, L., Elhalal, A., Garcia, F., Ziosi, M., Floridi, L.: Operationalising AI Ethics: Barriers, Enablers and next Steps. AI & Soc. November. (2021). https://doi.org/10.1007/s00146-021-01308-8

Morton, J.: “Text-Work” in research ethics review: the significance of documents in and beyond committee meetings. Account. Res. 25 (7–8), 387–403 (2018). https://doi.org/10.1080/08989621.2018.1537790

Munteanu, C., Molyneaux, H., Moncur, W., Romero, M., O’Donnell, S., Vines, J. 2015. ‘Situational Ethics: Re-Thinking Approaches to Formal Ethics Requirements for Human-Computer Interaction’. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems , 105–14. CHI ’15. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2702123.2702481 .

National Statement. 2018. ‘National Statement on Ethical Conduct in Human Research’. National Health and Medical Research Council, the Australian Research Council and Universities Australia. https://www.nhmrc.gov.au/about-us/publications/national-statement-ethical-conduct-human-research-2007-updated-2018 .

Office for Human Research Protections (OHRP). 1978. ‘Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research’. https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html .

Pater, J., Fiesler, C., Zimmer, M. 2022. No Humans Here: Ethical Speculation on Public Data, Unintended Consequences, and the Limits of Institutional Review. Proceedings of the ACM on Human-Computer Interaction 6 (GROUP): 38:1–38:13. https://doi.org/10.1145/3492857 .

Petrozzino, C.: Who pays for ethical debt in AI? AI and Ethics 1 (3), 205–208 (2021). https://doi.org/10.1007/s43681-020-00030-3

Pickering, B.: Trust, but verify: informed consent, AI technologies, and public health emergencies. Future Internet 13 (5), 132 (2021). https://doi.org/10.3390/fi13050132

Pieper, I., Thomson, C.J.H.: Contextualising merit and integrity within human research. Monash Bioeth. Rev. 29 (4), 39–48 (2011). https://doi.org/10.1007/BF03351329

Rapley, T. 2007. Doing Conversation, Discourse and Document Analysis . 1 Oliver’s Yard, 55 City Road, London England EC1Y 1SP United Kingdom: SAGE Publications Ltd. https://doi.org/10.4135/9781849208901 .

Resseguier, A., Rodrigues, R., Santiago, N. 2021. Ethics as Attention to Context: Recommendations for AI Ethics Annex to D5.4: Multi-Stakeholder Strategy and Tools for Ethical AI and Robotics. Sienna Project. https://www.sienna-project.eu/digitalAssets/915/c_915542-l_1-k_ethics-as-attention_sienna_jan-2021.pdf .

Samuel, G., Derrick, G., van Leeuwen, T.: The ethics ecosystem: personal ethics, network governance and regulating actors governing the use of social media research data. Minerva 57 (3), 317–343 (2019). https://doi.org/10.1007/s11024-019-09368-3

Sandler, R., and Basl, J. 2019. Building data and ai ethics committees. Accenture and Ethics Institute at Northeastern University. https://www.accenture.com/_acnmedia/pdf-107/accenture-ai-and-data-ethics-committee-report-11.pdf .

Schiff, D., Borenstein, J., Biddle, J., Laas, K.: AI ethics in the public, private, and NGO sectors: a review of a global document collection. IEEE Trans. Technol. Soc. 2 (1), 31–42 (2021). https://doi.org/10.1109/TTS.2021.3052127

Shevlane, T., and Dafoe, A. 2020. The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse? In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society , 173–79. AIES ’20. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3375627.3375815 .

Singh, V.K., Singh, P., Karmakar, M., Leta, J., Mayr, P.: The journal coverage of web of science, scopus and dimensions: a comparative analysis. Scientometrics 126 (6), 5113–5142 (2021). https://doi.org/10.1007/s11192-021-03948-5

Star, S.L., Griesemer, J.R.: Institutional ecology, translations’ and boundary objects: amateurs and professionals in Berkeley’s museum of vertebrate zoology, 1907–39. Soc. Stud. Sci. 19 (3), 387–420 (1989). https://doi.org/10.1177/030631289019003001

Steerling, E., Houston, R., Gietzen, L.J., Ogilvie, S.J., de Ruiter, H.-P., Nygren, J.M.: Examining how ethics in relation to health technology is described in the research literature: scoping review. Interactive Journal of Medical Research 11 (2), e38745 (2022). https://doi.org/10.2196/38745

Swierstra, T.: Identifying the normative challenges posed by technology’s “soft” impacts. Etikk i Praksis - Nordic Journal of Applied Ethics, 1 (May), 5–20 (2015). https://doi.org/10.5324/EIP.V9I1.1838

Tambornino, L., Lanzerath, D. Rodrigues, R., Wright, D. 2019. SIENNA D4.3: Survey of REC Approaches and codes for artificial intelligence & robotics, August. https://doi.org/10.5281/zenodo.4067990 .

The Department of Health and Human Services. n.d. 45 CFR Part 46 (2018–07–19)—Protection of Human Subjects . Protection of Human Subjects . Vol. Title 45. Accessed 10 January 2023. https://www.ecfr.gov/on/2018-07-19/title-45/subtitle-A/subchapter-A/part-46 .

Tummons, J. 2022. The Many Worlds of Ethics: Proposing a Latourian Investigation of the Work of Research Ethics in Ethnographies of Education. In Ethics, Ethnography and Education , edited by Lisa Russell, Ruth Barley, and Jonathan Tummons, 19:11–28. Studies in Educational Ethnography. Emerald Publishing Limited. https://doi.org/10.1108/S1529-210X20220000019002 .

UKRI. 2022. ‘Embedding Ethics in Artificial Intelligence Research’. 2022. https://www.ukri.org/about-us/how-we-are-doing/research-outcomes-and-impact/ahrc/embedding-ethics-in-artificial-intelligence-research/ .

UTS.: AHRC Human Rights and Technology Issues Paper: UTS Response and Submission. University of Technology Sydney (2018). https://www.uts.edu.au/sites/default/files/2018-12/Human%20Rights%20%26%20Technology%20Issues%20Paper_UTS%20submission.pdf

Vitak, J., Proferes, N., Shilton, K., Ashktorab, Z.: Ethics regulation in social computing research: examining the role of institutional review boards. J. Empir. Res. Hum. Res. Ethics 12 (5), 372–382 (2017). https://doi.org/10.1177/1556264617725200

Vitak, J., Shilton, K., Ashktorab, Z. 2016. Beyond the Belmont Principles: Ethical Challenges, Practices, and Beliefs in the Online Data Research Community. In Proceedings of the 19th ACM Conference on Computer-Supported Cooperative Work & Social Computing , 941–53. CSCW ’16. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/2818048.2820078 .

Weinbaum, C., Landree, E., Blumenthal, M., Piquado, T., Gutierrez, C.: Ethics in scientific research: an examination of ethical principles and emerging topics. RAND Corporation (2019). https://doi.org/10.7249/RR2912

Whittlestone, J., Nyrup, R., Alexandrova, A., Cave, S. 2019. The role and limits of principles in ai ethics: towards a focus on tensions. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society , 195–200. AIES ’19. New York, NY, USA: Association for Computing Machinery. https://doi.org/10.1145/3306618.3314289 .

Zhang, J.J.: Research ethics and ethical research: some observations from the global south. J. Geogr. High. Educ. 41 (1), 147–154 (2017). https://doi.org/10.1080/03098265.2016.1241985

Zhou, J., and Chen, F. 2022. ‘AI Ethics: From Principles to Practice’. AI & SOCIETY , 1–11.

Acknowledgements

The authors would like to acknowledge the support of the UTS Research Ethics secretariat, in particular Racheal Laugery, for their assistance in this project. Our thanks to the researchers who generously shared their materials and time with us in interviews for their contribution. Mark Israel (Australasian Human Research Ethics Consultancy Services, AHRECS) provided helpful input on a number of issues regarding RECs, particularly in international comparison. Our thanks too to Linda Przhedetsky for her research assistance.

Open Access funding enabled and organized by CAUL and its Member Institutions.

Author information

Authors and Affiliations

University of Technology Sydney, TD School, PO Box 123, Broadway, NSW, 2007, Australia

Simon Knight, Antonette Shibani & Nicole Vincent

Centre for Research on Education in a Digital Society (CREDS) School, University of Technology Sydney, PO Box 123, Broadway, NSW, 2007, Australia

Simon Knight

Corresponding author

Correspondence to Simon Knight .

Ethics declarations

Conflict of interest.

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Knight, S., Shibani, A. & Vincent, N. Ethical AI governance: mapping a research ecosystem. AI Ethics (2024). https://doi.org/10.1007/s43681-023-00416-z

Received : 14 September 2023

Accepted : 22 December 2023

Published : 14 February 2024

DOI : https://doi.org/10.1007/s43681-023-00416-z

  • Research ethics
  • Research governance
  • Artificial intelligence
  • Professional learning
  • Sociomaterial
  • J Indian Assoc Pediatr Surg
  • v.25(6); Nov-Dec 2020

Ethics in Research and Publication

Pradyumna Pan

Ashish Hospital and Research Centre, Pediatric Surgery Unit, Jabalpur, Madhya Pradesh, India

Published articles in scientific journals are a key method for knowledge-sharing. Researchers can face pressure to publish, and this can sometimes lead to a breach of ethical values, whether conscious or unconscious. Such practices are prevented by the application of strict ethical guidelines applicable to experiments involving human subjects or biological tissues. Editors too face ethical problems, including how best to handle peer-review bias and how to find reviewers with experience, probity, and professionalism. This article emphasizes that authors and their sponsoring organizations need to be informed of the importance of upholding research guidelines and ethical rules when disclosing scientific work.


Accurate reporting of research results depends on the integrity of the authors and their compliance with guidelines assuring an ethical approach throughout, as well as on robust institutional research governance protocols ensuring that study design, conduct, analysis, and the publishing process all comply with an ethical framework. There is a growing concern that research misconduct has become more common over the past two decades.[1] It is challenging to determine whether this apparent increase reflects a true rise in the number of misconducts committed, or whether detection has improved during this period.[2]


Everyone involved in research must comply with the ethical framework within which they should function. The Committee on Publication Ethics (COPE) published guidelines on Good Publication Practice in 1999[ 3 ] and continues to update these regularly.[ 4 ]

Study design

The design of a study is the collection of methods and procedures used to gather and analyze data on the variables defined in the research. A poorly designed study can never be recovered, whereas an inadequately analyzed study can be re-analyzed to reach a meaningful conclusion.[ 5 ] The study design should be clearly expressed in a written protocol. In clinical studies, the number of participants included in the analysis should be large enough to give a definitive result. Local research ethics committees should withhold approval until deficiencies in the study design have been corrected. All investigators should agree on the final protocol, and their contributions should be clearly defined.

Ethical approval

For all studies involving individuals or medical records, approval from a duly appointed research ethics committee is necessary. The research protocol should adhere strictly to international standards such as those of the Council for International Organizations of Medical Sciences (CIOMS).[ 6 ]

When human tissues or body fluids have been collected for one project, for which ethical authorization and consent were obtained, the preserved specimens cannot be used again without further permission. It should be presumed that no author may publish research on humans or animals that does not follow the ethical standards of the country where the article is published.[ 2 ]

Data analysis

The data analysis methodology should be clearly stated in the protocol. Variations such as post hoc analyses or data omissions should be agreed upon by all investigators and reported in the paper.[ 7 ] The capacity for manipulating data electronically is now enormous. Original images should always be retained, and any alteration should be disclosed.

The International Committee of Medical Journal Editors (the Vancouver Group) has developed authorship guidelines requiring each author to have made a substantial contribution throughout the process.[ 8 ] In the past, honorary authorship was employed widely, but the notion that the professor or department head should inevitably find his or her way onto a paper is no longer acceptable. Each contributor should be able to state clearly how they took part in the study. Each author must take public responsibility for the work published in the journal, and it is desirable to have one senior author serve as guarantor. Participation in fundraising, data collection, or general supervision of the research is insufficient for authorship. Authorship should be based on substantial contributions to: (1) concept and design, (2) interpretation of data, (3) drafting and critical revision of intellectual content, and (4) final approval of the version to be published.[ 2 ]

A potential conflict of interest exists when an investigator, author, editor, or reviewer has a financial or personal interest, or an opinion, that may impair their objectivity or improperly influence their behavior. Financial ties are the most visible competing interests, but competing interests can also arise from personal relationships, academic rivalry, and intellectual zeal. Competing interests are not unethical as long as they are disclosed: they should be declared to the ethics committee and to the editor of the journal to which an article is submitted.


Peer review is the method used to evaluate the quality of articles submitted to a journal, and COPE has developed ethical guidelines for peer reviewers.[ 9 ] The relationship between the author, the editor, and the peer reviewer is a confidential collaboration: the manuscript should be passed on to a colleague or other individuals only with the editor's permission, and a reviewer or editor should not use the information contained in the paper for their own benefit.[ 2 ] Journals should have clearly defined and communicated policies on the type of peer review used, for example, single-blinded, double-blinded, open, or postpublication.[ 10 ] Peer reviewers can play a vital role in detecting data fabrication, falsification, plagiarism, image manipulation, unethical research, biased reporting, authorship abuse, redundant or duplicate publication, and undeclared conflicts of interest.[ 11 ]


Editors are the wardens of the scientific literature and are responsible for maintaining high standards of research and publication ethics. There may be competing interests among participants, and it is the editor's responsibility to ensure that these do not affect the journal. Editors should not hesitate to publish work that challenges previously published studies in their journal, and they should not reject studies with negative results.[ 2 ] Editors must act promptly when a published paper is found to involve publication misconduct.[ 12 ]


Research misconduct represents a spectrum ranging from errors of judgment (mistakes made in good faith) to deliberate fraud, usually categorized as fabrication, falsification, and plagiarism.[ 13 ]

Falsification is the changing or omission of research results (data) or manipulation of images or representations in a manner that distorts the data to support claims or hypotheses.[ 13 ]

Fabrication is the construction or addition of data, observations, or characterizations that never occurred in the gathering of data or running of experiments.[ 13 ]

Plagiarism is the use of another individual's or group's published work or unpublished ideas, language, thoughts, or expressions, and the representation of them as one's own original work.[ 14 ] The advent of digital material and its ease of accessibility have accelerated plagiarism.[ 15 ] In some instances, plagiarism is used as a tool to cover up language problems for those for whom English is not their first language. Where language is a problem, authors should always be encouraged to obtain help in preparing their manuscript rather than resort to using other people's words. It is unacceptable to republish a paper with minor changes, without referring to the primary publication, and to present it to the readership as new.[ 16 ]


Redundant publication (sometimes referred to as duplicate or triplicate publication) is the term used when two or more papers that overlap in a significant way are published in different journals without cross-reference.[ 17 ] It is not uncommon for two or more papers involving the same or similar patient database to be published in sequence. The authors should disclose this to the editor and make a cross-reference to previous papers. It is permissible to publish a paper in another language as long as this is disclosed.

Motives for misconduct

The motives that lead investigators to fabricate data are not well understood, and improving understanding of why researchers commit misconduct and detrimental research practices (DRPs) is essential. Possible reasons include: (1) career and funding pressures, (2) institutional failures of oversight, (3) commercial conflicts of interest, (4) inadequate training, (5) erosion of standards of mentoring, and (6) part of a larger pattern of social deviance.[ 18 ]

Prevention of misconduct

The widespread nature of research and publication misconduct indicates that existing control measures are inadequate, and enhanced methods for detecting misconduct are required. Yet even if research policing were made more effective, the fundamental question of why certain individuals violate their duties as scientists or medical researchers, intentionally or unintentionally, would not be addressed. Clear guidance on ethics should be emphasized during research training in all institutions actively involved in research.[ 19 ] Training is a crucial step in avoiding publication misconduct: all researchers should be presented with organizational guidance and publishing ethics when they join a new organization. Misconduct may be more common when investigators work alone, with inadequate review of data by a project supervisor. Research integrity depends on excellent communication between contributors, with frequent discussion of project progress and openness about any difficulties in adhering to the research protocol. Everyone should agree to changes to the protocol. Record keeping must be of the highest quality: the law requires data and photographic records of experimental results to be maintained for 15 years, and the records of laboratory experiments should be held in the department where the study is carried out and be available for review for at least 15 years.

Strategies to support research integrity

  • Ensure policies governing academic research not only are in place but are followed
  • Enforce expectations for process rigor
  • Communicate expectations for accurate accounting of time spent on research activities
  • Evaluate the grant accounting function
  • Establish an office of research integrity.[ 20 ]


Accurate and ethical reporting is crucial to the quality of published scientific research. Unethical practices such as falsification of data and plagiarism cause long-term damage to the dependability of the published literature. While such practices still occur, they can be prevented by robust institutional ethical processes, regular training, and editorial vigilance.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.



A Columbia Surgeon’s Study Was Pulled. He Kept Publishing Flawed Data.

The quiet withdrawal of a 2021 cancer study by Dr. Sam Yoon highlights scientific publishers’ lack of transparency around data problems.


By Benjamin Mueller

Benjamin Mueller covers medical science and has reported on several research scandals.

  • Feb. 15, 2024

The stomach cancer study was shot through with suspicious data. Identical constellations of cells were said to depict separate experiments on wholly different biological lineages. Photos of tumor-stricken mice, used to show that a drug reduced cancer growth, had been featured in two previous papers describing other treatments.

Problems with the study were severe enough that its publisher, after finding that the paper violated ethics guidelines, formally withdrew it within a few months of its publication in 2021. The study was then wiped from the internet, leaving behind a barren web page that said nothing about the reasons for its removal.

As it turned out, the flawed study was part of a pattern. Since 2008, two of its authors — Dr. Sam S. Yoon, chief of a cancer surgery division at Columbia University’s medical center, and a more junior cancer biologist — have collaborated with a rotating cast of researchers on a combined 26 articles that a British scientific sleuth has publicly flagged for containing suspect data. A medical journal retracted one of them this month after inquiries from The New York Times.

A covered walkway connecting two buildings bears a large blue sign: "Columbia University Irving Medical Center."

Memorial Sloan Kettering Cancer Center, where Dr. Yoon worked when much of the research was done, is now investigating the studies. Columbia’s medical center declined to comment on specific allegations, saying only that it reviews “any concerns about scientific integrity brought to our attention.”

Dr. Yoon, who has said his research could lead to better cancer treatments, did not answer repeated questions. Attempts to speak to the other researcher, Changhwan Yoon, an associate research scientist at Columbia, were also unsuccessful.

The allegations were aired in recent months in online comments on a science forum and in a blog post by Sholto David, an independent molecular biologist. He has ferreted out problems in a raft of high-profile cancer research, including dozens of papers at a Harvard cancer center that were subsequently referred for retractions or corrections.

From his flat in Wales, Dr. David pores over published images of cells, tumors and mice in his spare time and then reports slip-ups, trying to close the gap between people’s regard for academic research and the sometimes shoddier realities of the profession.

When evaluating scientific images, it is difficult to distinguish sloppy copy-and-paste errors from deliberate doctoring of data. Two other imaging experts who reviewed the allegations at the request of The Times said some of the discrepancies identified by Dr. David bore signs of manipulation, like flipped, rotated or seemingly digitally altered images.

Armed with A.I.-powered detection tools, scientists and bloggers have recently exposed a growing body of such questionable research, like the faulty papers at Harvard’s Dana-Farber Cancer Institute and studies by Stanford’s president that led to his resignation last year.

But those high-profile cases were merely the tip of the iceberg, experts said. A deeper pool of unreliable research has gone unaddressed for years, shielded in part by powerful scientific publishers driven to put out huge volumes of studies while avoiding the reputational damage of retracting them publicly.

The quiet removal of the 2021 stomach cancer study from Dr. Yoon’s lab, a copy of which was reviewed by The Times, illustrates how that system of scientific publishing has helped enable faulty research, experts said. In some cases, critical medical fields have remained seeded with erroneous studies.

“The journals do the bare minimum,” said Elisabeth Bik, a microbiologist and image expert who described Dr. Yoon’s papers as showing a worrisome pattern of copied or doctored data. “There’s no oversight.”

Memorial Sloan Kettering, where portions of the stomach cancer research were done, said no one — not the journal nor the researchers — had ever told administrators that the paper was withdrawn or why it had been. The study said it was supported in part by federal funding given to the cancer center.

Dr. Yoon, a stomach cancer specialist and a proponent of robotic surgery, kept climbing the academic ranks, bringing his junior researcher along with him. In September 2021, around the time the study was published, he joined Columbia, which celebrated his prolific research output in a news release. His work was financed in part by half a million dollars in federal research money that year, adding to a career haul of nearly $5 million in federal funds.

The decision by the stomach cancer study’s publisher, Elsevier, not to post an explanation for the paper’s removal made it less likely that the episode would draw public attention or affect the duo’s work. That very study continued to be cited in papers by other scientists.

And as recently as last year, Dr. Yoon’s lab published more studies containing identical images that were said to depict separate experiments, according to Dr. David’s analyses.

The researchers’ suspicious publications stretch back 16 years. Over time, relatively minor image copies in papers by Dr. Yoon gave way to more serious discrepancies in studies he collaborated on with Changhwan Yoon, Dr. David said. The pair, who are not related, began publishing articles together around 2013.

But neither their employers nor their publishers seemed to start investigating their work until this past fall, when Dr. David published his initial findings on For Better Science, a blog, and notified Memorial Sloan Kettering, Columbia and the journals. Memorial Sloan Kettering said it began its investigation then.

None of those flagged studies was retracted until last week. Three days after The Times asked publishers about the allegations, the journal Oncotarget retracted a 2016 study on combating certain pernicious cancers. In a retraction notice, the journal said the authors’ explanations for copied images “were deemed unacceptable.”

The belated action was symptomatic of what experts described as a broken system for policing scientific research.

A proliferation of medical journals, they said, has helped fuel demand for ever more research articles. But those same journals, many of them operated by multibillion-dollar publishing companies, often respond slowly or do nothing at all once one of those articles is shown to contain copied data. Journals retract papers at a fraction of the rate at which they publish ones with problems.

Springer Nature, which published nine of the articles that Dr. David said contained discrepancies across five journals, said it was investigating concerns. So did the American Association for Cancer Research, which published 10 articles under question from Dr. Yoon’s lab across four journals.

It is difficult to know who is responsible for errors in articles. Eleven of the scientists’ co-authors, including researchers at Harvard, Duke and Georgetown, did not answer emailed inquiries.

The articles under question examined why certain stomach and soft-tissue cancers withstood treatment, and how that resistance could be overcome.

The two independent image specialists said the volume of copied data, along with signs that some images had been rotated or similarly manipulated, suggested considerable sloppiness or worse.

“There are examples in this set that raise pretty serious red flags for the possibility of misconduct,” said Dr. Matthew Schrag, a Vanderbilt University neurologist who commented as part of his outside work on research integrity.

One set of 10 articles identified by Dr. David showed repeated reuse of identical or overlapping black-and-white images of cancer cells supposedly under different experimental conditions, he said.

“There’s no reason to have done that unless you weren’t doing the work,” Dr. David said.

One of those papers, published in 2012, was formally tagged with corrections. Unlike later studies, which were largely overseen by Dr. Yoon in New York, this paper was written by South Korea-based scientists, including Changhwan Yoon, who then worked in Seoul.

An immunologist in Norway randomly selected the paper as part of a screening of copied data in cancer journals. That led the paper’s publisher, the medical journal Oncogene, to add corrections in 2016.

But the journal did not catch all of the duplicated data, Dr. David said. And, he said, images from the study later turned up in identical form in another paper that remains uncorrected.

Copied cancer data kept recurring, Dr. David said. A picture of a small red tumor from a 2017 study reappeared in papers in 2020 and 2021 under different descriptions, he said. A ruler included in the pictures for scale wound up in two different positions.

The 2020 study included another tumor image that Dr. David said appeared to be a mirror image of one previously published by Dr. Yoon’s lab. And the 2021 study featured a color version of a tumor that had appeared in an earlier paper atop a different section of ruler, Dr. David said.

“This is another example where this looks intentionally done,” Dr. Bik said.

The researchers were faced with more serious action when the publisher Elsevier withdrew the stomach cancer study that had been published online in 2021. “The editors determined that the article violated journal publishing ethics guidelines,” Elsevier said.

Roland Herzog, the editor of Molecular Therapy, the journal where the article appeared, said that “image duplications were noticed” as part of a process of screening for discrepancies that the journal has since continued to beef up.

Because the problems were detected before the study was ever published in the print journal, Elsevier’s policy dictated that the article be taken down and no explanation posted online.

But that decision appeared to conflict with industry guidelines from the Committee on Publication Ethics. Posting articles online “usually constitutes publication,” those guidelines state. And when publishers pull such articles, the guidelines say, they should keep the work online for the sake of transparency and post “a clear notice of retraction.”

Dr. Herzog said he personally hoped that such an explanation could still be posted for the stomach cancer study. The journal editors and Elsevier, he said, are examining possible options.

The editors notified Dr. Yoon and Changhwan Yoon of the article’s removal, but neither scientist alerted Memorial Sloan Kettering, the hospital said. Columbia did not say whether it had been told.

Experts said the handling of the article was symptomatic of a tendency on the part of scientific publishers to obscure reports of lapses.

“This is typical, sweeping-things-under-the-rug kind of nonsense,” said Dr. Ivan Oransky, co-founder of Retraction Watch, which keeps a database of 47,000-plus retracted papers. “This is not good for the scientific record, to put it mildly.”

Susan C. Beachy contributed research.

Benjamin Mueller reports on health and medicine. He was previously a U.K. correspondent in London and a police reporter in New York.


Shots - Health News
Reproductive rights in America

Research at the heart of a federal case against the abortion pill has been retracted.

Selena Simmons-Duffin


The Supreme Court will hear the case against the abortion pill mifepristone on March 26. It's part of a two-drug regimen with misoprostol for abortions in the first 10 weeks of pregnancy. Anna Moneymaker/Getty Images

A scientific paper that raised concerns about the safety of the abortion pill mifepristone was retracted by its publisher this week. The study was cited three times by a federal judge who ruled against mifepristone last spring. That case, which could limit access to mifepristone throughout the country, will soon be heard in the Supreme Court.

The now retracted study used Medicaid claims data to track E.R. visits by patients in the month after having an abortion. The study found a much higher rate of complications than similar studies that have examined abortion safety.

Sage, the publisher of the journal, retracted the study on Monday along with two other papers, explaining in a statement that "expert reviewers found that the studies demonstrate a lack of scientific rigor that invalidates or renders unreliable the authors' conclusions."

It also noted that most of the authors on the paper worked for the Charlotte Lozier Institute, the research arm of anti-abortion lobbying group Susan B. Anthony Pro-Life America, and that one of the original peer reviewers had also worked for the Lozier Institute.

The Sage journal, Health Services Research and Managerial Epidemiology, published all three research articles, which are still available online along with the retraction notice. In an email to NPR, a spokesperson for Sage wrote that the process leading to the retractions "was thorough, fair, and careful."

The lead author on the paper, James Studnicki, fiercely defends his work. "Sage is targeting us because we have been successful for a long period of time," he says on a video posted online this week. He asserts that the retraction has "nothing to do with real science and has everything to do with a political assassination of science."

He says that because the study's findings have been cited in legal cases like the one challenging the abortion pill, "we have become visible – people are quoting us. And for that reason, we are dangerous, and for that reason, they want to cancel our work," Studnicki says in the video.

In an email to NPR, a spokesperson for the Charlotte Lozier Institute said that they "will be taking appropriate legal action."

Role in abortion pill legal case

Anti-abortion rights groups, including a group of doctors, sued the federal Food and Drug Administration in 2022 over the approval of mifepristone, which is part of a two-drug regimen used in most medication abortions. The pill has been on the market for over 20 years and is used in more than half of abortions nationally. The FDA stands by its research finding that adverse events from mifepristone are extremely rare.

Judge Matthew Kacsmaryk, the district court judge who initially ruled on the case, pointed to the now-retracted study to support the idea that the anti-abortion rights physicians suing the FDA had the right to do so. "The associations' members have standing because they allege adverse events from chemical abortion drugs can overwhelm the medical system and place 'enormous pressure and stress' on doctors during emergencies and complications," he wrote in his decision, citing Studnicki. He ruled that mifepristone should be pulled from the market nationwide, although his decision never took effect.


Matthew Kacsmaryk at his confirmation hearing for the federal bench in 2017. AP

Kacsmaryk is a Trump appointee who was a vocal abortion opponent before becoming a federal judge.

"I don't think he would view the retraction as delegitimizing the research," says Mary Ziegler, a law professor and expert on the legal history of abortion at U.C. Davis. "There's been so much polarization about what the reality of abortion is on the right that I'm not sure how much a retraction would affect his reasoning."

Ziegler also doubts the retractions will alter much in the Supreme Court case, given its conservative majority. "We've already seen, when it comes to abortion, that the court has a propensity to look at the views of experts that support the results it wants," she says. The decision that overturned Roe v. Wade is an example, she says. "The majority [opinion] relied pretty much exclusively on scholars with some ties to pro-life activism and didn't really cite anybody else even or really even acknowledge that there was a majority scholarly position or even that there was meaningful disagreement on the subject."

In the mifepristone case, "there's a lot of supposition and speculation" in the argument about who has standing to sue, she explains. "There's a probability that people will take mifepristone and then there's a probability that they'll get complications and then there's a probability that they'll get treatment in the E.R. and then there's a probability that they'll encounter physicians with certain objections to mifepristone. So the question is, if this [retraction] knocks out one leg of the stool, does that somehow affect how the court is going to view standing? I imagine not."

It's impossible to know who will win the Supreme Court case, but Ziegler thinks that this retraction probably won't sway the outcome either way. "If the court is skeptical of standing because of all these aforementioned weaknesses, this is just more fuel to that fire," she says. "It's not as if this were an airtight case for standing and this was a potentially game-changing development."

Oral arguments for the case, Alliance for Hippocratic Medicine v. FDA, are scheduled for March 26 at the Supreme Court. A decision is expected by summer. Mifepristone remains available while the legal process continues.


8 facts about Black Americans and the news


Black Americans have long had a complex relationship with the news media. In 1967, the Kerner Commission – a panel established by President Lyndon Johnson to investigate the causes of more than 150 urban riots in the United States – sharply criticized the media’s treatment of Black Americans.

More than 50 years later, there is ongoing discussion of many of the themes raised in the commission’s report. Amid these discussions, here are some key facts about Black Americans’ experiences with and attitudes toward the news, based on recent Pew Research Center surveys:

This analysis is based on several recent Pew Research Center surveys, including our 2023 study on Black Americans’ experiences with news. Details on the methodologies of these surveys, including field dates and sample sizes, can be found by following the links in this analysis.

Pew Research Center is a subsidiary of The Pew Charitable Trusts, its primary funder. This is the latest report in Pew Research Center’s ongoing investigation of the state of news, information and journalism in the digital age, a research program funded by The Pew Charitable Trusts, with generous support from the John S. and James L. Knight Foundation.

Black Americans are more likely than other racial and ethnic groups in the U.S. to get their news on TV. About three-quarters of Black adults (76%) say they at least sometimes get news on TV, compared with 62% of both White and Hispanic adults and 52% of Asian adults. And 38% of Black Americans say they prefer to get their news on TV over any other platform – again higher than people of other racial or ethnic backgrounds.

Black Americans are more likely than White Americans to get news from certain social media sites. The shares of Black adults who say they regularly get news on YouTube (41%), Facebook (36%), Instagram (27%) and TikTok (22%) are each higher than the shares of White Americans who get news on these platforms. Like Americans overall, Black Americans get news from a wide variety of sources in addition to social media, including other digital platforms such as news websites and search engines.

Black Americans see issues with the way Black people are covered in the news, according to a 2023 survey. For example, 63% of Black adults say the news they see or hear about Black people is often more negative than the news about other racial and ethnic groups. And eight-in-ten say they at least sometimes see or hear news coverage about Black people that is racist or racially insensitive, including 39% who see such coverage extremely or fairly often.

We also asked Black Americans how likely it is that Black people will be covered fairly in the news in their lifetime. A relatively small share – 14% – see this as extremely or very likely.

A pie chart showing that most Black Americans say news about Black people is more negative than news about other groups.

Black Americans see a number of steps that could improve news coverage of Black people. For example, most Black adults say it is extremely or very important that journalists and reporters cover all sides of the issues (76%) and understand the history of the issues (73%) when covering Black people. Many also say it is crucial for journalists to personally engage with the people they cover (59%) and to advocate for Black people (48%).

A bar chart showing that Black Americans say journalists should cover all sides, understand history when they cover Black people.

Among Black Americans who say they at least sometimes see racist or racially insensitive news coverage of Black people, 64% say educating all journalists about issues impacting Black people would be highly effective in making coverage more fair. Substantial shares also say more representation would help – such as including more Black people as sources in news stories or hiring them at news outlets for leadership roles or as journalists and reporters.

Black Americans tend to be underrepresented in U.S. newsrooms. Just 6% of reporting journalists are Black, according to a 2022 Pew Research Center survey of U.S. journalists. That is well below the Black share of U.S. workers (11%) and adults overall (12%).

About half of all U.S. journalists (52%) say their news organization does not have enough diversity when it comes to race and ethnicity. That is much larger than the shares of journalists who say the same about gender, sexual orientation and other aspects of diversity.

There is more proportional representation by race and ethnicity in local TV newsrooms, according to the Radio Television Digital News Association. It found in 2022 that 13% of local TV newsroom employees are African American. However, only 6% of news directors – the leaders of such newsrooms – are Black.

Many Black Americans say it’s important to get news about race and racial inequality from Black journalists. But fewer feel this way when it comes to news in general. Four-in-ten Black Americans say it’s extremely or very important that the news they get about race and racial inequality comes from Black journalists. A much smaller share (14%) say it’s highly important that the news they get in general – regardless of topic – comes from Black reporters.

A bar chart showing that 40% of Black Americans say it’s crucial for news about race to come from Black reporters, but far fewer say the same about news in general.

Similarly, just 15% of Black Americans say that whether a journalist is Black is extremely or very important to deciding if a news story in general is trustworthy. Black Americans are much more likely to see other factors as highly important when assessing the trustworthiness of a news story. These factors include the sources cited in the story, the news outlet that covers the story, whether the story is reported by multiple outlets, and their own gut instinct.

About a quarter of Black Americans (24%) say they extremely or fairly often get news from Black news outlets. These outlets, which have a long history in the U.S., are defined as those created by Black people and focused on providing news and information specifically to Black audiences. Another 40% of Black adults say they sometimes get news from such outlets.

A pie chart showing that about a quarter of Black adults often get news from Black news outlets.

Black Americans are more likely than other racial and ethnic groups to feel that the news media misunderstand them because of their race or some other demographic trait. Roughly similar portions of Americans who are White (61%), Black (58%) and Hispanic (55%) say the news media misunderstand them, but they cite markedly different reasons for this misunderstanding.

Among Black adults who feel this way, about a third (34%) say that what news organizations misunderstand about them most is their personal characteristics. This is far higher than the 10% of White adults and 17% of Hispanic adults who say the same. (The survey included Asian Americans, but the sample size for this group is too small to analyze separately.)

Note: This is an update of a post originally published on Aug. 7, 2019.

About Pew Research Center: Pew Research Center is a nonpartisan fact tank that informs the public about the issues, attitudes and trends shaping the world. It conducts public opinion polling, demographic research, media content analysis and other empirical social science research. Pew Research Center does not take policy positions. It is a subsidiary of The Pew Charitable Trusts.
