Random Assignment in Psychology: Definition & Examples


In psychology, random assignment refers to the practice of allocating participants to different experimental groups in a study in a completely unbiased way, ensuring each participant has an equal chance of being assigned to any group.

In experimental research, random assignment, or random placement, organizes participants from your sample into different groups using randomization. 

Random assignment uses chance procedures to ensure that each participant has an equal opportunity of being assigned to either a control or experimental group.

The control group does not receive the treatment in question, whereas the experimental group does receive the treatment.

When using random assignment, neither the researcher nor the participant can choose the group to which the participant is assigned. This ensures that any differences between and within the groups are not systematic at the onset of the study. 

In a study to test the success of a weight-loss program, investigators randomly assigned a pool of participants to one of two groups.

Group A participants participated in the weight-loss program for 10 weeks and took a class where they learned about the benefits of healthy eating and exercise.

Group B participants read a 200-page book that explained the benefits of weight loss.

The researchers found that those who participated in the program and took the class were more likely to lose weight than those in the other group that received only the book.

Importance 

Random assignment helps ensure that the groups in an experiment are comparable before the independent variable is applied.

In experiments, researchers manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables. Random assignment increases the likelihood that the treatment groups are equivalent at the outset of a study.

Thus, any changes that result from the independent variable can be assumed to be a result of the treatment of interest. This is particularly important for eliminating sources of bias and strengthening the internal validity of an experiment.

Random assignment is the best method for inferring a causal relationship between a treatment and an outcome.

Random Selection vs. Random Assignment 

Random selection (also called probability sampling or random sampling) is a way of randomly selecting members of a population to be included in your study.

On the other hand, random assignment is a way of sorting the sample participants into control and treatment groups. 

Random selection ensures that everyone in the population has an equal chance of being selected for the study. Once the pool of participants has been chosen, experimenters use random assignment to assign participants into groups. 

Random assignment is typically used in between-subjects experimental designs, while random selection can be used in a variety of study designs.

Random Assignment vs Random Sampling

Random sampling refers to selecting participants from a population so that each individual has an equal chance of being chosen. This method enhances the representativeness of the sample.

Random assignment, on the other hand, is used in experimental designs once participants are selected. It involves allocating these participants to different experimental groups or conditions randomly.

This helps ensure that any differences in results across groups are due to manipulating the independent variable, not preexisting differences among participants.

When to Use Random Assignment

Random assignment is used in experiments with a between-groups or independent measures design.

In these research designs, researchers will manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables.

There is usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable at the onset of the study.

How to Use Random Assignment

There are a variety of ways to assign participants into study groups randomly. Here are a handful of popular methods: 

  • Random Number Generator: Give each member of the sample a unique number, then use a computer program to randomly generate a number from the list for each group (see the code sketch after this list).
  • Lottery: Give each member of the sample a unique number. Place all numbers in a hat or bucket and draw numbers at random for each group.
  • Flipping a Coin: Flip a coin for each participant to decide whether they will be in the control group or the experimental group (this method can only be used when you have just two groups).
  • Roll a Die: For each person on the list, roll a die to decide which group they will be in. For example, rolling a 1, 2, or 3 places them in the control group, while rolling a 4, 5, or 6 places them in the experimental group.
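To make the random-number-generator method concrete, here is a minimal sketch in Python. The participant labels, group names, and seed are invented for illustration; the function shuffles the sample and then deals participants into the groups so that group sizes stay as equal as possible:

```python
import random

def randomly_assign(participants, groups=("control", "experimental"), seed=None):
    """Shuffle the sample, then deal participants into the groups
    round-robin so that group sizes stay as equal as possible."""
    rng = random.Random(seed)
    shuffled = list(participants)   # copy so the original list is untouched
    rng.shuffle(shuffled)
    assignment = {group: [] for group in groups}
    for i, person in enumerate(shuffled):
        assignment[groups[i % len(groups)]].append(person)
    return assignment

sample = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]
print(randomly_assign(sample, seed=42))
```

Seeding the generator makes the assignment reproducible, which can be useful when documenting the randomization procedure.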

When Is Random Assignment Not Used?

  • When it is not ethically permissible: Randomization is only ethical if the researcher has no evidence that one treatment is superior to the other or that one treatment might have harmful side effects. 
  • When answering non-causal questions : If the researcher is just interested in predicting the probability of an event, the causal relationship between the variables is not important and observational designs would be more suitable than random assignment. 
  • When studying the effect of variables that cannot be manipulated: Some risk factors cannot be manipulated and so it would not make any sense to study them in a randomized trial. For example, we cannot randomly assign participants into categories based on age, gender, or genetic factors.

Drawbacks of Random Assignment

While randomization ensures an unbiased assignment of participants to groups, it does not guarantee that the groups will be equal: extraneous variables may still differ between groups simply by chance.

Thus, researchers cannot produce perfectly equal groups in any single study. Differences between the treatment and control groups might still exist, and the results of a randomized trial may occasionally be misleading. This is an accepted limitation of the method.

Building scientific evidence is a long and continuous process, and group differences will tend to even out in the long run when data from many studies are aggregated in a meta-analysis.

Additionally, external validity (i.e., the extent to which the researcher can generalize the results of the study to the larger population) can be compromised in studies that rely on random assignment.

Random assignment is challenging to implement outside of controlled laboratory conditions and might not represent what would happen in the real world at the population level. 

Random assignment can also be more costly than a simple observational study, in which an investigator merely observes events without intervening.

Randomization also can be time-consuming and challenging, especially when participants refuse to receive the assigned treatment or do not adhere to recommendations. 

What is the difference between random sampling and random assignment?

Random sampling refers to randomly selecting a sample of participants from a population. Random assignment refers to randomly assigning participants to treatment groups from the selected sample.

Does random assignment increase internal validity?

Yes, random assignment helps ensure that there are no systematic differences between the participants in each group, enhancing the study’s internal validity.

Does random assignment reduce sampling error?

With random assignment, participants have an equal chance of being assigned to either a control group or an experimental group, which makes the groups, in theory, equivalent at the outset.

Random assignment does not eliminate sampling error, however, because a sample only approximates the population from which it is drawn; random sampling, rather than random assignment, is the technique that minimizes sampling error.

When is random assignment not possible?

Random assignment is not possible when the experimenters cannot control the treatment or independent variable.

For example, if you want to compare how men and women perform on a test, you cannot randomly assign subjects to these groups.

Participants are not randomly assigned to different groups in this study, but instead assigned based on their characteristics.

Does random assignment eliminate confounding variables?

Largely, yes: random assignment minimizes the influence of confounding variables on the treatment by distributing them at random among the study groups. Randomization breaks any systematic relationship between a confounding variable and the treatment, although imbalances can still arise by chance.

Why is random assignment of participants to treatment conditions in an experiment used?

Random assignment is used to ensure that all groups are comparable at the start of a study. This allows researchers to conclude that the outcomes of the study can be attributed to the intervention at hand and to rule out alternative explanations for study results.

Further Reading

  • Bogomolnaia, A., & Moulin, H. (2001). A new solution to the random assignment problem. Journal of Economic Theory, 100(2), 295-328.
  • Krause, M. S., & Howard, K. I. (2003). What random assignment does and does not do. Journal of Clinical Psychology, 59(7), 751-766.



The Definition of Random Assignment According to Psychology


Random assignment refers to the use of chance procedures in psychology experiments to ensure that each participant has the same opportunity to be assigned to any given group, eliminating potential bias at the outset of the experiment. Participants are randomly assigned to different groups, such as the treatment group versus the control group. In clinical research, randomized clinical trials are known as the gold standard for meaningful results.

Simple random assignment techniques might involve tactics such as flipping a coin, drawing names out of a hat, rolling dice, or assigning random numbers to a list of participants. It is important to note that random assignment differs from random selection.

While random selection refers to how participants are randomly chosen from a target population as representatives of that population, random assignment refers to how those chosen participants are then assigned to experimental groups.

Random Assignment In Research

To determine if changes in one variable will cause changes in another variable, psychologists must perform an experiment. Random assignment is a critical part of the experimental design that helps ensure the reliability of the study outcomes.

Researchers often begin by forming a testable hypothesis predicting that one variable of interest will have some predictable impact on another variable.

The variable that the experimenters will manipulate in the experiment is known as the independent variable, while the variable that they will then measure for different outcomes is known as the dependent variable. While there are different ways to look at relationships between variables, an experiment is the best way to get a clear idea of whether there is a cause-and-effect relationship between two or more variables.

Once researchers have formulated a hypothesis, conducted background research, and chosen an experimental design, it is time to find participants for their experiment. How exactly do researchers decide who will be part of an experiment? As mentioned previously, this is often accomplished through something known as random selection.

Random Selection

In order to generalize the results of an experiment to a larger group, it is important to choose a sample that is representative of the qualities found in that population. For example, if the total population is 60% female and 40% male, then the sample should reflect those same percentages.

Choosing a representative sample is often accomplished by randomly picking people from the population to be participants in a study. Random selection means that everyone in the group stands an equal chance of being chosen to minimize any bias. Once a pool of participants has been selected, it is time to assign them to groups.

By randomly assigning the participants into groups, the experimenters can be fairly sure that each group will have the same characteristics before the independent variable is applied.

Participants might be randomly assigned to the control group, which does not receive the treatment in question. The control group may receive a placebo or receive the standard treatment. Participants may also be randomly assigned to the experimental group, which receives the treatment of interest. In larger studies, there can be multiple treatment groups for comparison.

There are simple methods of random assignment, like rolling a die. However, more complex techniques involving random number generators can remove human error from the process.

There can also be random assignment to groups with pre-established rules or parameters. For example, if you want to have an equal number of men and women in each of your study groups, you might separate your sample into two groups (by sex) before randomly assigning each of those groups into the treatment group and control group.
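A minimal sketch of that stratified approach, assuming a simple two-group design and invented participant labels (this is an illustration, not code from any published study):

```python
import random

def stratified_assignment(strata, seed=None):
    """Shuffle within each stratum (e.g., women and men), then split each
    stratum evenly between the treatment and control groups."""
    rng = random.Random(seed)
    assignment = {"treatment": [], "control": []}
    for stratum_members in strata.values():
        members = list(stratum_members)
        rng.shuffle(members)
        half = len(members) // 2
        assignment["treatment"].extend(members[:half])
        assignment["control"].extend(members[half:])
    return assignment

sample = {"women": ["W1", "W2", "W3", "W4"],
          "men":   ["M1", "M2", "M3", "M4"]}
print(stratified_assignment(sample, seed=3))
```

Because the split happens within each stratum, both groups end up with the same number of women and the same number of men, while the assignment within each stratum remains random.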

Random assignment is essential because it increases the likelihood that the groups are the same at the outset. With all characteristics being equal between groups, other than the application of the independent variable, any differences found between group outcomes can be more confidently attributed to the effect of the intervention.

Example of Random Assignment

Imagine that a researcher is interested in learning whether or not drinking caffeinated beverages prior to an exam will improve test performance. After randomly selecting a pool of participants, each person is randomly assigned to either the control group or the experimental group.

The participants in the control group consume a placebo drink prior to the exam that does not contain any caffeine. Those in the experimental group, on the other hand, consume a caffeinated beverage before taking the test.

Participants in both groups then take the test, and the researcher compares the results to determine if the caffeinated beverage had any impact on test performance.

A Word From Verywell

Random assignment plays an important role in the psychology research process. Not only does this process help eliminate possible sources of bias, but it also makes it easier to generalize the results of a tested sample of participants to a larger population.

Random assignment helps ensure that members of each group in the experiment are the same, which means that the groups are also likely more representative of what is present in the larger population of interest. Through the use of this technique, psychology researchers are able to study complex phenomena and contribute to our understanding of the human mind and behavior.



Chapter 6: Experimental Research

6.2 Experimental Design

Learning Objectives

  • Explain the difference between between-subjects and within-subjects experiments, list some of the pros and cons of each approach, and decide which approach to use to answer a particular research question.
  • Define random assignment, distinguish it from random sampling, explain its purpose in experimental research, and use some simple strategies to implement it.
  • Define what a control condition is, explain its purpose in research on treatment effectiveness, and describe some alternative types of control conditions.
  • Define several types of carryover effect, give examples of each, and explain how counterbalancing helps to deal with them.

In this section, we look at some different ways to design an experiment. The primary distinction we will make is between approaches in which each participant experiences one level of the independent variable and approaches in which each participant experiences all levels of the independent variable. The former are called between-subjects experiments and the latter are called within-subjects experiments.

Between-Subjects Experiments

In a between-subjects experiment, each participant is tested in only one condition. For example, a researcher with a sample of 100 college students might assign half of them to write about a traumatic event and the other half to write about a neutral event. Or a researcher with a sample of 60 people with severe agoraphobia (fear of open spaces) might assign 20 of them to receive each of three different treatments for that disorder. It is essential in a between-subjects experiment that the researcher assign participants to conditions so that the different groups are, on average, highly similar to each other. Those in a trauma condition and a neutral condition, for example, should include a similar proportion of men and women, and they should have similar average intelligence quotients (IQs), similar average levels of motivation, similar average numbers of health problems, and so on. This is a matter of controlling these extraneous participant variables across conditions so that they do not become confounding variables.

Random Assignment

The primary way that researchers accomplish this kind of control of extraneous variables across conditions is called random assignment, which means using a random process to decide which participants are tested in which conditions. Do not confuse random assignment with random sampling. Random sampling is a method for selecting a sample from a population, and it is rarely used in psychological research. Random assignment is a method for assigning participants in a sample to the different conditions, and it is an important element of all experimental research in psychology and other fields too.

In its strictest sense, random assignment should meet two criteria. One is that each participant has an equal chance of being assigned to each condition (e.g., a 50% chance of being assigned to each of two conditions). The second is that each participant is assigned to a condition independently of other participants. Thus one way to assign participants to two conditions would be to flip a coin for each one. If the coin lands heads, the participant is assigned to Condition A, and if it lands tails, the participant is assigned to Condition B. For three conditions, one could use a computer to generate a random integer from 1 to 3 for each participant. If the integer is 1, the participant is assigned to Condition A; if it is 2, the participant is assigned to Condition B; and if it is 3, the participant is assigned to Condition C. In practice, a full sequence of conditions—one for each participant expected to be in the experiment—is usually created ahead of time, and each new participant is assigned to the next condition in the sequence as he or she is tested. When the procedure is computerized, the computer program often handles the random assignment.
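As a small illustration of strict random assignment (our own sketch, with an invented sample of 20 participants), each participant receives an independent 50/50 draw; note that the two conditions typically end up with unequal numbers:

```python
import random

# Strict random assignment: an independent "coin flip" for each participant.
# Group sizes can end up unequal, which motivates block randomization below.
rng = random.Random()
conditions = [rng.choice(["A", "B"]) for _ in range(20)]
print(conditions)
print("A:", conditions.count("A"), "B:", conditions.count("B"))
```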

One problem with coin flipping and other strict procedures for random assignment is that they are likely to result in unequal sample sizes in the different conditions. Unequal sample sizes are generally not a serious problem, and you should never throw away data you have already collected to achieve equal sample sizes. However, for a fixed number of participants, it is statistically most efficient to divide them into equal-sized groups. It is standard practice, therefore, to use a kind of modified random assignment that keeps the number of participants in each group as similar as possible. One approach is block randomization. In block randomization, all the conditions occur once in the sequence before any of them is repeated. Then they all occur again before any of them is repeated again. Within each of these “blocks,” the conditions occur in a random order. Again, the sequence of conditions is usually generated before any participants are tested, and each new participant is assigned to the next condition in the sequence. Table 6.2 “Block Randomization Sequence for Assigning Nine Participants to Three Conditions” shows such a sequence for assigning nine participants to three conditions. The Research Randomizer website (http://www.randomizer.org) will generate block randomization sequences for any number of participants and conditions. Again, when the procedure is computerized, the computer program often handles the block randomization.

Table 6.2 Block Randomization Sequence for Assigning Nine Participants to Three Conditions
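A sequence like the one in Table 6.2 can be generated with a short script. The sketch below is our own Python illustration, not the Research Randomizer's code; it fills the sequence block by block, shuffling the conditions within each block:

```python
import random

def block_randomization(conditions, n_participants, seed=None):
    """Build an assignment sequence in which every condition occurs once,
    in a random order, before any condition repeats."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = list(conditions)
        rng.shuffle(block)          # random order within this block
        sequence.extend(block)
    return sequence[:n_participants]

# Nine participants and three conditions, as in Table 6.2.
print(block_randomization(["A", "B", "C"], 9))
# A different sequence each run, e.g. ['B', 'A', 'C', 'C', 'A', 'B', 'A', 'C', 'B']
```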

Random assignment is not guaranteed to control all extraneous variables across conditions. It is always possible that just by chance, the participants in one condition might turn out to be substantially older, less tired, more motivated, or less depressed on average than the participants in another condition. However, there are some reasons that this is not a major concern. One is that random assignment works better than one might expect, especially for large samples. Another is that the inferential statistics that researchers use to decide whether a difference between groups reflects a difference in the population take the “fallibility” of random assignment into account. Yet another reason is that even if random assignment does result in a confounding variable and therefore produces misleading results, this is likely to be detected when the experiment is replicated. The upshot is that random assignment to conditions—although not infallible in terms of controlling extraneous variables—is always considered a strength of a research design.

Treatment and Control Conditions

Between-subjects experiments are often used to determine whether a treatment works. In psychological research, a treatment is any intervention meant to change people’s behavior for the better. This includes psychotherapies and medical treatments for psychological disorders but also interventions designed to improve learning, promote conservation, reduce prejudice, and so on. To determine whether a treatment works, participants are randomly assigned to either a treatment condition, in which they receive the treatment, or a control condition, in which they do not receive the treatment. If participants in the treatment condition end up better off than participants in the control condition—for example, they are less depressed, learn faster, conserve more, express less prejudice—then the researcher can conclude that the treatment works. In research on the effectiveness of psychotherapies and medical treatments, this type of experiment is often called a randomized clinical trial.

There are different types of control conditions. In a no-treatment control condition, participants receive no treatment whatsoever. One problem with this approach, however, is the existence of placebo effects. A placebo is a simulated treatment that lacks any active ingredient or element that should make it effective, and a placebo effect is a positive effect of such a treatment. Many folk remedies that seem to work—such as eating chicken soup for a cold or placing soap under the bedsheets to stop nighttime leg cramps—are probably nothing more than placebos. Although placebo effects are not well understood, they are probably driven primarily by people’s expectations that they will improve. Having the expectation to improve can result in reduced stress, anxiety, and depression, which can alter perceptions and even improve immune system functioning (Price, Finniss, & Benedetti, 2008).

Placebo effects are interesting in their own right (see Note 6.28 “The Powerful Placebo”), but they also pose a serious problem for researchers who want to determine whether a treatment works. Figure 6.2 “Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions” shows some hypothetical results in which participants in a treatment condition improved more on average than participants in a no-treatment control condition. If these conditions (the two leftmost bars in Figure 6.2 “Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions”) were the only conditions in this experiment, however, one could not conclude that the treatment worked. It could be instead that participants in the treatment group improved more because they expected to improve, while those in the no-treatment control condition did not.

Figure 6.2 Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions


Fortunately, there are several solutions to this problem. One is to include a placebo control condition, in which participants receive a placebo that looks much like the treatment but lacks the active ingredient or element thought to be responsible for the treatment’s effectiveness. When participants in a treatment condition take a pill, for example, then those in a placebo control condition would take an identical-looking pill that lacks the active ingredient in the treatment (a “sugar pill”). In research on psychotherapy effectiveness, the placebo might involve going to a psychotherapist and talking in an unstructured way about one’s problems. The idea is that if participants in both the treatment and the placebo control groups expect to improve, then any improvement in the treatment group over and above that in the placebo control group must have been caused by the treatment and not by participants’ expectations. This is what is shown by a comparison of the two outer bars in Figure 6.2 “Hypothetical Results From a Study Including Treatment, No-Treatment, and Placebo Conditions”.

Of course, the principle of informed consent requires that participants be told that they will be assigned to either a treatment or a placebo control condition—even though they cannot be told which until the experiment ends. In many cases the participants who had been in the control condition are then offered an opportunity to have the real treatment. An alternative approach is to use a waitlist control condition, in which participants are told that they will receive the treatment but must wait until the participants in the treatment condition have already received it. This allows researchers to compare participants who have received the treatment with participants who are not currently receiving it but who still expect to improve (eventually). A final solution to the problem of placebo effects is to leave out the control condition completely and compare any new treatment with the best available alternative treatment. For example, a new treatment for simple phobia could be compared with standard exposure therapy. Because participants in both conditions receive a treatment, their expectations about improvement should be similar. This approach also makes sense because once there is an effective treatment, the interesting question about a new treatment is not simply “Does it work?” but “Does it work better than what is already available?”

The Powerful Placebo

Many people are not surprised that placebos can have a positive effect on disorders that seem fundamentally psychological, including depression, anxiety, and insomnia. However, placebos can also have a positive effect on disorders that most people think of as fundamentally physiological. These include asthma, ulcers, and warts (Shapiro & Shapiro, 1999). There is even evidence that placebo surgery—also called “sham surgery”—can be as effective as actual surgery.

Medical researcher J. Bruce Moseley and his colleagues conducted a study on the effectiveness of two arthroscopic surgery procedures for osteoarthritis of the knee (Moseley et al., 2002). The control participants in this study were prepped for surgery, received a tranquilizer, and even received three small incisions in their knees. But they did not receive the actual arthroscopic surgical procedure. The surprising result was that all participants improved in terms of both knee pain and function, and the sham surgery group improved just as much as the treatment groups. According to the researchers, “This study provides strong evidence that arthroscopic lavage with or without débridement [the surgical procedures used] is not better than and appears to be equivalent to a placebo procedure in improving knee pain and self-reported function” (p. 85).

Research has shown that patients with osteoarthritis of the knee who receive a “sham surgery” experience reductions in pain and improvement in knee function similar to those of patients who receive a real surgery. (Image: Army Medicine – Surgery – CC BY 2.0.)

Within-Subjects Experiments

In a within-subjects experiment, each participant is tested under all conditions. Consider an experiment on the effect of a defendant’s physical attractiveness on judgments of his guilt. Again, in a between-subjects experiment, one group of participants would be shown an attractive defendant and asked to judge his guilt, and another group of participants would be shown an unattractive defendant and asked to judge his guilt. In a within-subjects experiment, however, the same group of participants would judge the guilt of both an attractive and an unattractive defendant.

The primary advantage of this approach is that it provides maximum control of extraneous participant variables. Participants in all conditions have the same mean IQ, same socioeconomic status, same number of siblings, and so on—because they are the very same people. Within-subjects experiments also make it possible to use statistical procedures that remove the effect of these extraneous participant variables on the dependent variable and therefore make the data less “noisy” and the effect of the independent variable easier to detect. We will look more closely at this idea later in the book.

Carryover Effects and Counterbalancing

The primary disadvantage of within-subjects designs is that they can result in carryover effects. A carryover effect is an effect of being tested in one condition on participants’ behavior in later conditions. One type of carryover effect is a practice effect, where participants perform a task better in later conditions because they have had a chance to practice it. Another type is a fatigue effect, where participants perform a task worse in later conditions because they become tired or bored. Being tested in one condition can also change how participants perceive stimuli or interpret their task in later conditions. This is called a context effect. For example, an average-looking defendant might be judged more harshly when participants have just judged an attractive defendant than when they have just judged an unattractive defendant. Within-subjects experiments also make it easier for participants to guess the hypothesis. For example, a participant who is asked to judge the guilt of an attractive defendant and then is asked to judge the guilt of an unattractive defendant is likely to guess that the hypothesis is that defendant attractiveness affects judgments of guilt. This could lead the participant to judge the unattractive defendant more harshly because he thinks this is what he is expected to do. Or it could make participants judge the two defendants similarly in an effort to be “fair.”

Carryover effects can be interesting in their own right. (Does the attractiveness of one person depend on the attractiveness of other people that we have seen recently?) But when they are not the focus of the research, carryover effects can be problematic. Imagine, for example, that participants judge the guilt of an attractive defendant and then judge the guilt of an unattractive defendant. If they judge the unattractive defendant more harshly, this might be because of his unattractiveness. But it could be instead that they judge him more harshly because they are becoming bored or tired. In other words, the order of the conditions is a confounding variable. The attractive condition is always the first condition and the unattractive condition the second. Thus any difference between the conditions in terms of the dependent variable could be caused by the order of the conditions and not the independent variable itself.

There is a solution to the problem of order effects, however, that can be used in many situations. It is counterbalancing, which means testing different participants in different orders. For example, some participants would be tested in the attractive defendant condition followed by the unattractive defendant condition, and others would be tested in the unattractive condition followed by the attractive condition. With three conditions, there would be six different orders (ABC, ACB, BAC, BCA, CAB, and CBA), so some participants would be tested in each of the six orders. With counterbalancing, participants are assigned to orders randomly, using the techniques we have already discussed. Thus random assignment plays an important role in within-subjects designs just as in between-subjects designs. Here, instead of randomly assigning to conditions, they are randomly assigned to different orders of conditions. In fact, it can safely be said that if a study does not involve random assignment in one form or another, it is not an experiment.
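To make this concrete, the sketch below (our own illustration; participant labels are invented) lists all possible orders of three conditions and randomly assigns each participant to one of them:

```python
import itertools
import random

def assign_orders(conditions, participants, seed=None):
    """List every possible order of the conditions, then randomly assign
    each participant to one of those orders."""
    rng = random.Random(seed)
    orders = list(itertools.permutations(conditions))  # 6 orders for A, B, C
    return {person: rng.choice(orders) for person in participants}

people = ["P1", "P2", "P3", "P4", "P5", "P6"]
for person, order in assign_orders(["A", "B", "C"], people).items():
    print(person, "->", "".join(order))
```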

There are two ways to think about what counterbalancing accomplishes. One is that it controls the order of conditions so that it is no longer a confounding variable. Instead of the attractive condition always being first and the unattractive condition always being second, the attractive condition comes first for some participants and second for others. Likewise, the unattractive condition comes first for some participants and second for others. Thus any overall difference in the dependent variable between the two conditions cannot have been caused by the order of conditions. A second way to think about what counterbalancing accomplishes is that if there are carryover effects, it makes it possible to detect them. One can analyze the data separately for each order to see whether it had an effect.

When 9 Is “Larger” Than 221

Researcher Michael Birnbaum has argued that the lack of context provided by between-subjects designs is often a bigger problem than the context effects created by within-subjects designs. To demonstrate this, he asked one group of participants to rate how large the number 9 was on a 1-to-10 rating scale and another group to rate how large the number 221 was on the same 1-to-10 rating scale (Birnbaum, 1999). Participants in this between-subjects design gave the number 9 a mean rating of 5.13 and the number 221 a mean rating of 3.10. In other words, they rated 9 as larger than 221! According to Birnbaum, this is because participants spontaneously compared 9 with other one-digit numbers (in which case it is relatively large) and compared 221 with other three-digit numbers (in which case it is relatively small).

Simultaneous Within-Subjects Designs

So far, we have discussed an approach to within-subjects designs in which participants are tested in one condition at a time. There is another approach, however, that is often used when participants make multiple responses in each condition. Imagine, for example, that participants judge the guilt of 10 attractive defendants and 10 unattractive defendants. Instead of having people make judgments about all 10 defendants of one type followed by all 10 defendants of the other type, the researcher could present all 20 defendants in a sequence that mixed the two types. The researcher could then compute each participant’s mean rating for each type of defendant. Or imagine an experiment designed to see whether people with social anxiety disorder remember negative adjectives (e.g., “stupid,” “incompetent”) better than positive ones (e.g., “happy,” “productive”). The researcher could have participants study a single list that includes both kinds of words and then have them try to recall as many words as possible. The researcher could then count the number of each type of word that was recalled. There are many ways to determine the order in which the stimuli are presented, but one common way is to generate a different random order for each participant.
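Here is a rough sketch of the “different random order for each participant” approach, using invented stimulus labels for the defendant example:

```python
import random

# Invented stimulus labels: 10 attractive and 10 unattractive defendants.
attractive = [f"attractive_{i}" for i in range(1, 11)]
unattractive = [f"unattractive_{i}" for i in range(1, 11)]

def presentation_order():
    """Mix both defendant types into one sequence and shuffle it, so each
    participant sees all 20 stimuli in their own random order."""
    stimuli = attractive + unattractive
    random.shuffle(stimuli)
    return stimuli

for participant in ["P1", "P2"]:
    print(participant, presentation_order()[:4], "...")
```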

Between-Subjects or Within-Subjects?

Almost every experiment can be conducted using either a between-subjects design or a within-subjects design. This means that researchers must choose between the two approaches based on their relative merits for the particular situation.

Between-subjects experiments have the advantage of being conceptually simpler and requiring less testing time per participant. They also avoid carryover effects without the need for counterbalancing. Within-subjects experiments have the advantage of controlling extraneous participant variables, which generally reduces noise in the data and makes it easier to detect a relationship between the independent and dependent variables.

A good rule of thumb, then, is that if it is possible to conduct a within-subjects experiment (with proper counterbalancing) in the time that is available per participant—and you have no serious concerns about carryover effects—this is probably the best option. If a within-subjects design would be difficult or impossible to carry out, then you should consider a between-subjects design instead. For example, if you were testing participants in a doctor’s waiting room or shoppers in line at a grocery store, you might not have enough time to test each participant in all conditions and therefore would opt for a between-subjects design. Or imagine you were trying to reduce people’s level of prejudice by having them interact with someone of another race. A within-subjects design with counterbalancing would require testing some participants in the treatment condition first and then in a control condition. But if the treatment works and reduces people’s level of prejudice, then they would no longer be suitable for testing in the control condition. This is true for many designs that involve a treatment meant to produce long-term change in participants’ behavior (e.g., studies testing the effectiveness of psychotherapy). Clearly, a between-subjects design would be necessary here.

Remember also that using one type of design does not preclude using the other type in a different study. There is no reason that a researcher could not use both a between-subjects design and a within-subjects design to answer the same research question. In fact, professional researchers often do exactly this.

Key Takeaways

  • Experiments can be conducted using either between-subjects or within-subjects designs. Deciding which to use in a particular situation requires careful consideration of the pros and cons of each approach.
  • Random assignment to conditions in between-subjects experiments or to orders of conditions in within-subjects experiments is a fundamental element of experimental research. Its purpose is to control extraneous variables so that they do not become confounding variables.
  • Experimental research on the effectiveness of a treatment requires both a treatment condition and a control condition, which can be a no-treatment control condition, a placebo control condition, or a waitlist control condition. Experimental treatments can also be compared with the best available alternative.

Discussion: For each of the following topics, list the pros and cons of a between-subjects and within-subjects design and decide which would be better.

  • You want to test the relative effectiveness of two training programs for running a marathon.
  • Using photographs of people as stimuli, you want to see if smiling people are perceived as more intelligent than people who are not smiling.
  • In a field experiment, you want to see if the way a panhandler is dressed (neatly vs. sloppily) affects whether or not passersby give him any money.
  • You want to see if concrete nouns (e.g., dog) are recalled better than abstract nouns (e.g., truth).

Discussion: Imagine that an experiment shows that participants who receive psychodynamic therapy for a dog phobia improve more than participants in a no-treatment control group. Explain a fundamental problem with this research design and at least two ways that it might be corrected.

Birnbaum, M. H. (1999). How to show that 9 > 221: Collect judgments in a between-subjects design. Psychological Methods, 4 , 243–249.

Moseley, J. B., O’Malley, K., Petersen, N. J., Menke, T. J., Brody, B. A., Kuykendall, D. H., … Wray, N. P. (2002). A controlled trial of arthroscopic surgery for osteoarthritis of the knee. The New England Journal of Medicine, 347 , 81–88.

Price, D. D., Finniss, D. G., & Benedetti, F. (2008). A comprehensive review of the placebo effect: Recent advances and current thought. Annual Review of Psychology, 59 , 565–590.

Shapiro, A. K., & Shapiro, E. (1999). The powerful placebo: From ancient priest to modern physician . Baltimore, MD: Johns Hopkins University Press.

  • Research Methods in Psychology. Provided by: University of Minnesota Libraries Publishing. Located at: http://open.lib.umn.edu/psychologyresearchmethods. License: CC BY-NC-SA: Attribution-NonCommercial-ShareAlike


Institution for Social and Policy Studies

Why Randomize?

About Randomized Field Experiments

Randomized field experiments allow researchers to scientifically measure the impact of an intervention on a particular outcome of interest.

What is a randomized field experiment?

In a randomized experiment, a study sample is divided into one group that will receive the intervention being studied (the treatment group) and another group that will not receive the intervention (the control group). For instance, a study sample might consist of all registered voters in a particular city. This sample will then be randomly divided into treatment and control groups. Perhaps 40% of the sample will be on a campaign’s Get-Out-the-Vote (GOTV) mailing list and the other 60% of the sample will not receive the GOTV mailings. The outcome measured – voter turnout – can then be compared in the two groups. The difference in turnout will reflect the effectiveness of the intervention.

What does random assignment mean?

The key to randomized experimental research design is in the random assignment of study subjects – for example, individual voters, precincts, media markets or some other group – into treatment or control groups. Randomization has a very specific meaning in this context. It does not refer to haphazard or casual choosing of some and not others. Randomization in this context means that care is taken to ensure that no pattern exists between the assignment of subjects into groups and any characteristics of those subjects. Every subject is as likely as any other to be assigned to the treatment (or control) group.

Randomization is generally achieved by employing a computer program containing a random number generator. Randomization procedures differ based upon the research design of the experiment. Individuals or groups may be randomly assigned to treatment or control groups. Some research designs stratify subjects by geographic, demographic or other factors prior to random assignment in order to maximize the statistical power of the estimated effect of the treatment (e.g., GOTV intervention). Information about the randomization procedure is included in each experiment summary on the site.

What are the advantages of randomized experimental designs?

Randomized experimental design yields the most accurate analysis of the effect of an intervention (e.g., a voter mobilization phone drive or a visit from a GOTV canvasser) on voter behavior. By randomly assigning subjects to be in the group that receives the treatment or to be in the control group, researchers can measure the effect of the mobilization method regardless of other factors that may make some people or groups more likely to participate in the political process.

To provide a simple example, say we are testing the effectiveness of a voter education program on high school seniors. If we allow students from the class to volunteer to participate in the program, and we then compare the volunteers’ voting behavior against those who did not participate, our results will reflect something other than the effects of the voter education intervention. This is because there are, no doubt, qualities about those volunteers that make them different from students who do not volunteer. And, most important for our work, those differences may very well correlate with propensity to vote. Instead of letting students self-select, or even letting teachers select students (as teachers may have biases in who they choose), we could randomly assign all students in a given class to be in either a treatment or control group. This would ensure that those in the treatment and control groups differ solely due to chance.

The value of randomization may also be seen in the use of walk lists for door-to-door canvassers. If canvassers choose which houses they will go to and which they will skip, they may choose houses that seem more inviting or they may choose houses that are placed closely together rather than those that are more spread out. These differences could conceivably correlate with voter turnout. Or if house numbers are chosen by selecting those on the first half of a ten page list, they may be clustered in neighborhoods that differ in important ways from neighborhoods in the second half of the list.

Random assignment controls for both known and unknown variables that can creep in with other selection processes to confound analyses. Randomized experimental design is a powerful tool for drawing valid inferences about cause and effect. The use of randomized experimental design should allow a degree of certainty that the research findings cited in studies that employ this methodology reflect the effects of the interventions being measured and not some other underlying variable or variables.

Random Assignment in Psychology (Definition + 40 Examples)


Have you ever wondered how researchers discover new ways to help people learn, make decisions, or overcome challenges? A hidden hero in this adventure of discovery is a method called random assignment, a cornerstone in psychological research that helps scientists uncover the truths about the human mind and behavior.

Random Assignment is a process used in research where each participant has an equal chance of being placed in any group within the study. This technique is essential in experiments as it helps to eliminate biases, ensuring that the different groups being compared are similar in all important aspects.

By doing so, researchers can be confident that any differences observed are likely due to the variable being tested, rather than other factors.

In this article, we’ll explore the intriguing world of random assignment, diving into its history, principles, real-world examples, and the impact it has had on the field of psychology.

History of Random Assignment


Stepping back in time, we delve into the origins of random assignment, which finds its roots in the early 20th century.

The pioneering mind behind this innovative technique was Sir Ronald A. Fisher, a British statistician and biologist. Fisher introduced the concept of random assignment in the 1920s, aiming to improve the quality and reliability of experimental research.

His contributions laid the groundwork for the method's evolution and its widespread adoption in various fields, particularly in psychology.

Fisher’s groundbreaking work on random assignment was motivated by his desire to control for confounding variables – those pesky factors that could muddy the waters of research findings.

By assigning participants to different groups purely by chance, he realized that the influence of these confounding variables could be minimized, paving the way for more accurate and trustworthy results.

Early Studies Utilizing Random Assignment

Following Fisher's initial development, random assignment started to gain traction in the research community. Early studies adopting this methodology focused on a variety of topics, from agriculture (which was Fisher’s primary field of interest) to medicine and psychology.

The approach allowed researchers to draw stronger conclusions from their experiments, bolstering the development of new theories and practices.

One notable early study utilizing random assignment was conducted in the field of educational psychology. Researchers were keen to understand the impact of different teaching methods on student outcomes.

By randomly assigning students to various instructional approaches, they were able to isolate the effects of the teaching methods, leading to valuable insights and recommendations for educators.

Evolution of the Methodology

As the decades rolled on, random assignment continued to evolve and adapt to the changing landscape of research.

Advances in technology introduced new tools and techniques for implementing randomization, such as computerized random number generators, which offered greater precision and ease of use.

The application of random assignment expanded beyond the confines of the laboratory, finding its way into field studies and large-scale surveys.

Researchers across diverse disciplines embraced the methodology, recognizing its potential to enhance the validity of their findings and contribute to the advancement of knowledge.

From its humble beginnings in the early 20th century to its widespread use today, random assignment has proven to be a cornerstone of scientific inquiry.

Its development and evolution have played a pivotal role in shaping the landscape of psychological research, driving discoveries that have improved lives and deepened our understanding of the human experience.

Principles of Random Assignment

Delving into the heart of random assignment, we uncover the theories and principles that form its foundation.

The method is grounded in probability theory and statistical inference, ensuring that each participant has an equal chance of being placed in any group and thus fostering fair and unbiased results.

Basic Principles of Random Assignment

Understanding the core principles of random assignment is key to grasping its significance in research. There are three principles: equal probability of selection, reduction of bias, and ensuring representativeness.

The first principle, equal probability of selection, ensures that every participant has an identical chance of being assigned to any group in the study. This randomness is crucial as it mitigates the risk of bias and establishes a level playing field.

The second principle focuses on the reduction of bias. Random assignment acts as a safeguard, ensuring that the groups being compared are alike in all essential aspects before the experiment begins.

This similarity between groups allows researchers to attribute any differences observed in the outcomes directly to the independent variable being studied.

Lastly, ensuring representativeness is a vital principle. When participants are assigned randomly, the resulting groups are more likely to be representative of the larger population.

This characteristic is crucial for the generalizability of the study’s findings, allowing researchers to apply their insights broadly.

Theoretical Foundation

The theoretical foundation of random assignment lies in probability theory and statistical inference.

Probability theory deals with the likelihood of different outcomes, providing a mathematical framework for analyzing random phenomena. In the context of random assignment, it helps in ensuring that each participant has an equal chance of being placed in any group.

Statistical inference, on the other hand, allows researchers to draw conclusions about a population based on a sample of data drawn from that population. It is the mechanism through which the results of a study can be generalized to a broader context.

Random assignment enhances the reliability of statistical inferences by reducing biases and ensuring that the sample is representative.

Differentiating Random Assignment from Random Selection

It’s essential to distinguish between random assignment and random selection, as the two terms, while related, have distinct meanings in the realm of research.

Random assignment refers to how participants are placed into different groups in an experiment; it aims to control for confounding variables so that causal conclusions can be drawn.

In contrast, random selection pertains to how individuals are chosen to participate in a study. This method is used to ensure that the sample of participants is representative of the larger population, which is vital for the external validity of the research.

While both methods are rooted in randomness and probability, they serve different purposes in the research process.

Understanding the theories, principles, and distinctions of random assignment illuminates its pivotal role in psychological research.

This method, anchored in probability theory and statistical inference, serves as a beacon of reliability, guiding researchers in their quest for knowledge and ensuring that their findings stand the test of validity and applicability.

Methodology of Random Assignment


Implementing random assignment in a study is a meticulous process that involves several crucial steps.

The initial step is participant selection, where individuals are chosen to partake in the study. This stage is critical to ensure that the pool of participants is diverse and representative of the population the study aims to generalize to.

Once the pool of participants has been established, the actual assignment process begins. In this step, each participant is allocated randomly to one of the groups in the study.

Researchers use various tools, such as random number generators or computerized methods, to ensure that this assignment is genuinely random and free from biases.

Monitoring and adjusting form the final step in the implementation of random assignment. Researchers need to continuously observe the groups to ensure that they remain comparable in all essential aspects throughout the study.

If any significant discrepancies arise, adjustments might be necessary to maintain the study’s integrity and validity.

Tools and Techniques Used

The evolution of technology has introduced a variety of tools and techniques to facilitate random assignment.

Random number generators, both manual and computerized, are commonly used to assign participants to different groups. These generators ensure that each individual has an equal chance of being placed in any group, upholding the principle of equal probability of selection.
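For illustration only (this sketch is not from the original article; the participant IDs, group labels, and seed below are invented), complete random assignment of a sample to two equal groups takes just a few lines of R:

```r
# Shuffle group labels across 20 hypothetical participants
set.seed(101)                             # assumed seed, for reproducibility
participants <- paste0("P", 1:20)         # hypothetical participant IDs
group <- sample(rep(c("control", "treatment"), each = 10))  # random permutation
data.frame(participants, group)           # each ID paired with a random group
```

Because the labels are permuted rather than drawn independently, this variant also guarantees equal group sizes.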

In addition to random number generators, researchers often use specialized computer software designed for statistical analysis and experimental design.

These software programs offer advanced features that allow for precise and efficient random assignment, minimizing the risk of human error and enhancing the study’s reliability.

Ethical Considerations

The implementation of random assignment is not devoid of ethical considerations. Informed consent is a fundamental ethical principle that researchers must uphold.

Informed consent means that every participant should be fully informed about the nature of the study, the procedures involved, and any potential risks or benefits, ensuring that they voluntarily agree to participate.

Beyond informed consent, researchers must conduct a thorough risk and benefit analysis. The potential benefits of the study should outweigh any risks or harms to the participants.

Safeguarding the well-being of participants is paramount, and any study employing random assignment must adhere to established ethical guidelines and standards.

Conclusion of Methodology

The methodology of random assignment, while seemingly straightforward, is a multifaceted process that demands precision, fairness, and ethical integrity. From participant selection to assignment and monitoring, each step is crucial to ensure the validity of the study’s findings.

The tools and techniques employed, coupled with a steadfast commitment to ethical principles, underscore the significance of random assignment as a cornerstone of robust psychological research.

Benefits of Random Assignment in Psychological Research

The impact and importance of random assignment in psychological research cannot be overstated. It is fundamental for ensuring a study is sound: it lets researchers determine whether the treatment actually caused the results they observed, and it supports applying the findings to the real world.

Facilitating Causal Inferences

When participants are randomly assigned to different groups, researchers can be more confident that the observed effects are due to the independent variable being changed, and not other factors.

This ability to determine the cause is called causal inference.

This confidence allows for the drawing of causal relationships, which are foundational for theory development and application in psychology.

Ensuring Internal Validity

One of the foremost impacts of random assignment is its ability to enhance the internal validity of an experiment.

Internal validity refers to the extent to which a researcher can assert that changes in the dependent variable are solely due to manipulations of the independent variable, and not due to confounding variables.

By ensuring that each participant has an equal chance of being in any condition of the experiment, random assignment helps control for participant characteristics that could otherwise complicate the results.

Enhancing Generalizability

Beyond internal validity, random assignment also plays a crucial role in enhancing the generalizability of research findings.

When done correctly, it ensures that the sample groups are representative of the larger population, so researchers can apply their findings more broadly.

This representative nature is essential for the practical application of research, impacting policy, interventions, and psychological therapies.

Limitations of Random Assignment

Potential for Implementation Issues

While the principles of random assignment are robust, the method can face implementation issues.

One of the most common problems is logistical constraints. Some studies, due to their nature or the specific population being studied, find it challenging to implement random assignment effectively.

For instance, in educational settings, logistical issues such as class schedules and school policies might prevent the random allocation of students to different teaching methods.

Ethical Dilemmas

Random assignment, while methodologically sound, can also present ethical dilemmas.

In some cases, withholding a potentially beneficial treatment from one of the groups of participants can raise serious ethical questions, especially in medical or clinical research where participants' well-being might be directly affected.

Researchers must navigate these ethical waters carefully, balancing the pursuit of knowledge with the well-being of participants.

Generalizability Concerns

Even when implemented correctly, random assignment does not always guarantee generalizable results.

The types of people in the participant pool, the specific context of the study, and the nature of the variables being studied can all influence the extent to which the findings can be applied to the broader population.

Researchers must be cautious in making broad generalizations from studies, even those employing strict random assignment.

Practical and Real-World Limitations

In the real world, many variables cannot be manipulated for ethical or practical reasons, limiting the applicability of random assignment.

For instance, researchers cannot randomly assign individuals to different levels of intelligence, socioeconomic status, or cultural backgrounds.

This limitation necessitates the use of other research designs, such as correlational or observational studies, when exploring relationships involving such variables.

Response to Critiques

In response to these critiques, proponents of random assignment argue that the method, despite its limitations, remains one of the most reliable ways to establish cause and effect in experimental research.

They acknowledge the challenges and ethical considerations but emphasize the rigorous frameworks in place to address them.

The ongoing discussion around the limitations and critiques of random assignment contributes to the evolution of the method, making sure it is continuously relevant and applicable in psychological research.

While random assignment is a powerful tool in experimental research, it is not without its critiques and limitations. Implementation issues, ethical dilemmas, generalizability concerns, and real-world limitations can pose significant challenges.

However, the continued discourse and refinement around these issues underline the method's enduring significance in the pursuit of knowledge in psychology.

With careful implementation and sound ethics, random assignment remains an essential part of studying how people act and think.

Real-World Applications and Examples


Random assignment has been employed in many studies across various fields of psychology, leading to significant discoveries and advancements.

Here are some real-world applications and examples illustrating the diversity and impact of this method:

  • Medicine and Health Psychology: Randomized Controlled Trials (RCTs) are the gold standard in medical research. In these studies, participants are randomly assigned to either the treatment or control group to test the efficacy of new medications or interventions.
  • Educational Psychology: Studies in this field have used random assignment to explore the effects of different teaching methods, classroom environments, and educational technologies on student learning and outcomes.
  • Cognitive Psychology: Researchers have employed random assignment to investigate various aspects of human cognition, including memory, attention, and problem-solving, leading to a deeper understanding of how the mind works.
  • Social Psychology: Random assignment has been instrumental in studying social phenomena, such as conformity, aggression, and prosocial behavior, shedding light on the intricate dynamics of human interaction.

Let's get into some specific examples. You'll need to know one term first, though: "control group." A control group is a set of participants in a study who do not receive the treatment or intervention being tested, serving as a baseline to compare with the group that does, in order to assess the effectiveness of the treatment.

  • Smoking Cessation Study: Researchers used random assignment to put participants into two groups. One group received a new anti-smoking program, while the other did not. This helped determine if the program was effective in helping people quit smoking.
  • Math Tutoring Program: A study on students used random assignment to place them into two groups. One group received additional math tutoring, while the other continued with regular classes, to see if the extra help improved their grades.
  • Exercise and Mental Health: Adults were randomly assigned to either an exercise group or a control group to study the impact of physical activity on mental health and mood.
  • Diet and Weight Loss: A study randomly assigned participants to different diet plans to compare their effectiveness in promoting weight loss and improving health markers.
  • Sleep and Learning: Researchers randomly assigned students to either a sleep extension group or a regular sleep group to study the impact of sleep on learning and memory.
  • Classroom Seating Arrangement: Teachers used random assignment to place students in different seating arrangements to examine the effect on focus and academic performance.
  • Music and Productivity: Employees were randomly assigned to listen to music or work in silence to investigate the effect of music on workplace productivity.
  • Medication for ADHD: Children with ADHD were randomly assigned to receive either medication, behavioral therapy, or a placebo to compare treatment effectiveness.
  • Mindfulness Meditation for Stress: Adults were randomly assigned to a mindfulness meditation group or a waitlist control group to study the impact on stress levels.
  • Video Games and Aggression: A study randomly assigned participants to play either violent or non-violent video games and then measured their aggression levels.
  • Online Learning Platforms: Students were randomly assigned to use different online learning platforms to evaluate their effectiveness in enhancing learning outcomes.
  • Hand Sanitizers in Schools: Schools were randomly assigned to use hand sanitizers or not to study the impact on student illness and absenteeism.
  • Caffeine and Alertness: Participants were randomly assigned to consume caffeinated or decaffeinated beverages to measure the effects on alertness and cognitive performance.
  • Green Spaces and Well-being: Neighborhoods were randomly assigned to receive green space interventions to study the impact on residents’ well-being and community connections.
  • Pet Therapy for Hospital Patients: Patients were randomly assigned to receive pet therapy or standard care to assess the impact on recovery and mood.
  • Yoga for Chronic Pain: Individuals with chronic pain were randomly assigned to a yoga intervention group or a control group to study the effect on pain levels and quality of life.
  • Flu Vaccines Effectiveness: Different groups of people were randomly assigned to receive either the flu vaccine or a placebo to determine the vaccine’s effectiveness.
  • Reading Strategies for Dyslexia: Children with dyslexia were randomly assigned to different reading intervention strategies to compare their effectiveness.
  • Physical Environment and Creativity: Participants were randomly assigned to different room setups to study the impact of physical environment on creative thinking.
  • Laughter Therapy for Depression: Individuals with depression were randomly assigned to laughter therapy sessions or control groups to assess the impact on mood.
  • Financial Incentives for Exercise: Participants were randomly assigned to receive financial incentives for exercising to study the impact on physical activity levels.
  • Art Therapy for Anxiety: Individuals with anxiety were randomly assigned to art therapy sessions or a waitlist control group to measure the effect on anxiety levels.
  • Natural Light in Offices: Employees were randomly assigned to workspaces with natural or artificial light to study the impact on productivity and job satisfaction.
  • School Start Times and Academic Performance: Schools were randomly assigned different start times to study the effect on student academic performance and well-being.
  • Horticulture Therapy for Seniors: Older adults were randomly assigned to participate in horticulture therapy or traditional activities to study the impact on cognitive function and life satisfaction.
  • Hydration and Cognitive Function: Participants were randomly assigned to different hydration levels to measure the impact on cognitive function and alertness.
  • Intergenerational Programs: Seniors and young people were randomly assigned to intergenerational programs to study the effects on well-being and cross-generational understanding.
  • Therapeutic Horseback Riding for Autism: Children with autism were randomly assigned to therapeutic horseback riding or traditional therapy to study the impact on social communication skills.
  • Active Commuting and Health: Employees were randomly assigned to active commuting (cycling, walking) or passive commuting to study the effect on physical health.
  • Mindful Eating for Weight Management: Individuals were randomly assigned to mindful eating workshops or control groups to study the impact on weight management and eating habits.
  • Noise Levels and Learning: Students were randomly assigned to classrooms with different noise levels to study the effect on learning and concentration.
  • Bilingual Education Methods: Schools were randomly assigned different bilingual education methods to compare their effectiveness in language acquisition.
  • Outdoor Play and Child Development: Children were randomly assigned to different amounts of outdoor playtime to study the impact on physical and cognitive development.
  • Social Media Detox: Participants were randomly assigned to a social media detox or regular usage to study the impact on mental health and well-being.
  • Therapeutic Writing for Trauma Survivors: Individuals who experienced trauma were randomly assigned to therapeutic writing sessions or control groups to study the impact on psychological well-being.
  • Mentoring Programs for At-risk Youth: At-risk youth were randomly assigned to mentoring programs or control groups to assess the impact on academic achievement and behavior.
  • Dance Therapy for Parkinson’s Disease: Individuals with Parkinson’s disease were randomly assigned to dance therapy or traditional exercise to study the effect on motor function and quality of life.
  • Aquaponics in Schools: Schools were randomly assigned to implement aquaponics programs to study the impact on student engagement and environmental awareness.
  • Virtual Reality for Phobia Treatment: Individuals with phobias were randomly assigned to virtual reality exposure therapy or traditional therapy to compare effectiveness.
  • Gardening and Mental Health: Participants were randomly assigned to engage in gardening or other leisure activities to study the impact on mental health and stress reduction.

Each of these studies exemplifies how random assignment is utilized in various fields and settings, shedding light on the multitude of ways it can be applied to glean valuable insights and knowledge.

Real-World Impact of Random Assignment


Random assignment is a key tool for learning about people's minds and behaviors. Its importance extends into many areas of everyday life: it informs better policies, supports the development of new interventions, and is used across many different fields.

Health and Medicine

In health and medicine, random assignment has helped doctors and scientists make many discoveries. It is central to the clinical trials used to develop new medicines and treatments.

By putting people into different groups by chance, scientists can see whether a medicine itself, rather than some other difference between patients, is what works.

This has led to new ways of helping people with all sorts of health problems, such as diabetes, heart disease, and mental health conditions like depression and anxiety.

Education

Schools and education have also learned a lot from random assignment. Researchers have used it to compare different ways of teaching, what kinds of classrooms work best, and how technology can help learning.

This knowledge has informed better school policies, shaped curricula, and identified effective ways to teach students of all ages and backgrounds.

Workplace and Organizational Behavior

Random assignment helps us understand how people behave at work and what makes a workplace good or bad.

Studies have examined different kinds of workplaces, leadership styles, and ways of assembling teams. This has helped companies set better policies and create work environments that are supportive and satisfying.

Environmental and Social Changes

Random assignment is also used to see how changes in the community and environment affect people. Studies have looked at community projects, changes to the environment, and social programs to see how they help or hurt people’s well-being.

This has led to better community projects, environmental protection efforts, and social support programs.

Technology and Human Interaction

In a world where technology is always changing, studies using random assignment show how technologies such as social media, virtual reality, and online platforms affect how we act and feel.

This has informed the design of better, safer technology and sensible guidelines for its use, so that everyone can benefit.

The effects of random assignment extend far beyond the laboratory. The method deepens our understanding of many domains, leads to new and improved practices, and makes a real difference in the world around us.

From improving healthcare and schools to creating positive changes in communities and the environment, the real-world impact of random assignment shows just how important it is in helping us learn and improve the world.

So, what have we learned? Random assignment is a remarkably useful tool for learning how people think and act, helping researchers trace effects back to their causes in many parts of our lives.

From creating new medicines to helping children learn better in school, and from making workplaces happier to protecting the environment, it carries a big job.

This method is not just something scientists use in labs; it reaches into our everyday lives, helping to make positive changes and teaching us valuable lessons.

Whether the topic is technology, health, education, or the environment, random assignment works behind the scenes, making things better and safer for all of us.

In the end, the simple act of putting people into groups by chance makes big discoveries and improvements possible, like a small stone thrown into a pond whose ripples spread far and wide.

Thanks to random assignment, we keep learning, growing, and finding new ways to make the world a happier and healthier place for everyone.


Purpose and Limitations of Random Assignment

In an experimental study, random assignment is a process by which participants are assigned, with the same chance, to either a treatment or a control group. The goal is to assure an unbiased assignment of participants to treatment options.

Random assignment is considered the gold standard for achieving comparability across study groups, and therefore is the best method for inferring a causal relationship between a treatment (or intervention or risk factor) and an outcome.


Random assignment of participants produces groups that are comparable regarding the participants' initial characteristics, so that any difference detected at the end between the treatment and the control group can be attributed to the effect of the treatment alone.

How does random assignment produce comparable groups?

1. Random assignment prevents selection bias

Randomization works by removing the researcher’s and the participant’s influence on the treatment allocation. So the allocation can no longer be biased since it is done at random, i.e. in a non-predictable way.

This is in contrast with the real world, where for example, the sickest people are more likely to receive the treatment.

2. Random assignment prevents confounding

A confounding variable is one that is associated with both the intervention and the outcome, and thus can affect the outcome in 2 ways: either directly, or indirectly through the treatment.

This indirect relationship between the confounding variable and the outcome can cause the treatment to appear to have an influence on the outcome while in reality the treatment is just a mediator of that effect (as it happens to be on the causal pathway between the confounder and the outcome).

Random assignment eliminates the influence of the confounding variables on the treatment since it distributes them at random between the study groups, therefore, ruling out this alternative path or explanation of the outcome.


3. Random assignment also eliminates other threats to internal validity

By distributing all threats (known and unknown) at random between study groups, participants in both the treatment and the control group become equally subject to the effect of any threat to validity. Therefore, comparing the outcome between the 2 groups will bypass the effect of these threats and will only reflect the effect of the treatment on the outcome.

These threats include:

  • History: This is any event that co-occurs with the treatment and can affect the outcome.
  • Maturation: This is the effect of time on the study participants (e.g. participants becoming wiser, hungrier, or more stressed with time) which might influence the outcome.
  • Regression to the mean: This happens when the participants’ outcome score is exceptionally good on a pre-treatment measurement, so the post-treatment measurement scores will naturally regress toward the mean — in simple terms, regression happens since an exceptional performance is hard to maintain. This effect can bias the study since it represents an alternative explanation of the outcome.

Note that randomization does not prevent these effects from happening, it just allows us to control them by reducing their risk of being associated with the treatment.

What if random assignment produced unequal groups?

Question: What should you do if after randomly assigning participants, it turned out that the 2 groups still differ in participants’ characteristics? More precisely, what if randomization accidentally did not balance risk factors that can be alternative explanations between the 2 groups? (For example, if one group includes more male participants, or sicker, or older people than the other group).

Short answer: This is perfectly normal, since randomization only assures an unbiased assignment of participants to groups, i.e. it produces comparable groups, but it does not guarantee the equality of these groups.

A more complete answer: Randomization will not and cannot create 2 equal groups regarding each and every characteristic. This is because when dealing with randomization there is still an element of luck. If you want 2 perfectly equal groups, you would need to match them manually, as is done in a matched pairs design (for more information, see my article on matched pairs design).

This is similar to throwing a die: over 10 throws, the observed proportion of a specific outcome will generally not be 1/6. But it will approach 1/6 if you repeat the experiment a very large number of times and calculate the average number of times the specific outcome turned up.
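A quick simulation makes the point; this sketch is not from the original article, and the seed is arbitrary:

```r
# Proportion of sixes in 10 die throws vs. one million throws
set.seed(1)                                    # assumed seed
mean(sample(1:6, 10, replace = TRUE) == 6)     # small sample: often far from 1/6
mean(sample(1:6, 1e6, replace = TRUE) == 6)    # large sample: close to 0.1667
```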

So randomization will not produce perfectly equal groups for each specific study, especially if the study has a small sample size. But do not forget that scientific evidence is a long and continuous process, and the groups will tend to be equal in the long run when a meta-analysis aggregates the results of a large number of randomized studies.

So for each individual study, differences between the treatment and control group will exist and will influence the study results. This means that the results of a randomized trial will sometimes be wrong, and this is absolutely okay.

BOTTOM LINE:

Although the results of a particular randomized study are unbiased, they will still be affected by a sampling error due to chance. But the real benefit of random assignment will be when data is aggregated in a meta-analysis.

Limitations of random assignment

Randomized designs can suffer from:

1. Ethical issues:

Randomization is ethical only if the researcher has no evidence that one treatment is superior to the other.

Also, it would be unethical to randomly assign participants to harmful exposures such as smoking or dangerous chemicals.

2. Low external validity:

With random assignment, external validity (i.e. the generalizability of the study results) is compromised because the results of a study that uses random assignment represent what would happen under “ideal” experimental conditions, which is in general very different from what happens at the population level.

In the real world, people who take the treatment might be very different from those who don't, so the assignment of participants is not a random event, but rather under the influence of all sorts of external factors.

External validity can also be jeopardized in cases where not all participants are eligible or willing to accept the terms of the study.

3. Higher cost of implementation:

An experimental design with random assignment is typically more expensive than observational studies where the investigator’s role is just to observe events without intervening.

Experimental designs also typically take a lot of time to implement, and therefore are less practical when a quick answer is needed.

4. Impracticality when answering non-causal questions:

A randomized trial is our best bet when the question is to find the causal effect of a treatment or a risk factor.

Sometimes however, the researcher is just interested in predicting the probability of an event or a disease given some risk factors. In this case, the causal relationship between these variables is not important, making observational designs more suitable for such problems.

5. Impracticality when studying the effect of variables that cannot be manipulated:

The usual objective of studying the effects of risk factors is to propose recommendations that involve changing the level of exposure to these factors.

However, some risk factors cannot be manipulated, so it does not make any sense to study them in a randomized trial. For example, it would be impossible to randomly assign participants to age categories, gender, or genetic factors.

6. Difficulty controlling participants:

These difficulties include:

  • Participants refusing to receive the assigned treatment.
  • Participants not adhering to recommendations.
  • Differential loss to follow-up between those who receive the treatment and those who don’t.

All of these issues might occur in a randomized trial, but might not affect an observational study.


Further reading

  • Posttest-Only Control Group Design
  • Pretest-Posttest Control Group Design
  • Randomized Block Design


Random Assignment in Experiments | Introduction & Examples

Published on 6 May 2022 by Pritha Bhandari. Revised on 13 February 2023.

In experimental research, random assignment is a way of placing participants from your sample into different treatment groups using randomisation.

With simple random assignment, every member of the sample has a known or equal chance of being placed in a control group or an experimental group. Studies that use simple random assignment are also called completely randomised designs.

Random assignment is a key part of experimental design. It helps you ensure that all groups are comparable at the start of a study: any differences between them are due to random factors.

Table of contents

  • Why does random assignment matter?
  • Random sampling vs random assignment
  • How do you use random assignment?
  • When is random assignment not used?
  • Frequently asked questions about random assignment

Random assignment is an important part of control in experimental research, because it helps strengthen the internal validity of an experiment.

In experiments, researchers manipulate an independent variable to assess its effect on a dependent variable, while controlling for other variables. To do so, they often use different levels of an independent variable for different groups of participants.

This is called a between-groups or independent measures design.

Suppose, for example, that you are testing the effects of a new drug at different dosages. You use three groups of participants that are each given a different level of the independent variable:

  • A control group that’s given a placebo (no dosage)
  • An experimental group that’s given a low dosage
  • A second experimental group that’s given a high dosage

Random assignment helps you make sure that the treatment groups don't differ in systematic or biased ways at the start of the experiment.

If you don't use random assignment, you may not be able to rule out alternative explanations for your results. Suppose, for example, that participants are instead assigned to groups based on where they were recruited:

  • Participants recruited from pubs are placed in the control group
  • Participants recruited from local community centres are placed in the low-dosage experimental group
  • Participants recruited from gyms are placed in the high-dosage group

With this type of assignment, it’s hard to tell whether the participant characteristics are the same across all groups at the start of the study. Gym users may tend to engage in more healthy behaviours than people who frequent pubs or community centres, and this would introduce a healthy user bias in your study.

Although random assignment helps even out baseline differences between groups, it doesn’t always make them completely equivalent. There may still be extraneous variables that differ between groups, and there will always be some group differences that arise from chance.

Most of the time, the random variation between groups is low, and, therefore, it’s acceptable for further analysis. This is especially true when you have a large sample. In general, you should always use random assignment in experiments when it is ethically possible and makes sense for your study topic.

Prevent plagiarism, run a free check.

Random sampling and random assignment are both important concepts in research, but it’s important to understand the difference between them.

Random sampling (also called probability sampling or random selection) is a way of selecting members of a population to be included in your study. In contrast, random assignment is a way of sorting the sample participants into control and experimental groups.

While random sampling is used in many types of studies, random assignment is only used in between-subjects experimental designs.

Some studies use both random sampling and random assignment, while others use only one or the other.


Random sampling enhances the external validity or generalisability of your results, because it helps to ensure that your sample is unbiased and representative of the whole population. This allows you to make stronger statistical inferences.

Suppose, for example, that you are studying the employees of a large company. You use a simple random sample to collect data. Because you have access to the whole population (all 8,000 employees), you can assign each employee a number and use a random number generator to select 300 employees. These 300 employees are your full sample.
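The selection step described above is essentially one line of R; this sketch is illustrative only, and the seed is arbitrary:

```r
# Draw a simple random sample of 300 employees from a population of 8,000
set.seed(2024)                            # assumed seed
employee_ids <- 1:8000                    # every employee gets a unique number
sample_ids <- sample(employee_ids, 300)   # sampling without replacement by default
length(sample_ids)                        # 300 selected IDs: the full sample
```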

Random assignment enhances the internal validity of the study, because it ensures that there are no systematic differences between the participants in each group. This helps you conclude that the outcomes can be attributed to the independent variable. Continuing the example, suppose you divide your sample into two groups:

  • A control group that receives no intervention
  • An experimental group that has a remote team-building intervention every week for a month

You use random assignment to place participants into the control or experimental group. To do so, you take your list of participants and assign each participant a number. Again, you use a random number generator to place each participant in one of the two groups.

To use simple random assignment, you start by giving every member of the sample a unique number. Then, you can use computer programs or manual methods to randomly assign each participant to a group.

  • Random number generator: Use a computer program to generate random numbers from the list for each group.
  • Lottery method: Place all numbers individually into a hat or a bucket, and draw numbers at random for each group.
  • Flip a coin: When you only have two groups, for each number on the list, flip a coin to decide if they’ll be in the control or the experimental group.
  • Use a die: When you have three groups, for each number on the list, roll a die to decide which of the groups they will be in. For example, assume that rolling 1 or 2 lands them in a control group; 3 or 4 in an experimental group; and 5 or 6 in a second control or experimental group.

This type of random assignment is the most powerful method of placing participants in conditions, because each individual has an equal chance of being placed in any one of your treatment groups.
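Each of these manual methods has a direct software equivalent. The sketch below is illustrative rather than taken from the article; the sample size, labels, and seed are invented:

```r
set.seed(42)                     # assumed seed
n <- 30                          # hypothetical sample size

# Lottery / random number generator: shuffle the numbers, then split in half
shuffled <- sample(1:n)
control      <- shuffled[1:(n/2)]
experimental <- shuffled[(n/2 + 1):n]

# Coin flip per participant (note: group sizes may come out unequal)
coin_groups <- sample(c("control", "experimental"), n, replace = TRUE)

# Die roll per participant for three groups (1-2, 3-4, 5-6)
die <- sample(1:6, n, replace = TRUE)
die_groups <- cut(die, breaks = c(0, 2, 4, 6),
                  labels = c("control", "experimental1", "experimental2"))
```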

Random assignment in block designs

In more complicated experimental designs, random assignment is only used after participants are grouped into blocks based on some characteristic (e.g., test score or demographic variable). These groupings mean that you need a larger sample to achieve high statistical power.

For example, a randomised block design involves placing participants into blocks based on a shared characteristic (e.g., college students vs graduates), and then using random assignment within each block to assign participants to every treatment condition. This helps you assess whether the characteristic affects the outcomes of your treatment.
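As a sketch of what within-block randomization can look like (the blocks, sample size, and seed here are invented, not from the article):

```r
# Randomised block design: assign to conditions separately within each block
set.seed(7)                                            # assumed seed
df <- data.frame(id    = 1:8,                          # hypothetical participants
                 block = rep(c("college", "graduate"), each = 4))
df$condition <- NA
for (b in unique(df$block)) {
  rows <- which(df$block == b)
  # Within the block, permute an equal number of labels per condition
  df$condition[rows] <- sample(rep(c("control", "treatment"), length(rows) / 2))
}
df   # every block contributes equally to every condition
```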

In an experimental matched design, you use blocking and then match up individual participants from each block based on specific characteristics. Within each matched pair or group, you randomly assign each participant to one of the conditions in the experiment and compare their outcomes.

Sometimes, it’s not relevant or ethical to use simple random assignment, so groups are assigned in a different way.

When comparing different groups

Sometimes, differences between participants are the main focus of a study, for example, when comparing children and adults or people with and without health conditions. Participants are not randomly assigned to different groups, but instead assigned based on their characteristics.

In this type of study, the characteristic of interest (e.g., gender) is an independent variable, and the groups differ based on the different levels (e.g., men, women). All participants are tested the same way, and then their group-level outcomes are compared.

When it’s not ethically permissible

When studying unhealthy or dangerous behaviours, it’s not possible to use random assignment. For example, if you’re studying heavy drinkers and social drinkers, it’s unethical to randomly assign participants to one of the two groups and ask them to drink large amounts of alcohol for your experiment.

When you can't assign participants to groups, you can also conduct a quasi-experimental study. In a quasi-experiment, you study the outcomes of pre-existing groups who receive treatments that you may not have any control over (e.g., heavy drinkers and social drinkers).

These groups aren’t randomly assigned, but may be considered comparable when some other variables (e.g., age or socioeconomic status) are controlled for.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomisation. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

Random selection, or random sampling, is a way of selecting members of a population for your study's sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalisability of your results, while random assignment improves the internal validity of your study.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment, assign a unique number to every member of your study's sample.

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.


5.5 – Importance of randomization in experimental design

Introduction

This section demonstrates the benefits of random sampling as a method to control for extraneous factors, and then asks how randomization applies to observational studies.

If the goal of the research is to make general, evidence-based statements about causes of disease or other conditions of concern to the researcher, then how the subjects are selected for study directly impacts our ability to make generalizable conclusions. The most important concept to learn about inference in statistical science is that your sample of subjects, upon which all measurements and treatments are conducted, ideally should be a random selection of individuals from a well-defined reference population.

The primary benefit of random sampling is that it strengthens our confidence in the links between cause and effect. Often after an intervention trial is complete, differences among the treatment groups will be observed. Groups of subjects who participated in sixteen weeks of “vigorous” aerobic exercise training show reduced systolic blood pressure compared to those subjects who engaged in light exercise for the same period of time (Cox et al 1996). But how do we know that exercise training caused the difference in blood pressure between the two treatment groups? Couldn’t the differences be explained by chance differences among the subjects in age, body mass index (BMI), overall health, family history, and so on?

How can we account for these additional differences among the subjects? If you are thinking like an experimental biologist, then the word “control” is likely coming to the foreground. Why not design a study in which all 60 subjects are the same age, the same BMI, the same general health, the same family … history…? Hmm. That does not work. Even if you decide to control age, BMI, and general health categories, you can imagine the increased effort and cost to the project in trying to recruit subjects based on such narrow criteria. So, control per se is not the general answer.

If done properly, random sampling makes these alternative explanations less likely. Random sampling implies that other factors that may causally contribute to differences in the measured outcome, but are not themselves measured or included as a focus of the research study, should be the same, on average, among our different treatment groups. The practical benefit of proper random sampling is that recruiting subjects gets easier: fewer subjects will be needed because you are not trying to control dozens of factors that may (or may not!) contribute to differences in your outcome variable. The downside to random sampling is that the variability of the outcomes within your treatment groups will tend to increase. As we will see when we get to statistical inference, large variability within groups makes it less likely that any statistical difference between the treatment groups will be observed.

Demonstrate the benefits of random sampling as a method to control for extraneous factors.

The study reported by Cox et al included 60 obese men between the ages of 20 and 50. A reasonable experimental design decision would suggest that the 60 subjects be split into the two treatment groups such that both groups had 30 subjects for a balanced design. Subjects who met all of the research criteria and who had signed the informed consent agreement are to be placed into the treatment groups, and there are many ways that group assignment could be accomplished. One possibility: the researchers could assign the first 30 people that came into the lab to the Vigorous exercise group, and the remaining 30 would then be assigned to the Light exercise group. Intuitively I think we would all agree that this is a suspect way to design an experiment, but more importantly, why shouldn’t you use this convenient method?

Just for argument's sake, imagine that their subjects came in one at a time and, coincidentally, they did so by age. The first person was age 21, the second was 22, and so on up to the 30th person, who was 50. Then the next group came in, again, coincidentally, in order of ascending age. If you calculate the simple average age for each group, you will find that they are identical (35.5 years). On the surface, this looks like we have controlled for age: both treatment groups have subjects of the same average age. A second option is to sort the subjects into the two treatment groups so that one 21-year-old is in Group A and the other 21-year-old is in Group B, and so on. Again, the average age of Group A subjects and of Group B subjects would be the same, and therefore controlled with respect to any covariation between age and change in blood pressure. However, there are other variables that may covary with blood pressure, and after controlling one, we would still need to control the others. Randomization provides a better way.

I will demonstrate how randomization tends to distribute the values in such a way that the groups will not differ appreciably on nuisance variables like age and BMI and, by extension, any other covariable. The R work is attached following the Reading list. The take-home message: after randomly selecting subjects for assignment to the treatment groups, the apparent differences between Group A and Group B for both age and BMI are substantially diminished. No attempt to match by age or by BMI is necessary. The numbers are shown in the table and then in two graphics (Fig. 1, Fig. 2) derived from the table.

Table 1. Mean age and BMI for subjects in two treatment groups A and B where subjects were assigned randomly or by convenience to treatment groups.

Just for emphasis, the means from Table 1 are presented in the next two figures (Fig. 1 and Fig. 2).


Figure 1. Age of subjects by groups (A = blue, B = red) with and without randomized assignment of subjects to treatment groups


Figure 2. BMI of subjects by groups (A = blue, B = red) with and without randomized assignment of subjects to treatment groups

Note that the apparent difference between A and B for BMI disappears once proper randomization of subjects was accomplished. In conclusion, a random sample is an approach to experimental design that helps to reduce the influence other factors may have on the outcome variable (e.g., change in blood pressure after 16 weeks of exercise). In principle, randomization should protect a project because, on average, these influences will be represented equally in the two groups of individuals. This reasoning extends to unmeasured and unknown causal factors as well.

This discussion was illustrated by random assignment of subjects to treatment groups. The same logic applies to how to select subjects from a population. If the sampling is large enough, then a random sample of subjects will tend to be representative of the variability of the outcome variable for the population and representative also of the additional and unmeasured cofactors that may contribute to the variability of the outcome variable.

However, if you cannot obtain a random sample, then conclusions reached may be sample-specific and biased. Perhaps the group of individuals that likes to exercise on treadmills just happens to have a higher cardiac output because they are larger than the individuals that like to exercise on bicycles. This nonrandom sample will bias your results and can lead to incorrect interpretation of results. Random sampling is CRUCIAL in epidemiology, opinion survey work, and most aspects of health, drug studies, and medical work with human subjects. It's difficult and very costly to do, so most surveys you hear about, especially polls reported from Internet sites, are NOT conducted using random sampling (included in the catch-all term “probability sampling”)! As an aside, most opinion survey work involves complex sample designs involving some form of geographic clustering (e.g., all phone numbers in a city, random sample among neighborhoods).

Random sampling is the ideal if generalizations are to be made about data, but strictly random sampling is not appropriate for all kinds of studies. Consider the question of whether or not EMF exposure is a risk factor for developing cancer (Pool 1990). These kinds of studies are observational: at least in principle, we wouldn’t expect that housing and therefore exposure to EMF is manipulated (cf. discussion Walker 2009). Thus, epidemiologists will look for patterns: if EMF exposure is linked to cancer, then more cases of cancer should occur near EMF sources compared to areas distant from EMF sources. Thus, the hypothesis is that an association between EMF exposure and cancer occurs non-randomly, whereas cancers occurring in people not exposed to EMF are random. Unfortunately, clusters can occur even if the process that generates the data is random.

Compare Graph A and Graph B (Fig. 3). One of the graphs resulted from a random process and the other was generated by a non-random process. Note that the claim can be rephrased in terms of the probability that each grid cell has a point, e.g., it's like the heads/tails of 16 tosses of a coin. We can see clusters of points in Graph B; Graph A lacks obvious clusters of points, since there is a point in each of the 16 cells of the grid. Although both patterns could be random, the correct answer in this case is Graph B.


Figure 3. An example of clustering resulting from a random sampling process (Graph B). In contrast, Graph A was generated so that a point was located within each grid.

The graphic below shows the transmission grid in the continental United States (Fig. 4). How would one design a random sampling scheme overlaid against the obviously heterogeneous distribution of the grid itself? If a random sample were drawn, chances are good that no sampled population would be near the grid in many of the western states; in contrast, the likelihood would increase in the eastern portion of the United States, where the population, and therefore the transmission grid, is more densely placed.


Figure 4. Map of electrical transmission grid for continental United States of America. Image source https://openinframap.org/#3/24.61/-101.16

For example, suppose you want to test whether or not EMF affects human health, and your particular interest is in whether there is a relationship between living close to high-voltage towers or transfer stations and brain cancer. How does one design such a study, keeping in mind the importance of randomization for our ability to generalize and assign causation? This is the part of epidemiology that strives to detect whether clusters of disease are related to some environmental source. It is an extremely difficult challenge. For the record, no clear link between EMF and cancer has been found, but reports do appear from time to time (e.g., a report on a cluster of breast cancer in men working in an office adjacent to high EMF, Milham 2004).

1. I claimed that Graph B in Figure 3 was generated by a random process while Graph A was not. The results are: in Graph A, each cell in the grid has a point; in Graph B, ten cells have at least one point and six cells are empty. Which probability distribution applies? A. beta; B. binomial; C. normal; D. poisson

2. True or False. If sampling with replacement is used, a subject may be included more than once.

3. Use sample() with and without replacement on an object (see the R notes below):

a) set of 3

b) set of 4
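One plausible way to work through question 3 (the objects here are simple stand-ins, since the original exercise does not specify them):

```r
x <- 1:3                        # a) a set of 3
sample(x)                       # without replacement: a permutation of 1, 2, 3
sample(x, 3, replace = TRUE)    # with replacement: values may repeat
y <- 1:4                        # b) a set of 4
sample(y)
sample(y, 4, replace = TRUE)
```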

4. Confirm the claim by calculating the probability of Graph A result vs Graph B result (see R script below).
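For question 4, the Graph A pattern (exactly one point in each of the 16 cells) can be checked directly; this calculation is a reconstruction, since the original script is not shown:

```r
# If 16 points fall independently and uniformly on 16 cells, the chance that
# every cell gets exactly one point is 16!/16^16
factorial(16) / 16^16   # about 1.1e-06: a perfectly even spread is very unlikely
```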

In the code below, statements preceded by the hash # are comments and are not read by R (there is no need to type them).

First, create some variables. Vectors aa and bb contain my two age sequences.
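The original snippets did not survive in this copy, so the blocks that follow are reconstructions consistent with the text; ages 21 through 50 follow the example above, and the seeds are assumed:

```r
aa <- 21:50   # ages of the first 30 subjects, arriving in ascending order
bb <- 21:50   # ages of the next 30 subjects, also in ascending order
```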

Second, append vector bb to the end of vector aa
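Presumably along these lines:

```r
age <- c(aa, bb)   # one vector of 60 ages: the aa sequence followed by bb
```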

Third, get the average age for the first group (the aa sequence) and for the second group (the bb sequence). There are lots of ways to do this; I made two subsets from the combined age variable, but could have just as easily taken the mean of aa and the mean of bb (same thing!).
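For example:

```r
mean(age[1:30])    # first group (the aa sequence): 35.5
mean(age[31:60])   # second group (the bb sequence): 35.5
```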

Fourth, start building a data frame, then sort it by age. Will be adding additional variables to this data frame
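A reconstruction of this step:

```r
df <- data.frame(age = age)               # data frame to hold all variables
df <- df[order(df$age), , drop = FALSE]   # sort the rows by ascending age
```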

Fifth, divide the variable again into two subsets of 30 and get the averages
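With the rows sorted by age, the two halves are now badly unbalanced:

```r
mean(df$age[1:30])    # youngest 30 subjects: mean age 28
mean(df$age[31:60])   # oldest 30 subjects: mean age 43
```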

Sixth, create an index variable, random order without replacement

Add the new variable to our existing data frame, then print it to check that all is well
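A sketch of the index step (the seed is an assumption, added for reproducibility):

```r
set.seed(20)             # assumed seed
df$index <- sample(60)   # random permutation of 1..60, drawn without replacement
df                       # print to check that the index was added
```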

Seventh, select for our first treatment group the first 30 subjects from the randomized index. There are again other ways to do this, but sorting on the index variable means that the subject order will change too.

Print the new data frame to confirm that the sorting worked. It did; we can see that the rows have been sorted in ascending order based on the index variable.
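Reconstructed as:

```r
df.r <- df[order(df$index), ]   # new data frame: rows shuffled by the index
df.r                            # rows now appear in ascending index order
```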

Eighth, create our new treatment groups, again of n = 30 each, then get the mean ages for each group.

Get the minimum and maximum values for the groups
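Something like:

```r
groupA <- df.r$age[1:30]         # first randomized treatment group
groupB <- df.r$age[31:60]        # second randomized treatment group
mean(groupA); mean(groupB)       # both means now close to 35.5
range(groupA); range(groupB)     # minimum and maximum age in each group
```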

Ninth, create a BMI variable drawn from a normal distribution with coefficient of variation equal to 20%. The first group we will call cc.

The second group called dd

Create a new variable called BMI by joining cc and dd

Add the BMI variable to our data frame.
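A reconstruction (the group means of roughly 27.5 and 37.5 are inferred from the text below; the seed is assumed):

```r
set.seed(21)                                     # assumed seed
cc <- rnorm(30, mean = 27.5, sd = 0.20 * 27.5)   # CV = sd/mean = 20%
dd <- rnorm(30, mean = 37.5, sd = 0.20 * 37.5)   # second, heavier group
BMI <- c(cc, dd)
df$BMI <- BMI   # BMI joins the data frame in its pre-randomized row order
```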

Tenth, repeat our protocol from before: Set up two groups each with 30 subjects, calculate the means for the variables and then sort by the random index and get the new group means.

All we did was confirm that the unsorted groups had mean BMI of around 27.5 and 37.5 respectively. Now, proceed to sort by the random index variable. Go ahead and create a new data frame

Get the means of the new groups
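Putting the final steps together:

```r
mean(df$BMI[1:30]); mean(df$BMI[31:60])       # unsorted groups: ~27.5 and ~37.5
df.r <- df[order(df$index), ]                 # new data frame, sorted by index
mean(df.r$BMI[1:30]); mean(df.r$BMI[31:60])   # both near the grand mean, ~32.5
mean(df.r$age[1:30]); mean(df.r$age[31:60])   # ages stay balanced as well
```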

That’s all of the work!


The role of randomization in clinical trials

  • PMID: 7187102
  • DOI: 10.1002/sim.4780010412

Random assignment of treatments is an essential feature of experimental design in general and clinical trials in particular. It provides broad comparability of treatment groups and validates the use of statistical methods for the analysis of results. Various devices are available for improving the balance of prognostic factors across treatment groups. Several recent initiatives to diminish the role of randomization are seen as being potentially misleading. Randomization is entirely compatible with medical ethics in circumstances when the treatment of choice is not clearly identified.



Open access | Published: 22 May 2024

The association between hematologic traits and aneurysm-related subarachnoid hemorrhage: a two-sample mendelian randomization study

  • Kang Peng 1,3,
  • Abraham Ayodeji Adegboro 2,3,
  • Yanwen Li 2,3,
  • Hongwei Liu 2,3,
  • Biao Xiong 4 &
  • Xuejun Li 2,3

Scientific Reports, volume 14, Article number: 11694 (2024)

Several hematologic traits have been suggested to potentially contribute to the formation and rupture of intracranial aneurysms (IA). To explore the causal association between hematologic traits and the risk of IA, we employed two-sample Mendelian randomization (MR) analysis. Two independent summary-level GWAS datasets were used for the preliminary and replicated MR analyses. The inverse variance weighted (IVW) method was employed as the primary method in the MR analyses. The stability of the results was further confirmed by a meta-analysis. In the preliminary MR analysis, hematocrit ( p  = 0.0108), hemoglobin concentration ( p  = 0.0047), and basophil count ( p  = 0.0219) had a suggestive inverse causal relationship with the risk of aneurysm-associated subarachnoid hemorrhage (aSAH). The monocyte percentage of white cells ( p  = 0.00956) was suggestively positively causally correlated with the risk of aSAH. In the replicated MR analysis, only the monocyte percentage of white cells ( p  = 0.00297) remained consistent with the MR results of the preliminary analysis; hematocrit, hemoglobin concentration, and basophil count no longer showed significant causal relationships ( p  > 0.05). The meta-analysis further confirmed that only the MR result for the monocyte percentage of white cells reached significance in both the random effect model and the fixed effect model. None of the 25 hematologic traits was causally associated with the risk of unruptured intracranial aneurysms (uIA). This study revealed a suggestive positive association between the monocyte percentage of white cells and the risk of aSAH, supporting the understanding that monocytes/macrophages could contribute to the risk of aSAH.

Introduction

Aneurysm-associated subarachnoid hemorrhage (aSAH) is a fatal subtype of stroke that causes death in 25–30% of patients within 3 months of onset and results in permanent neurological dysfunction in approximately 40% of patients 1 , 2 , 3 . Therefore, it is clinically important to determine the risk factors for the formation and rupture of intracranial aneurysms (IA).

Several modifiable risk factors, including smoking, high body mass index (BMI), elevated triglyceride (TG) levels, hypertension, heavy alcohol consumption, sleep apnea, and low levels of low-density lipoprotein (LDL), have been suggested to be associated with an increased risk of IA and aSAH 4 , 5 , 6 , 7 . Additionally, elevated serum magnesium concentration has been reported to be associated with a reduced risk of IA and aSAH 8 . However, the risk factors contributing to IA formation and aSAH are not yet thoroughly understood.

Blood components play a crucial role in maintaining oxygen transport, hemostasis, immune response, and other physiological activities 9 , 10 , 11 , 12 , and abnormalities in blood components have been confirmed to be associated with various diseases 13 . Recently, a Mendelian randomization (MR) study demonstrated a causal association between increased plateletcrit and eosinophil percentage of white cells and ischemic stroke 14 . Observational studies have also shown associations between abnormalities in blood components and the risk of IA formation, aSAH, and patient prognosis after aSAH 15 , 16 , 17 , 18 , 19 , 20 . However, determining whether blood component abnormalities are the cause or consequence of IA formation and aSAH is challenging due to possible residual confounding and reverse causality inherent in observational studies 21 .

Mendelian Randomization (MR) is a method that utilizes exposure-associated genetic variants as instrumental variables (IVs) to study causal associations between exposure factors and outcomes 22 , 23 . The advantage of using genetic variants is that it mitigates potential residual confounding and reverse causality present in observational studies. The availability of genome-wide association studies (GWAS) has provided an opportunity to explore the causal relationship between hematologic traits and IA formation and aSAH 13 , 24 . Therefore, the objective of our study was to examine whether hematologic traits are causally associated with the risk of IA formation and its rupture using a two-sample MR analysis.

GWAS summary-level data of hematologic traits

We obtained summary-level data of hematologic traits from a previous GWAS study, which included 173,480 participants without any blood disorders from the European population 13 . Twenty-five hematologic traits were analyzed, including 11 red blood cell (RBC) traits (hematocrit, hemoglobin concentration, high light scatter reticulocyte (HLSR) counts, high light scatter reticulocyte percentage of red cells (HLSR/RBC), immature fraction of reticulocytes (IFR), mean corpuscular hemoglobin (MCH), MCH concentration, mean corpuscular volume (MCV), RBC counts, reticulocyte count, and reticulocyte fraction of red cells), 10 white blood cell traits (white blood cell counts, basophil counts, eosinophil counts, lymphocyte counts, monocyte counts, neutrophil counts, basophil percentage of white cells, eosinophil percentage of white cells, neutrophil percentage of white cells, and monocyte percentage of white cells), and four platelet traits (platelet count, platelet distribution width (PDW), plateletcrit, and mean platelet volume (MPV)). Single-nucleotide polymorphisms (SNPs) associated with these traits below the genome-wide significance threshold ( p  < 5 × 10 –8 ) were selected as candidate instrumental variables (IVs). To ensure independence among the candidate IVs, we performed linkage disequilibrium clumping (r² = 0.001, kb = 10,000) and excluded dependent candidate IVs. We harmonized the candidate IVs with the outcome data to ensure consistent effects of each SNP on the exposure and outcome. Additionally, we calculated the F-statistic for each SNP using the formula

F = R²(n − 2) / (1 − R²),

where n is the sample size and R² is the proportion of the exposure variance explained by the genetic variant. We removed SNPs with an F-statistic less than 10 to ensure sufficient instrumental strength for the exposure 25 .

GWAS summary-level data of aSAH and uIA

For the preliminary MR analysis, we obtained the summary-level data of aSAH and unruptured intracranial aneurysm (uIA) from a previous GWAS study with European population samples, comprising 5140 aSAH cases, 2070 uIA cases, 71,934 controls, and 4,471,083 SNPs 24 .

For the replicated MR analysis, we obtained another independent summary-level dataset of aSAH from the FinnGen cohort ( https://www.finngen.fi/en ), comprising 377,277 participants in total (including 5753 aSAH cases) and 20,175,454 SNPs 26 .

Mendelian randomization and sensitivity analyses

In this MR study, we hypothesized that the genetic variants serving as IVs were strongly associated with hematologic traits, not confounded by other factors, and solely related to the risk of aSAH or uIA through hematologic traits.

All analyses were performed in R (version 4.2.1) with RStudio, using the “TwoSampleMR” and “MR-PRESSO” (Mendelian Randomization Pleiotropy RESidual Sum and Outlier) packages. To assess the causal relationship between hematologic traits and the risk of aSAH or uIA, we conducted two-sample MR analysis using the inverse variance weighted (IVW) method, a widely used approach in MR analysis 27 . First, we conducted the preliminary MR analysis to identify exposure traits potentially associated with aSAH or uIA. Subsequently, we validated the causal association between the filtered exposure traits and aSAH using the FinnGen cohort.

We considered suggestive statistical significance when 0.002 < p < 0.05, and statistical significance when p  < 0.002 (0.05/25) after Bonferroni correction.

For sensitivity analysis, we assessed heterogeneity in the MR analysis using Cochran's Q test. We used both the fixed-effects IVW method and MR-Egger regression to examine the causal estimates and p-values. If heterogeneity was detected ( p  < 0.05), MR-PRESSO was performed to detect and correct for potential outliers due to horizontal pleiotropy 28 . We also evaluated directional pleiotropy of the IVs through MR-Egger regression analysis, where the intercept of the MR-Egger regression indicates the presence or absence of directional pleiotropy 29 . Furthermore, we conducted leave-one-out sensitivity analysis to assess the stability of the MR results. This analysis involved systematically excluding each SNP individually and performing MR analysis on the remaining SNPs to detect potential outliers and ensure the stability of the results 30 .
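As a rough sketch of this workflow in R, using the TwoSampleMR and MRPRESSO packages the authors name; the GWAS dataset IDs below are placeholders, not the identifiers used in the paper:

    library(TwoSampleMR)
    library(MRPRESSO)
    # Instruments for one trait: genome-wide significant SNPs, LD-clumped
    # at r2 = 0.001 within 10,000 kb. The dataset IDs are placeholders.
    exp_dat <- extract_instruments(outcomes = "exposure-gwas-id",
                                   p1 = 5e-08, r2 = 0.001, kb = 10000)
    out_dat <- extract_outcome_data(snps = exp_dat$SNP,
                                    outcomes = "asah-gwas-id")
    dat <- harmonise_data(exp_dat, out_dat)   # align SNP effect alleles
    # Primary IVW estimate, with MR-Egger as a secondary method.
    mr(dat, method_list = c("mr_ivw", "mr_egger_regression"))
    # Sensitivity analyses: Cochran's Q, Egger intercept, leave-one-out.
    mr_heterogeneity(dat)
    mr_pleiotropy_test(dat)
    mr_leaveoneout(dat)
    # If heterogeneity is detected, screen for outliers with MR-PRESSO.
    mr_presso(BetaOutcome = "beta.outcome", BetaExposure = "beta.exposure",
              SdOutcome = "se.outcome", SdExposure = "se.exposure",
              data = dat, OUTLIERtest = TRUE, DISTORTIONtest = TRUE,
              NbDistribution = 1000)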

Informed consent and Ethical approval

Participants were informed, and relevant ethical approval was obtained for the original studies.

Causal effects of hematologic traits on aSAH and uIA in the preliminary MR analysis

In the preliminary MR analysis, we investigated the causal relationship between 25 hematologic traits and the occurrence of aSAH or uIA using the outcome summary statistics from the study conducted by Bakker et al. 24 . We initially discovered that hematocrit (IVW: OR = 0.77, 95%CI: 0.63–0.94, p  = 0.0108), hemoglobin concentration (IVW: OR = 0.76, 95%CI: 0.62–0.92, p  = 0.0047), RBC count (IVW: OR = 0.86, 95%CI: 0.74–1.00, p  = 0.0433), basophil count (IVW: OR = 0.71, 95%CI: 0.53–0.95, p  = 0.0219), eosinophil percentage of white cells (IVW: OR = 1.23, 95%CI: 1.03–1.47, p  = 0.0247), and monocyte percentage of white cells (IVW: OR = 1.21, 95%CI: 1.05–1.39, p  = 0.00956) were suggestively causally associated with the risk of aSAH (Fig.  1 A). However, the remaining 19 hematologic traits were not associated with aSAH (Fig.  1 A), and there was no association between the 25 hematologic traits and uIA (Fig.  1 B). We also used other methods to evaluate whether there were causal relationships between hematocrit, hemoglobin concentration, RBC count, basophil count, eosinophil percentage of white cells, monocyte percentage of white cells and the risk of aSAH, but no significance was observed ( Supplementary materials: Table S1 ).

Figure 1. Forest plot illustrating the causal effects of 25 hematologic traits on aSAH ( A ) and uIA ( B ) in the preliminary MR analysis. The IVW method was used as the primary analysis approach. All results are presented as odds ratios (OR) along with their corresponding 95% confidence intervals (95%CI). #: counts; %: percentage of white cells.

Next, we assessed the heterogeneity and directional pleiotropy in the preliminary MR analysis, and found no evidence of heterogeneity or directional pleiotropy (Supplementary materials: Table S2 and Table S3 ). Subsequently, a leave-one-out analysis was conducted to evaluate the stability of the six hematologic traits that showed potential causal associations with the risk of aSAH. As depicted in Fig.  2 , none of the SNPs significantly affected the stability of the MR results for hematocrit, hemoglobin concentration, basophil count, and monocyte percentage of white cells in relation to aSAH. However, the MR results for RBC count (Fig.  2 E) and eosinophil percentage of white cells (Fig.  2 F) were found to be unstable. Consequently, RBC count and eosinophil percentage of white cells were excluded, and the four remaining hematologic traits (hematocrit, hemoglobin concentration, basophil count, and monocyte percentage of white cells) were selected for further replicated MR analysis and meta-analysis.

Figure 2. Leave-one-out analysis of the MR results of hematocrit (HCT) ( A ), hemoglobin concentration (HB) ( B ), basophil counts ( C ), monocyte percentage of white cells ( D ), red blood cell counts ( E ), and eosinophil percentage of white cells ( F ) on aSAH in the preliminary MR analysis.

Next, we plotted the scatter plots of each SNP of hematocrit, hemoglobin concentration, basophil count, and monocyte percentage of white cells on aSAH. The scatter plots showed that hematocrit (Fig.  3 A), hemoglobin concentration (Fig.  3 B), and basophil count (Fig.  3 C) were negatively correlated to aSAH, and monocyte percentage of white cells (Fig.  3 D) was positively correlated to aSAH.

Figure 3. Scatter plot of the MR results of hematocrit ( A ), hemoglobin concentration ( B ), basophil count ( C ), and monocyte percentage of white cells ( D ) on the risk of aSAH in the preliminary MR analysis.

Replicated MR analysis validates causal association between monocyte percentage of white cells and risk of aSAH

To ensure the reliability of this MR study, we collected another independent GWAS summary-level dataset of aSAH from the FinnGen cohort and performed MR analyses to examine the relationship between the remaining four hematologic traits and the risk of aSAH. In the replicated MR analysis, we observed that only the monocyte percentage of white cells remained significantly associated with the risk of aSAH (IVW: OR = 1.20, 95%CI: 1.06–1.36, p  = 0.00297); hematocrit (IVW: OR = 0.93, 95%CI: 0.76–1.14, p  = 0.499), hemoglobin concentration (IVW: OR = 0.95, 95%CI: 0.79–1.16, p  = 0.626), and basophil count (IVW: OR = 1.20, 95%CI: 0.90–1.59, p  = 0.215) were not causally associated with the risk of aSAH (Fig.  4 ). We also used other methods to evaluate whether there were causal relationships between hematocrit, hemoglobin concentration, basophil count, monocyte percentage of white cells and the risk of aSAH, and again no significance was observed ( Supplementary materials: Table S4 ). No heterogeneity or directional pleiotropy was observed in the replicated MR analyses ( Supplementary materials: Table S5 ). The leave-one-out analysis indicated that none of the individual IVs caused instability in the MR results for the association between monocyte percentage of white cells and aSAH (Fig.  5 A). The scatter plots of each SNP of monocyte percentage of white cells also showed a positive correlation with aSAH in the replicated MR analysis (Fig.  5 B).

Figure 4. Forest plot illustrating the causal effects of hematocrit, hemoglobin concentration, basophil counts, and monocyte percentage of white cells on aSAH in the replicated MR analysis. The IVW method was used as the primary analysis approach. All results are presented as OR along with their corresponding 95%CI. #: counts; %: percentage of white cells.

Figure 5. Leave-one-out analysis ( A ) and scatter plot ( B ) of the MR results of monocyte percentage of white cells on aSAH in the replicated MR analysis.

Meta-analysis of the preliminary and replicated MR analysis results further confirmed the causal association between monocyte percentage of white cells and risk of aSAH

We employed both a random effect model and a fixed effect model for the meta-analysis. As shown in Fig.  6 , hematocrit was not causally associated with the risk of aSAH in the random effect model (OR = 0.85, 95%CI: 0.70–1.02, p  = 0.085), whereas it was causally associated with the risk of aSAH in the fixed effect model (OR = 0.85, 95%CI: 0.74–0.98, p  = 0.022); hemoglobin concentration was not causally associated with the risk of aSAH in the random effect model (OR = 0.85, 95%CI: 0.68–1.06, p  = 0.158), but it was causally associated with the risk of aSAH in the fixed effect model (OR = 0.85, 95%CI: 0.74–0.98, p  = 0.024); basophil count was not causally associated with the risk of aSAH in either model (random effect model: OR = 0.92, 95%CI: 0.55–1.54, p  = 0.757; fixed effect model: OR = 0.93, 95%CI: 0.76–1.14, p  = 0.477); monocyte percentage of white cells was causally associated with the risk of aSAH in both models (random effect model: OR = 1.21, 95%CI: 1.10–1.32, p  < 0.001; fixed effect model: OR = 1.21, 95%CI: 1.10–1.32, p  < 0.001).

Figure 6. Forest plot of the meta-analysis of the preliminary and replicated MR analysis results. Both the random effect model and the fixed effect model were used in the meta-analysis. All results are presented as OR along with their corresponding 95%CI. #: counts; %: percentage of white cells.

Discussion

In this study, we performed two-sample MR analyses to investigate the causal relationship between 25 genetically determined hematologic traits and aSAH or uIA in the European population. The preliminary MR analysis revealed suggestive causal associations of hematocrit, hemoglobin concentration, basophil count, and monocyte percentage of white cells with the risk of aSAH, while the remaining 21 hematologic traits showed no causal association. None of the 25 hematologic traits showed a causal association with uIA.

In the replicated MR analysis, we focused on the four hematologic traits that exhibited suggestive causal associations with aSAH in the preliminary analysis. However, only the monocyte percentage of white cells demonstrated a consistent causal association with the risk of aSAH. The previously observed causal associations with hematocrit, hemoglobin concentration, and basophil count did not reach significance in this independent dataset. Furthermore, only the monocyte percentage of white cells reached significance in the meta-analysis using both the random effect and fixed effect models.

Hematocrit and hemoglobin concentration are commonly used clinical indicators to assess anemia in patients 31 , 32 . A previous clinical trial demonstrated the association between anemia following aSAH and poor long-term neurological function and patient mortality 16 . Preoperative anemia has also been identified as an independent risk factor for perioperative complications and prolonged hospital stay in patients with IA undergoing surgical intervention 33 . Lower hemoglobin levels have been linked to an increased risk of acute epilepsy after aSAH 34 . Furthermore, patients with sickle cell anemia have a higher incidence of aSAH compared to the general population, and these individuals exhibit varying degrees of anemia 35 .

In our study, we initially observed an inverse causal association between hematocrit, hemoglobin concentration, and the risk of aSAH in the preliminary MR analysis. However, these causal associations did not persist in the replicated MR analysis. The inconsistent results could be attributed to variations in data sources, including differences in sequencing platforms and depths. Notably, a large Swedish cohort study reported that while anemia was a predictor of major bleeding events (gastric/duodenal bleeding, any severe bleeding), it was not associated with intracranial hemorrhage 36 .

Based on our findings, we conclude that hematocrit and hemoglobin concentration are not causally associated with aSAH in this study. Further investigations leveraging the advancements in GWAS technology are warranted to better understand these relationships.

The inflammatory response has been implicated in the pathogenesis of IA and its rupture 37 , 38 . In this MR study, we aimed to explore the relationship between peripheral blood inflammatory indicators and the risk of aSAH and uIA. Our analysis revealed that only the monocyte percentage of white cells was causally associated with the risk of aSAH. Although the basophil count showed a suggestive causal association with aSAH in the preliminary MR analysis, this significance did not persist in the replicated MR analysis. Furthermore, none of the inflammatory indicators of peripheral blood were found to be causally associated with uIA.

Previous studies have demonstrated the presence of macrophages in the wall of IA, particularly in ruptured IAs in humans 12 . Macrophage infiltration has been linked to the loss of smooth muscle cells and degradation of matrix proteins in the IA wall, thereby increasing the risk of aneurysm rupture. In rodent models of IA, macrophages have also been observed in the IA wall, particularly CD68-positive macrophages 39 . Studies inhibiting monocyte chemotactic protein-1 (MCP-1), which plays a role in monocyte and eosinophil chemotaxis, have shown a marked reduction in IA occurrence and enlargement in rats 40 . Depleting macrophages using clodronate liposomes has also been shown to reduce IA formation in mice 18 . These findings collectively highlight the crucial role of monocytes/macrophages in the development and rupture of IA. Our findings in this study are consistent with previous research, supporting the notion that monocytes/macrophages play a significant role in the occurrence and rupture of IA. Although the monocyte count in peripheral blood was not causally associated with the risk of aSAH in our analysis, this further emphasizes the complex involvement of immune cells in the pathogenesis of IA.

Additionally, a recently published study demonstrated that neutrophils promote IA rupture through the formation of extracellular traps in mice 41 . Another clinical study reported higher plasma myeloperoxidase (MPO) concentration in the aneurysm compared to the femoral artery in patients with IA after endovascular coiling 42 . Moreover, the number of MPO-positive cells in IA wall was higher than in the superficial temporal artery 42 . However, in our MR analysis, both the neutrophil count and neutrophil percentage of white cells in peripheral blood were not causative factors for the formation of uIA or the risk of aSAH. These contradictory results may be due to the fact that the IVs used in this MR analysis study only represent the levels of neutrophil count or neutrophil percentage of white cells in peripheral blood and do not directly reflect the conditions in the aneurysm itself.

It is important to acknowledge several limitations in our MR analysis study. Firstly, MR analysis estimates lifelong effects rather than acute effects of exposures on outcomes. Secondly, our study focused on the European population, and further investigations are needed to explore the generalizability of these findings to other populations. Thirdly, the IVs used in this study only represented peripheral blood indicators and may not fully capture the specific characteristics of blood within the aneurysm. Differences between peripheral blood and aneurysmal blood could potentially influence the results.

Despite these limitations, our study contributes to the understanding of the role of hematologic traits and inflammatory indicators in the development and rupture of IA. Further research is needed to explore the underlying mechanisms and to validate these findings in larger and more diverse populations.

Conclusions

To our knowledge, this study represents the first MR analysis investigating the causal relationship between hematologic traits and the risk of aSAH and uIA in the European population. Our findings suggest that the monocyte percentage of white cells may be a causative factor for the risk of aSAH. This discovery enhances our understanding of the pathogenesis of IA and the associated risk of aSAH, while also offering a potential hematologic indicator for monitoring individuals with uIA to mitigate their risk of aSAH.

Data availability

Summary-level statistics of the 25 hematologic traits were obtained from the GWAS study published by Astle et al. 13 . Summary-level statistics of aSAH and uIA were collected from the GWAS meta-analysis published by Bakker et al. 24 and from FinnGen ( https://www.finngen.fi/en ). All data in this study can be provided by the corresponding author, Dr. Xuejun Li.

Feigin, V. L., Lawes, C. M., Bennett, D. A., Barker-Collo, S. L. & Parag, V. Worldwide stroke incidence and early case fatality reported in 56 population-based studies: A systematic review. Lancet Neurol. 8 (4), 355–369 (2009).

Galea, J. P., Dulhanty, L. & Patel, H. C. Uk, Ireland subarachnoid hemorrhage database C: Predictors of outcome in aneurysmal subarachnoid hemorrhage patients: Observations from a multicenter data set. Stroke 48 (11), 2958–2963 (2017).

Al-Khindi, T., Macdonald, R. L. & Schweizer, T. A. Cognitive and functional outcome after aneurysmal subarachnoid hemorrhage. Stroke 41 (8), e519-536 (2010).

Karhunen, V., Bakker, M. K., Ruigrok, Y. M., Gill, D. & Larsson, S. C. Modifiable risk factors for intracranial aneurysm and aneurysmal subarachnoid hemorrhage: A mendelian randomization study. J. Am. Heart Assoc. 10 (22), e022277 (2021).

Larsson, S. C. et al. Genetic predisposition to smoking in relation to 14 cardiovascular diseases. Eur. Heart J. 41 (35), 3304–3310 (2020).

Feigin, V. L. et al. Risk factors for subarachnoid hemorrhage: An updated systematic review of epidemiological studies. Stroke 36 (12), 2773–2780 (2005).

Zaremba, S. et al. Increased risk for subarachnoid hemorrhage in patients with sleep apnea. J. Neurol. 266 (6), 1351–1357 (2019).

Larsson, S. C. & Gill, D. Association of serum magnesium levels with risk of intracranial aneurysm: A mendelian randomization study. Neurology 97 (4), e341–e344 (2021).

Jensen, F. B. The dual roles of red blood cells in tissue oxygen delivery: Oxygen carriers and regulators of local blood flow. J. Exp. Biol. 212 (Pt 21), 3387–3393 (2009).

Jenne, C. N., Urrutia, R. & Kubes, P. Platelets: Bridging hemostasis, inflammation, and immunity. Int. J. Lab. Hematol. 35 (3), 254–261 (2013).

Varol, C., Mildner, A. & Jung, S. Macrophages: Development and tissue specialization. Annu. Rev. Immunol. 33 , 643–675 (2015).

Kataoka, K. et al. Structural fragility and inflammatory response of ruptured cerebral aneurysms. A comparative study between ruptured and unruptured cerebral aneurysms. Stroke 30 (7), 1396–1401 (1999).

Astle, W. J. et al. The allelic landscape of human blood cell trait variation and links to common complex disease. Cell 167 (5), 1415–1429 (2016).

Harshfield, E. L., Sims, M. C., Traylor, M., Ouwehand, W. H. & Markus, H. S. The role of haematological traits in risk of ischaemic stroke and its subtypes. Brain 143 (1), 210–221 (2020).

Naidech, A. M. et al. Higher hemoglobin is associated with less cerebral infarction, poor outcome, and death after subarachnoid hemorrhage. Neurosurgery 59 (4), 775–779 (2006).

Ayling, O. G. S., Ibrahim, G. M., Alotaibi, N. M., Gooderham, P. A. & Macdonald, R. L. Anemia after aneurysmal subarachnoid hemorrhage is associated with poor outcome and death. Stroke 49 (8), 1859–1865 (2018).

Wang, J. & Cao, Y. Characteristics of circulating monocytes at baseline and after activation in patients with intracranial aneurysm. Hum. Immunol. 81 (1), 41–47 (2020).

Kanematsu, Y. et al. Critical roles of macrophages in the formation of intracranial aneurysm. Stroke 42 (1), 173–178 (2011).

Soderholm, M., Zia, E., Hedblad, B. & Engstrom, G. Leukocyte count and incidence of subarachnoid haemorrhage: A prospective cohort study. BMC Neurol. 14 , 71 (2014).

McGirt, M. J. et al. Leukocytosis as an independent risk factor for cerebral vasospasm following aneurysmal subarachnoid hemorrhage. J. Neurosurg. 98 (6), 1222–1226 (2003).

Lawlor, D. A., Harbord, R. M., Sterne, J. A., Timpson, N. & Davey Smith, G. Mendelian randomization: Using genes as instruments for making causal inferences in epidemiology. Stat. Med. 27 (8), 1133–1163 (2008).

Davey Smith, G. & Hemani, G. Mendelian randomization: Genetic anchors for causal inference in epidemiological studies. Hum. Mol. Genet. 23 (R1), R89-98 (2014).

Burgess, S., Foley, C. N. & Zuber, V. Inferring causal relationships between risk factors and outcomes from genome-wide association study data. Annu. Rev. Genom. Hum. Genet. 19 , 303–327 (2018).

Bakker, M. K. et al. Genome-wide association study of intracranial aneurysms identifies 17 risk loci and genetic overlap with clinical risk factors. Nat. Genet. 52 (12), 1303–1313 (2020).

Burgess, S., Thompson, S. G. & Collaboration, C. C. G. Avoiding bias from weak instruments in Mendelian randomization studies. Int. J. Epidemiol. 40 (3), 755–764 (2011).

Kurki, M. I. et al. FinnGen provides genetic insights from a well-phenotyped isolated population. Nature 613 (7944), 508–518 (2023).

Burgess, S., Bowden, J., Fall, T., Ingelsson, E. & Thompson, S. G. Sensitivity analyses for robust causal inference from mendelian randomization analyses with multiple genetic variants. Epidemiology 28 (1), 30–42 (2017).

Verbanck, M., Chen, C. Y., Neale, B. & Do, R. Detection of widespread horizontal pleiotropy in causal relationships inferred from Mendelian randomization between complex traits and diseases. Nat. Genet. 50 (5), 693–698 (2018).

Burgess, S. & Thompson, S. G. Interpreting findings from Mendelian randomization using the MR-Egger method. Eur. J. Epidemiol. 32 (5), 377–389 (2017).

Hemani, G. et al. The MR-Base platform supports systematic causal inference across the human phenome. Elife 2018 , 7 (2018).

Marn, H. & Critchley, J. A. Accuracy of the WHO Haemoglobin Colour Scale for the diagnosis of anaemia in primary health care settings in low-income countries: A systematic review and meta-analysis. Lancet Glob. Health 4 (4), e251-265 (2016).

Elwood, P. C. Anaemia. Lancet 2 (7893), 1364–1365 (1974).

Seicean, A. et al. Risks associated with preoperative anemia and perioperative blood transfusion in open surgery for intracranial aneurysms. J. Neurosurg. 123 (1), 91–100 (2015).

Zheng, S. F. et al. Lower serum iron and hemoglobin levels are associated with acute seizures in patients with ruptured cerebral aneurysms. Neurocrit. Care 31 (3), 501–506 (2019).

Anson, J. A., Koshy, M., Ferguson, L. & Crowell, R. M. Subarachnoid hemorrhage in sickle-cell disease. J. Neurosurg. 75 (4), 552–558 (1991).

Friberg, L., Rosenqvist, M. & Lip, G. Y. Evaluation of risk stratification schemes for ischaemic stroke and bleeding in 182 678 patients with atrial fibrillation: The Swedish Atrial Fibrillation cohort study. Eur. Heart J. 33 (12), 1500–1510 (2012).

Hosaka, K. & Hoh, B. L. Inflammation and cerebral aneurysms. Transl. Stroke Res. 5 (2), 190–198 (2014).

Chyatte, D., Bruno, G., Desai, S. & Todor, D. R. Inflammation and intracranial aneurysms. Neurosurgery 45 (5), 1137–1146 (1999).

Aoki, T., Kataoka, H., Morimoto, M., Nozaki, K. & Hashimoto, N. Macrophage-derived matrix metalloproteinase-2 and -9 promote the progression of cerebral aneurysms in rats. Stroke 38 (1), 162–169 (2007).

Aoki, T. et al. Impact of monocyte chemoattractant protein-1 deficiency on cerebral aneurysm formation. Stroke 40 (3), 942–951 (2009).

Korai, M. et al. Neutrophil extracellular traps promote the development of intracranial aneurysm rupture. Hypertension 77 (6), 2084–2093 (2021).

Chu, Y. et al. Myeloperoxidase is increased in human cerebral aneurysms and increases formation and rupture of cerebral aneurysms in mice. Stroke 46 (6), 1651–1656 (2015).

Acknowledgements

The authors thank all participants and researchers of the UK biobank, the ISGC Intracranial Aneurysm working group, and the FinnGen study.

This work was supported by the National Natural Science Foundation of China (No. 81770781 and No. 82270825), Special funds for innovation in Hunan Province (No. 2020SK2062), and High talent project of Hunan Province (No. 2022WZ1031).

Author information

Authors and affiliations

Department of Radiology, Xiangya Hospital, Central South University, 87 xiangya road, Changsha, Hunan, China

Department of Neurosurgery, Xiangya Hospital, Central South University, 87 xiangya road, Changsha, Hunan, China

Abraham Ayodeji Adegboro, Yanwen Li, Hongwei Liu & Xuejun Li

Hunan International Scientific and Technological Cooperation Base of Brain Tumor Research, Xiangya Hospital, Central South University, Changsha, China

Kang Peng, Abraham Ayodeji Adegboro, Yanwen Li, Hongwei Liu & Xuejun Li

Department of Neurosurgery, People’s Hospital of Wangcheng District, Changsha, 410200, Hunan, China

Contributions

KP collected the data and performed the analyses. KP, AA, YL, and HL wrote the manuscript. BX and XL conceived the original idea and supervised the project. All authors reviewed and approved the final version of the manuscript.

Corresponding authors

Correspondence to Biao Xiong or Xuejun Li .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary tables.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Peng, K., Adegboro, A.A., Li, Y. et al. The association between hematologic traits and aneurysm-related subarachnoid hemorrhage: a two-sample mendelian randomization study. Sci Rep 14 , 11694 (2024). https://doi.org/10.1038/s41598-024-62761-1

Received : 16 August 2023

Accepted : 21 May 2024

Published : 22 May 2024

DOI : https://doi.org/10.1038/s41598-024-62761-1

  • Hematologic traits
  • Intracranial aneurysm
  • Mendelian randomization
  • Subarachnoid hemorrhage

Korean J Anesthesiol, v.72(3), 2019 Jun

Randomization in clinical studies

Chi-Yeon Lim

1 Department of Biostatistics, Dongguk University College of Medicine, Goyang, Korea

2 Department of Anesthesiology and Pain Medicine, Dongguk University Ilsan Hospital, Goyang, Korea

Randomized controlled trials are widely accepted as the best design for evaluating the efficacy of a new treatment because of the advantages of randomization (random allocation). Randomization eliminates accidental bias, including selection bias, and provides a base for allowing the use of probability theory. Despite its importance, randomization is often not properly understood. This article introduces the different randomization methods with examples: simple randomization; block randomization; adaptive randomization, including minimization; and response-adaptive randomization. Ethics related to randomization are also discussed. The article should help readers understand the basic concepts of randomization and how to use R software to implement them.

Introduction

Statistical inference in clinical trials is a mandatory process to verify the efficacy and safety of drugs, medical devices, and procedures. It allows the results observed in a sample to be generalized, so obtaining the sample by random sampling is very important. A randomized controlled trial (RCT) comparing effects among study groups is planned so as to avoid any bias at the stage of designing the study protocol. Randomization (or random allocation of subjects) can mitigate these biases with its randomness, which implies no rule or predictability in allocating subjects to treatment and control groups.

Another property of randomization is that it promotes comparability of the study groups and serves as a basis for statistical inference in the quantitative evaluation of the treatment effect. Randomization creates similarity between groups: all factors, whether known or unknown, that may affect the outcome can be similarly distributed among the groups. This similarity is very important and allows for statistical inferences on the treatment effects. It also ensures that factors other than treatment do not affect the outcome. If the outcomes of the treatment group and control group differ, the treatment will be the only systematic difference between the groups, leading to the conclusion that the difference is treatment induced [ 1 ].

CONSORT 1 ) , a set of guidelines proposed to improve the completeness of clinical study reports, also covers randomization. Randomization plays a crucial role in increasing the quality of evidence-based studies by minimizing the selection bias that could affect the outcomes. In general, randomization involves programming for random number generation, concealment of the random allocation for security, and a separate random-code manager; the generated randomization list is then implemented in the study [ 2 ]. Randomization is based on probability theory and hence can be difficult to understand. Moreover, making it reproducible requires the use of a computer programming language. This article tries to alleviate these difficulties by enabling even a non-statistician to understand randomization for a comparative RCT design.

Methods of Randomization

The method of randomization applied must be determined at the planning stage of a study. “Randomness” cannot be predicted because it involves no rule, constraint, or characteristic. Randomization can minimize the predictability of which treatment will be performed. The method described here is called simple randomization (or complete randomization). However, the absence of rules, constraints, or characteristics does not completely eliminate imbalances arising by chance. For example, assume that in a multicenter study all subjects are randomly allocated to treatment or control groups. If subjects from center A are mainly allocated to the control group and most subjects from center B are allocated to the treatment group, even under simple randomization, can we ignore the imbalance in the allocation ratio at each center?

For another example, if the majority of subjects in the control group were recruited early in the study and/or the majority of those in the treatment group were recruited later, can the resulting chronological bias be ignored? The imbalance in simple randomization is often resolved through restricted randomization, which imposes mild constraints on the allocation [ 3 , 4 ]. Furthermore, adaptive randomization can change the allocation of subjects to reflect the prognostic factors or the response to therapy during the study. The use of adaptive randomization has been increasing in recent times, but simple or restricted randomization continues to be widely used [ 4 ]. The Appendix provides R commands for the various randomization methods described below.

Simple randomization

In simple randomization, a coin or a die roll, for example, may be used to allocate subjects to a group. The best part of simple randomization is that it minimizes any bias by eliminating predictability. Furthermore, each subject maintains complete randomness and independence with regard to the treatment administered [ 5 ]. This method is easy to understand and apply, 2 ) but it cannot prevent the imbalances in sample size or prognostic factors that are likely to occur as the number of subjects in the study decreases. If the allocation ratio departs from 1 : 1, the power of the study falls even with the same total number of subjects. In a study involving a total of 40 subjects in two groups, if 20 subjects are allocated to each group, the power is 80%; this falls to 77% for a 25/15 allocation and 67% for a 30/10 allocation ( Fig. 1 ). 3 ) In addition, it would be difficult to consider a 25/15 or 30/10 allocation as aesthetically balanced; 4 ) in other words, balanced numbers of subjects seem plausible to both researchers and readers. Unfortunately, the nature of simple randomization rarely lets the numbers of subjects in the two groups be exactly equal [ 6 ]. Therefore, an allocation is considered balanced if it does not fall outside a prespecified range of the assignment ratio (e.g., 45%–55%). 5 ) As the total number of subjects increases, the probability of departing from this ratio, that is, the probability of imbalance, decreases. Fig. 2 shows the probability of imbalance as a function of the total number of subjects in a two-group study with an assignment ratio of 45%–55%. If the total number of subjects is 40, the probability of imbalance is 52.7% ( Fig. 2 , point A), but this decreases to 15.7% for 200 subjects ( Fig. 2 , point B) and 4.6% for 400 subjects ( Fig. 2 , point C). Simple randomization is therefore the method recommended for large-scale clinical trials, because the likelihood of imbalance in trials with a small number of subjects is high [ 6 – 8 ]. 6 ) However, as the number of subjects cannot always be increased, other solutions need to be considered. Block randomization helps resolve the imbalance in the number of subjects, while stratified randomization and adaptive randomization help resolve the imbalance in prognostic factors.

Fig. 1. Influence of sample size ratio in two groups on power (difference (d) = 0.9, two-tailed, significance level = 0.05). The dashed line indicates the same sample size in the two groups (n = 20) and maximized power.

Fig. 2. Probability curves of imbalance between two groups for complete randomization as a function of total sample size (n). When n = 40, there is a 52.7% chance of imbalance beyond 10% (allocation ratio 45%–55%) (point A). When n = 200, there is a 15.7% chance of imbalance (point B), but n = 400 results in only a 4.6% chance of imbalance (point C).
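A minimal sketch of simple randomization in R, in the spirit of the Appendix commands (the seed is arbitrary):

    set.seed(2019)                                    # for reproducibility
    group <- sample(c("A", "B"), 40, replace = TRUE)  # a coin toss per subject
    table(group)                                      # rarely exactly 20/20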

Block randomization

If we consider only the balance in the number of subjects in a study involving two treatment groups A and B, then A and B could simply be allocated in a fixed repeating sequence. Here, selection bias is inevitable because a researcher or subject can easily predict the group allocation. With simple randomization, on the other hand, the numbers in the treatment groups will not stay equal as a small study progresses, and the statistical analysis may suffer from poor power. To avoid both problems, we set blocks for randomization and balance the number of subjects within each block. 7 ) When using blocks, we apply multiple blocks and randomize within each block. At the end of each block, the number of subjects is balanced, and the maximum imbalance in the study can be limited to an appropriate level. That is, block randomization has the advantage of increasing the comparability between groups by keeping the ratio of the number of subjects between groups almost constant. However, if the block size is 2, the allocation of the second subject in each block can easily be predicted, with a high risk of observation bias. 8 ) Therefore, the block size used should preferably be 4 or more. Note, however, that even a large block size carries a risk of selection bias if the size is known to the researcher, because the treatment of the last subject in each block is then revealed. To reduce the predictability arising from the use of one block size, the size may be varied. 9 )
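A sketch of permuted-block randomization with the block size varied at random between 4 and 6; the helper name is illustrative:

    block_randomize <- function(n, block_sizes = c(4, 6)) {
      alloc <- character(0)
      while (length(alloc) < n) {
        b <- sample(block_sizes, 1)                         # vary block size
        alloc <- c(alloc, sample(rep(c("A", "B"), b / 2)))  # balanced block
      }
      alloc[1:n]
    }
    set.seed(2019)
    block_randomize(20)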

Restricted randomization for unbalanced allocation

Sometimes unbalanced allocation becomes necessary for ethical or cost reasons [ 9 ]. Furthermore, if you expect a high dropout rate in a particular group, more subjects have to be allocated to that group. For example, for patients with terminal cancer who are not treated with conventional anticancer agents, it would be both ethical and helpful to recruit more of those who would receive a newly developed anticancer drug [ 10 ] (of course, contrary to expectations, the drug could be harmful).

Under simple randomization, the allocation probability is first set according to the desired ratio between the groups, and then the subjects are allocated. If the ratio between group A and group B is 2 : 1, the probability of group A is 2/3 and that of group B is 1/3. Block randomization often uses a jar model with a random allocation rule: first drop as many balls as the number of subjects into the jar according to the group allocation ratio (the balls have different colors depending on the group). Whenever you allocate a subject, randomly take out one ball, note its color, and do not place the ball back into the jar (random sampling without replacement). Repeat this allocation for each block.
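Both devices in one sketch: simple randomization with unequal probabilities, and the jar model as sampling without replacement within blocks of size 3:

    sample(c("A", "B"), 12, replace = TRUE, prob = c(2/3, 1/3))  # simple, 2:1
    as.vector(replicate(4, sample(c("A", "A", "B"))))            # jar model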

Stratified randomization

Some studies have prognostic factors or covariates that affect the study outcome as well as the treatment. Researchers hope to balance the prognostic factors between the study groups, but randomization does not eliminate all imbalances in prognostic factors. Stratified randomization refers to the situation where strata are formed based on the levels of prognostic factors or covariates. For example, if “sex” is the chosen prognostic factor, the number of strata is two (male and female), and randomization is applied within each stratum. When a male subject participates, the subject is first assigned to the male stratum, and the group (treatment group, control group, etc.) is determined through the randomization applied to that stratum. In a multicenter study, one typical prognostic factor is the “site.” This may be due to the differences in characteristics between the subjects and in the manner and procedures by which patients are treated in each hospital.

Stratification can reduce imbalances and increase statistical power, but it has certain problems. If several important prognostic factors affect the outcome, the number of strata increases [ 11 ]. For example, 12 (2 × 2 × 3) strata are formed from just the recruitment hospital (sites 1 and 2), sex (male and female), and age band (under 20 years, 20–64 years, and 65 years and older) ( Fig. 3 ). If there are many strata relative to the target sample size, the number of subjects in some strata may be empty or sparse, which causes an imbalance 10 ) in the number of subjects allocated to the treatment groups. To reduce this risk, the prognostic factors should be carefully selected. These prognostic factors should be considered again during the statistical analysis and at the end of the study.

Fig. 3. Example of stratification with three prognostic factors (site, sex, and age band). Eventually, randomization with 12 strata should be accomplished using 12 separate randomization processes. C: control group, T: treatment group.
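A sketch with two stratification factors, each stratum getting its own independent permuted-block list (the names are illustrative):

    permuted_blocks <- function(n_blocks, block_size = 4)
      as.vector(replicate(n_blocks, sample(rep(c("A", "B"), block_size / 2))))
    strata <- expand.grid(site = c("site1", "site2"),
                          sex  = c("male", "female"))
    lists <- lapply(seq_len(nrow(strata)), function(i) permuted_blocks(3))
    names(lists) <- paste(strata$site, strata$sex, sep = "_")
    lists[["site2_male"]]   # allocation sequence for male subjects at site 2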

Adaptive randomization

Adaptive randomization is a method of changing the allocation probability according to the progress and status of the study. It may be used to minimize the imbalance between treatment groups as well as to change the allocation probability based on the therapeutic effect. Covariate-adaptive randomization adjusts the allocation of each subject to reduce the imbalance, taking into account the imbalance of the prognostic factors. One example is minimization, which develops an indicator that collectively measures the distributional imbalance of several prognostic factors and allocates each subject so as to minimize that imbalance.

Minimization 11 )

Minimization was first introduced as a covariate-adaptive method to balance the prognostic factors [ 12 , 13 ]. The first subject is allocated through simple randomization, and subsequent subjects are allocated so as to balance the prognostic factors. In other words, the information from the subjects already in the study is used to allocate each newly recruited subject and minimize the imbalance of the prognostic factors [ 14 ].

Several methods have emerged following Taves [ 13 ]. Pocock and Simon defined a more general method [ 12 ]. 12 ) First, the total imbalance is calculated after virtually allocating a newly recruited subject to each group in turn, so that each group has its own total imbalance score. The subject is then allocated to the group with the lowest total imbalance.

We next proceed with a virtual allocation using the recruitment hospital (sites 1 and 2), sex (male and female), and age band (under 20 years, 20–64 years, and 65 years or older) as prognostic factors. This study has two groups: a treatment group and a control group.

Assume that the first subject (male, 52 years old) was recruited from Site 2. Because this subject is the first one, the allocation is determined by simple randomization.

Further, assume that the subject is allocated to the treatment group. In this group, scores are added to site 2, sex male, and the 20–64 age band ( Table 1 ). Next, assume that the second subject (female, 25 years old) is recruited through Site 2. Calculate the total number of imbalances when this subject is allocated to the treatment group and to the control group: add the appropriate scores to each factor level within each group, and sum the differences between the groups across factors.

How Adaptive Randomization Using Minimization Works

The score in each factor is 0. The first patient (sex male, 52 yr, from site 2) is allocated to the treatment group through simple randomization. Therefore, site 2, sex male, and the 20–64 years age band in the treatment group receive the score.

First, the total number of imbalances when the subject is allocated to the control group is 1.

The total number of imbalances when the subject is allocated to the treatment group is 5.

Since the total number of imbalances is lower when the subject is allocated to the control group (1 < 5), the second subject is allocated to the control group, and the score is added to site 2, sex female, and the 20–64 age band in the control group ( Table 2 ). Next, the third subject (site 1, male, 17 years old) is recruited.

The second patient has factors sex female, 25 yr, and site 2. If this patient is allocated to the control group, the total imbalance is 1. If this patient is allocated to the treatment group, the total imbalance is 5. Therefore, this patient is allocated to the control group, and site 2, sex female, and the 20–64 years age band in the control group receive the score.

Now, the total number of imbalances when the subject is allocated to the control group is 2, and when the subject is allocated to the treatment group it is 4.

The total number of imbalances is lower when the subject is allocated to the control group (2 < 4). Therefore, the third subject is allocated to the control group, and the score is added to site 1, sex male, and the < 20 age band ( Table 3 ). The subjects are allocated and the scores added in this manner. Now, assume that the study continues, and the 15th subject (female, 74 years old) is recruited from Site 2.

The third patient has factors sex male, 17 yr, and site 1. If this patient is allocated to the control group, the total imbalance is 2. If this patient is allocated to the treatment group, the total imbalance is 4. Therefore, this patient is allocated to the control group, and then site 1, sex male, and the < 20 age band in the control group receive the score.

Here, the total number of imbalances when the subject is allocated to the control group is 3, and when the subject is allocated to the treatment group it is 5.

The total number of imbalances is lower when the subject is allocated to the control group (3 < 5). Therefore, the 15th subject is allocated to the control group, and the score is added to site 2, sex female, and the ≥ 65 age band ( Table 4 ). If the total numbers of imbalances are tied during the minimization technique, the allocation is determined by simple randomization.

The 15th patient has factors sex female, 74 yr, and site 2. If this patient is allocated to the control group, the total imbalance is 3. If this patient is allocated to the treatment group, the total imbalance is 5. Therefore, this patient is allocated to the control group, and site 2, sex female, and the ≥ 65 age band in the control group receive the score.
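The bookkeeping above can be automated. The sketch below reproduces the worked example, using the within-factor range across groups as the imbalance measure (one of the Pocock–Simon options); all function and variable names are illustrative:

    factors <- list(site = c("site1", "site2"),
                    sex  = c("male", "female"),
                    age  = c("<20", "20-64", ">=65"))
    counts <- lapply(factors, function(lv)
      matrix(0, 2, length(lv), dimnames = list(c("T", "C"), lv)))
    # Total imbalance if 'patient' were virtually allocated to group g:
    imbalance_if <- function(counts, patient, g) {
      total <- 0
      for (f in names(patient)) {
        tab <- counts[[f]][, patient[[f]]]   # current scores at this level
        tab[g] <- tab[g] + 1                 # virtual allocation
        total <- total + (max(tab) - min(tab))
      }
      total
    }
    allocate <- function(counts, patient) {
      imb <- sapply(c("T", "C"), function(g) imbalance_if(counts, patient, g))
      if (imb["T"] == imb["C"]) sample(c("T", "C"), 1)  # tie: randomize
      else names(which.min(imb))
    }
    update_counts <- function(counts, patient, g) {
      for (f in names(patient))
        counts[[f]][g, patient[[f]]] <- counts[[f]][g, patient[[f]]] + 1
      counts
    }
    # First patient (site 2, male, 52 yr) went to the treatment group:
    counts <- update_counts(counts,
                            list(site = "site2", sex = "male", age = "20-64"), "T")
    # Second patient (site 2, female, 25 yr):
    p2 <- list(site = "site2", sex = "female", age = "20-64")
    sapply(c("T", "C"), function(g) imbalance_if(counts, p2, g))
    counts <- update_counts(counts, p2, allocate(counts, p2))

Run as written, the virtual totals match the worked example: 5 for the treatment group versus 1 for the control group, so the second subject goes to the control group.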

Although minimization is designed to overcome the disadvantages of stratified randomization, it also has drawbacks. A concern from a statistical point of view is that it does not satisfy randomness, which is the basic assumption of statistical inference [ 15 , 16 ]. For this reason, analysis of covariance or a permutation test has been proposed [ 13 ]. Furthermore, exposure of the subjects’ information can make the allocation of the next subject predictable to a certain degree. The calculation process is complicated, but it can be carried out with various programs.

Response-adaptive randomization

So far, the randomization methods have assumed that the variances of the treatment effects are equal in each group; the number of subjects in both groups is determined under this assumption. However, what happens if, when analyzing the data accruing as the study progresses, the variances of the treatment effects turn out not to be the same? In that case, could the number of subjects initially determined be reduced without loss of statistical power? In other words, should the allocation probabilities determined prior to the study remain constant throughout the study, or is it possible to change them during the study by using the accruing data? And if one treatment turns out to be inferior during the study, would it not be advisable to reduce the number of subjects allocated to that group [ 17 , 18 ]?

An example of response-adaptive randomization is the randomized play-the-winner rule. Here, the first subject is allocated by predefined randomization; if this patient’s response is a “success,” the next patient is allocated to the same treatment group; otherwise, the next patient is allocated to the other treatment. That is, this method rests on statistical reasoning that is not possible under a fixed allocation probability and on the ethics of allowing more patients to be allocated to the treatment that benefits them. However, the method can lead to imbalances between the treatment groups. In addition, if a clinical study takes a very long time to obtain each patient’s response, this method cannot be recommended.
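A sketch of the urn version of this rule; the response probabilities are hypothetical:

    play_the_winner <- function(n, p_success = c(A = 0.7, B = 0.4)) {
      urn <- c("A", "B")             # one ball per treatment to start
      alloc <- character(n)
      for (i in seq_len(n)) {
        alloc[i] <- sample(urn, 1)   # draw a ball to allocate the patient
        success <- runif(1) < p_success[alloc[i]]
        # success: add a ball for the same treatment; failure: for the other
        urn <- c(urn, if (success) alloc[i] else setdiff(c("A", "B"), alloc[i]))
      }
      alloc
    }
    set.seed(2019)
    table(play_the_winner(40))   # the better treatment tends to get more patients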

Ethics of Randomization

As noted earlier, an RCT is a scientific study design based on allocating subjects to treatment groups by probability in order to ensure comparability, form the basis of statistical inference, and identify the effects of treatment. However, it deserves ethical debate whether the treatment for a subject, especially a patient, should be determined by probability rather than by the physician. Nonetheless, the decision should preferably be made by probability, because clinical trials have the distinct goal of investigating the efficacy and safety of new medicines, medical devices, and procedures, rather than merely reaching therapeutic conclusions. The purpose of the study is to maintain objectivity, which is why prejudice and bias must be excluded. Only an unconstrained attitude during the study can confirm that a particular medicine, medical device, or procedure is effective or safe.

Consider this from another perspective. If the researcher maintains an unconstrained attitude, and the subject receives all the information, understands it, and decides to participate voluntarily, is the clinical study ethical? Unfortunately, this is not easy to answer. Participation in a clinical study may provide the subject with the benefit of treatment, but it may also be risky. Furthermore, the subject may be given a placebo rather than an active treatment. Ultimately, the subject may be forced to make personal sacrifices for an ambiguous benefit; in other words, some subjects have to undergo further treatment, representing the cost that society pays for the benefit of future patients or of a larger number of subjects [4,19]. This ethical dilemma over the balance between individual ethics and collective ethics [20] still generates much controversy. If, in addition, the researcher is biased, the controversy becomes even more confused and the reliability of the study is lowered. Therefore, randomization is a key factor in any study that must clarify causality through comparison.

Conclusions

Some studies describe only that a random table was used for randomization. However, if accurate information on the randomization procedure is not provided, it is difficult to have enough confidence in the study to accept its conclusions. Furthermore, probability-based treatment is permitted in the hope that the trial will be conducted through proper processes and that the outcome will ultimately benefit the medical profession. At the same time, it should be fully appreciated that the contribution of the subjects involved in this process is a social cost.

1) http://www.consort-statement.org/

2) However, since the results and process of randomization cannot be easily recorded, the audit of randomization is difficult.

3) Two-tailed test with difference (d) = 0.91 and type 1 error of 0.05.

4) “Cosmetic credibility” is often used.

5) The difference in number of subjects does not exceed 10% of the total number of subjects. This range is determined by a researcher, who is also able to choose 20% instead of 10%.

6) These references recommend 200 or more subjects, but it is not possible to determine the exact number.

7) Random allocation rule, truncated binomial randomization, Hadamard randomization, and the maximal procedure are forced balance randomization methods within blocks, and one of them is applied to the block. The details are beyond the scope of this study, and are therefore not covered.

8) The block size of 2 applies mainly to a study of allocating a pair at the same time.

9) Strictly speaking, the block size is randomly selected from a discrete uniform distribution, so describing this as a random block design rather than a "varying" block size would be the more formal usage (see the sketch after these notes).

10) As the number of strata increases, the imbalance increases due to various factors. The details are beyond the scope of this study.

11) This paragraph introduces how to allocate “two” groups.

12) Weights can be set on the variables, or an allowable range can be set for the total number of imbalances; in this study, however, we set neither weights nor an allowable range.
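
To make footnotes 7–9 concrete, here is a minimal sketch (our own illustration, with assumed block sizes of 2, 4, and 6) of permuted-block randomization in which each block's size is drawn from a discrete uniform distribution and allocation is forced to balance within the block:

```python
import random

def permuted_blocks(n_subjects, block_sizes=(2, 4, 6)):
    """Permuted-block randomization with the block size drawn at random
    (discrete uniform over `block_sizes`) for each new block; a sketch
    of the random block design described in footnote 9."""
    allocation = []
    while len(allocation) < n_subjects:
        size = random.choice(block_sizes)      # random even block size
        block = ["control", "treatment"] * (size // 2)
        random.shuffle(block)                  # force balance within the block
        allocation.extend(block)
    return allocation[:n_subjects]

print(permuted_blocks(10))
```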

No potential conflict of interest relevant to this article was reported.

Authors’ contribution

Chi-Yeon Lim (Software; Supervision; Validation; Writing – review & editing)

Junyong In (Conceptualization; Software; Visualization; Writing – original draft; Writing – review & editing)

  • Open access
  • Published: 18 May 2024

Appendicular lean mass and the risk of stroke and Alzheimer’s disease: a mendelian randomization study

  • Yueli Zhu 1,2,
  • Feng Zhu 1,2,
  • Xiaoming Guo 3,
  • Shunmei Huang 1,2,
  • Yunmei Yang 1,2 &
  • Qin Zhang 1,2

BMC Geriatrics volume 24, Article number: 438 (2024)


Appendicular lean mass (ALM) is a good predictive biomarker for sarcopenia. Previous studies have reported associations between ALM and stroke or Alzheimer's disease (AD); however, the causal relationship is still unclear. The purpose of this study was to evaluate whether genetically predicted ALM is causally associated with the risk of stroke and AD by performing Mendelian randomization (MR) analyses.

A two-sample MR study was designed. Genetic variants associated with the ALM were obtained from a large genome-wide association study (GWAS) and utilized as instrumental variables (IVs). Summary-level data for stroke and AD were generated from the corresponding GWASs. We used random-effect inverse-variance weighted (IVW) as the main method for estimating causal effects, complemented by several sensitivity analyses, including the weighted median, MR-Egger, and MR-pleiotropy residual sum and outlier (MR-PRESSO) methods. Multivariable analysis was further conducted to adjust for confounding factors, including body mass index (BMI), type 2 diabetes mellitus (T2DM), low density lipoprotein-C (LDL-C), and atrial fibrillation (AF).

The present MR study indicated significant inverse associations of genetically predicted ALM with any ischemic stroke ([AIS], odds ratio [OR], 0.93; 95% confidence interval [CI], 0.89–0.97; P = 0.002) and AD (OR, 0.90; 95% CI, 0.85–0.96; P = 0.001). Regarding the subtypes of AIS, genetically predicted ALM was related to the risk of large artery stroke ([LAS], OR, 0.86; 95% CI, 0.77–0.95; P = 0.005) and small vessel stroke ([SVS], OR, 0.80; 95% CI, 0.73–0.89; P < 0.001). In the multivariable MR analysis, ALM retained a stable effect on AIS when adjusting for BMI, LDL-C, and AF, while a suggestive association was observed after adjusting for T2DM. The estimated effect of ALM on LAS was significant after adjustment for BMI and AF, with a suggestive association after adjusting for T2DM and LDL-C. The estimated effects of ALM on SVS and AD remained significant after adjustment for BMI, T2DM, LDL-C, and AF.

Conclusions

The two-sample MR analysis indicated that genetically predicted ALM was negatively related to AIS and AD. Subgroup analysis of AIS revealed negative causal effects of genetically predicted ALM on LAS and SVS. Future studies are required to further investigate the underlying mechanisms.


Introduction

Sarcopenia, characterized by loss of skeletal muscle mass and strength, is a geriatric syndrome that has been reported to be related to an increased risk of many adverse outcomes, including physical disability, poor quality of life, and even death [1,2]. It is therefore of great value to investigate the potential link between sarcopenia and aging-related diseases, which would contribute to early diagnosis and timely intervention.

Stroke is now a leading cause of mortality and disability, especially in low- and middle-income countries [3]. It has been reported that prestroke sarcopenia can affect stroke severity in elderly patients [4]. In addition, prestroke sarcopenia was an independent predictor of poorer functional outcome at 3 months after acute stroke [5].

As for another aging-related disease, Alzheimer's disease (AD) is the most prevalent neurodegenerative disease and the major cause of dementia. A close relationship between sarcopenia and cognitive impairment has been observed [6,7], and the prevalence of cognitive impairment was 40% in patients with sarcopenia [6].

However, the causal effects of sarcopenia on stroke and AD remain unclear, as inferring them from observational studies is challenging given the inherent risk of bias due to confounding or reverse causality. Appendicular lean mass (ALM) is the sum of the lean mass of both arms and legs and is a major index used to define sarcopenia [8]. Recently, a genome-wide association study (GWAS) identified ALM-associated single-nucleotide polymorphisms (SNPs) [9], providing an opportunity to explore the causal associations of ALM with the risk of stroke and AD by performing Mendelian randomization (MR) analyses.

MR is a powerful approach for evaluating the causal links between clinical exposures and outcomes [ 10 ]. Genetic variants associated with the exposures are employed as instrumental variables (IVs) [ 10 ]. Since alleles are randomly assigned to the offspring and can remain constant after conception, the MR approach can avoid some limitations of conventional observational studies and reduce the influence of unmeasured confounding and reverse causality. Hence, in the present study, we aimed to use the two-sample MR analysis to elucidate the causal relationships between genetically predicted ALM and the risk of stroke subtypes (including large artery stroke [LAS], small vessel stroke [SVS], and cardioembolic stroke [CES]) as well as AD.

Study design

A two-sample MR study was performed to evaluate the causal effects of ALM on the risk of stroke and AD (Fig. 1). The present MR study rests on three key assumptions [11]: first, the selected SNPs are associated with ALM; second, the SNPs are not associated with confounders; and third, the SNPs affect the risk of stroke and AD only through ALM and not through other pathways.

figure 1

Schematic representation of Mendelian randomization analysis. SNP, single-nucleotide polymorphism; ALM, appendicular lean mass; AD, Alzheimer’s disease

Ethics approval

All analyses of this study were based on the publicly available data, and ethical approval had been obtained in the original studies.

Selection of IVs for ALM

We used a GWAS of ALM to identify independent SNPs significantly associated with ALM in 450,243 UK Biobank participants of European ancestry (Table 1) [9]. In this GWAS, ALM was measured using bioelectrical impedance analysis (BIA) as the sum of fat-free mass of the arms and legs [9]. A total of 1059 SNPs associated with ALM (P < 5.0 × 10⁻⁹) were obtained for the analyses, which together explained 15.5% of the phenotypic variance. The F statistic was used to evaluate the weak-instrument bias of each SNP using the formula F = R² × (N − 2) / (1 − R²), where R² is the proportion of the variance of ALM explained by the SNP and N is the sample size [12]. The R² of each SNP was calculated as R² = 2 × EAF × (1 − EAF) × Beta², where EAF is the effect allele frequency. An F statistic > 10 indicated that the selected SNP could be regarded as a strong IV. These ALM-associated SNPs were further tested for linkage disequilibrium. Finally, 810 of these SNPs passed the selection criteria (r² < 0.1; region size, 3000 kb) and were included in the MR analysis. Proxy SNPs in linkage disequilibrium (r² > 0.8) were searched online ( http://snipa.helmholtz-muenchen.de/snipa3/ ) and used if the ALM-associated SNPs were not available in the stroke or AD datasets (Supplementary Table 1).
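
Put as a short calculation, the per-SNP instrument-strength screen described above is straightforward; the sketch below uses made-up inputs, not values from the study:

```python
def snp_r2(eaf, beta):
    # R^2 = 2 * EAF * (1 - EAF) * Beta^2, with Beta on a standardized scale
    return 2 * eaf * (1 - eaf) * beta ** 2

def f_statistic(r2, n):
    # F = R^2 * (N - 2) / (1 - R^2); F > 10 is the usual strong-instrument cutoff
    return r2 * (n - 2) / (1 - r2)

# Hypothetical SNP: effect allele frequency 0.30, standardized beta 0.02,
# in a GWAS sample of N = 450,243.
r2 = snp_r2(0.30, 0.02)
print(f_statistic(r2, 450_243) > 10)  # keep the SNP only if this is True
```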

Outcomes data sources

Summary statistics for the associations between the ALM-related genetic variants and stroke were extracted from the MEGASTROKE consortium, which included 34,217 ischemic stroke cases and 406,111 controls of European ancestry (Table 1) [13]. In this GWAS, the 34,217 ischemic stroke cases were further classified as LAS (n = 4373), SVS (n = 5386), and CES (n = 7193) according to the Trial of ORG 10172 in Acute Stroke Treatment (TOAST) criteria [13]. Genetic variants were measured and imputed in dosage format using an additive genetic model with at least sex and age as covariates [13].

Summary statistics for the associations between the ALM-related genetic variants and AD were obtained from a GWAS meta-analysis of the International Genomics of Alzheimer's Project (IGAP) stage 1 discovery study, with 21,982 cases and 41,944 cognitively normal controls of European ancestry (Table 1) [14]. All stage 1 samples came from four consortia: the Alzheimer Disease Genetics Consortium (ADGC; 14,428 cases and 14,562 controls), the Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) consortium (2137 cases and 13,474 controls), the European Alzheimer's Disease Initiative (EADI; 2240 cases and 6631 controls), and the Genetic and Environmental Risk in AD/Defining Genetic, Polygenic and Environmental Risk for Alzheimer's Disease Consortium (GERAD/PERADES; 3177 cases and 7277 controls). Age, sex, and principal components were used as covariates in the analysis [14].

Statistical analysis

We conducted two-sample MR analyses to assess the causal associations of ALM with the risk of stroke and AD. In the main analyses, we used the random-effects inverse-variance weighted (IVW) approach to estimate the causal effects. Sensitivity analyses were performed to assess the robustness of the IVW results using the weighted median, MR-Egger, and MR-pleiotropy residual sum and outlier (MR-PRESSO) methods. The weighted median method provides valid estimates as long as at least 50% of the information in the analysis comes from valid IVs [15]. The MR-Egger method was used to assess and adjust for bias due to directional pleiotropy [16]. The MR-PRESSO method was used to detect outlying SNPs that are potentially horizontally pleiotropic and to assess whether excluding them influences the causal estimates [17]. Cochran's Q statistic was used to assess heterogeneity among SNPs; heterogeneity was considered present if the P value of Cochran's Q was less than 0.05, in which case the random-effects IVW approach was used. A web-based application was used to calculate statistical power ( http://cnsgenomics.com/shiny/mRnd/ ).
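
For intuition, the IVW estimate can be computed directly from summary statistics: each SNP contributes a Wald ratio (its outcome association divided by its exposure association), and the ratios are combined with inverse-variance weights. The sketch below is our own minimal version, not the TwoSampleMR implementation (which includes further refinements); it also computes Cochran's Q and applies a multiplicative random-effects adjustment when heterogeneity is present.

```python
import numpy as np

def ivw(beta_exp, se_exp, beta_out, se_out):
    """Minimal multiplicative random-effects IVW estimate from summary stats.
    se_exp is unused in the first-order approximation below."""
    beta_exp = np.asarray(beta_exp, dtype=float)
    beta_out = np.asarray(beta_out, dtype=float)
    se_out = np.asarray(se_out, dtype=float)

    ratio = beta_out / beta_exp              # per-SNP Wald ratios
    se_ratio = se_out / np.abs(beta_exp)     # first-order delta-method SEs
    w = 1.0 / se_ratio ** 2                  # inverse-variance weights

    est = np.sum(w * ratio) / np.sum(w)      # pooled log odds ratio
    se = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (ratio - est) ** 2)       # Cochran's Q for heterogeneity
    phi = max(q / (len(ratio) - 1), 1.0)     # multiplicative overdispersion
    return est, se * np.sqrt(phi), q

# Hypothetical summary statistics for three SNPs (illustration only).
est, se, q = ivw(beta_exp=[0.05, 0.08, 0.04],
                 se_exp=[0.005, 0.006, 0.004],
                 beta_out=[-0.004, -0.007, -0.003],
                 se_out=[0.002, 0.003, 0.002])
print(np.exp(est))  # odds ratio per unit increase in the genetically predicted exposure
```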

In addition, multivariable MR analysis was conducted to adjust for confounders [18]. The following four covariates were considered: body mass index (BMI), type 2 diabetes mellitus (T2DM), low density lipoprotein-C (LDL-C), and atrial fibrillation (AF). We used publicly available summary statistics for BMI from Hoffmann et al. [19], T2DM from Xue et al. [20], LDL-C from Hoffmann et al. [21], and AF from the Haplotype Reference Consortium (Table 1) [22]. The Bonferroni-corrected significance threshold was set to P < 0.01 (0.05 / 5 outcomes), and a P value between 0.01 and 0.05 was defined as a suggestive association between exposure and outcome. All analyses were conducted using the TwoSampleMR [23], MendelianRandomization [24], and MR-PRESSO [17] packages in R (version 4.1.3).

Influence of genetically predicted ALM on the risk of stroke

There was moderate heterogeneity (P for Cochran's Q < 0.05) in the estimated effects of ALM on stroke and AD, but no evidence of directional pleiotropy (P for the MR-Egger intercept > 0.05) (Supplementary Table 2). Therefore, the multiplicative random-effects IVW method was applied to obtain more reliable estimates.

The overall IVW MR analyses revealed a negative relationship between genetically predicted ALM and the risk of any ischemic stroke ([AIS], odds ratio [OR], 0.93; 95% confidence interval [CI], 0.89–0.97; P = 0.002; Fig. 2). Subgroup analysis of AIS showed that genetically predicted ALM was associated with the risk of LAS (OR, 0.86; 95% CI, 0.77–0.95; P = 0.005) and SVS (OR, 0.80; 95% CI, 0.73–0.89; P < 0.001).

figure 2

Causal effect estimates of genetically predicted ALM on stroke and AD. *MR-PRESSO outlier detected: rs4858605, rs42039, rs3184504, rs118127175 (for AIS); rs3184504, rs732716 (for LAS); rs72938315, rs10824747, rs3184504 (for SVS); rs295139, rs7633464, rs10993370 (for CES); rs4663096 (for AD). AIS, any ischemic stroke; LAS, large artery stroke; SVS, small vessel stroke; CES, cardioembolic stroke; AD, Alzheimer’s disease; SNP, single nucleotide polymorphism; OR, odds ratio; CI, confidence interval

As for the sensitivity analyses, we found a significant causal association between ALM and the risk of AIS using the MR-PRESSO method after excluding four potential outliers (P = 0.007; Fig. 2). A suggestive causal association was observed between genetically predicted ALM and LAS using the MR-PRESSO method after excluding two potential outliers (P = 0.010). Genetically predicted ALM was also suggestively associated with the risk of SVS using the weighted median and MR-Egger methods (both P < 0.05), while a significant causal relationship was found using the MR-PRESSO method after excluding three potential outliers (P < 0.001).

Influence of genetically predicted ALM on the risk of AD

The overall IVW MR analyses indicated a causal effect of genetically predicted ALM on the risk of AD (OR, 0.90; 95% CI 0.85–0.96; P  = 0.001; Fig.  2 ).

In the sensitivity analysis, a significant causal association was found between genetically predicted ALM and AD using the MR-PRESSO method after excluding one potential outlier (P < 0.001), while genetically predicted ALM was suggestively associated with the risk of AD using the weighted median method (P = 0.047).

Multivariable MR analysis

To further investigate the causal associations of genetically predicted ALM with the risk of stroke and AD, multivariable MR analyses were performed including BMI, T2DM, LDL-C, and AF.

The multivariable MR analysis revealed that genetically predicted ALM retained a stable effect on AIS when adjusting for BMI (OR, 0.93; 95% CI, 0.89–0.97; P = 0.002; Fig. 3), LDL-C (OR, 0.93; 95% CI, 0.89–0.98; P = 0.004), and AF (OR, 0.84; 95% CI, 0.80–0.89; P < 0.001), while a suggestive association was observed after adjusting for T2DM (OR, 0.94; 95% CI, 0.89–1.00; P = 0.046). Regarding LAS, the estimated effect of ALM was significant after adjustment for BMI (OR, 0.86; 95% CI, 0.77–0.96; P = 0.006) and AF (OR, 0.76; 95% CI, 0.67–0.86; P < 0.001), while a suggestive association was found after adjustment for T2DM (OR, 0.87; 95% CI, 0.76–0.99; P = 0.033) and LDL-C (OR, 0.88; 95% CI, 0.79–0.98; P = 0.020). The estimated effects of ALM on SVS and AD were unchanged after adjustment for BMI (OR, 0.80; 95% CI, 0.73–0.89; P < 0.001 for SVS; OR, 0.90; 95% CI, 0.85–0.96; P = 0.001 for AD), T2DM (OR, 0.85; 95% CI, 0.76–0.96; P = 0.006 for SVS; OR, 0.91; 95% CI, 0.85–0.98; P = 0.009 for AD), LDL-C (OR, 0.82; 95% CI, 0.74–0.91; P < 0.001 for SVS; OR, 0.90; 95% CI, 0.85–0.96; P = 0.002 for AD), and AF (OR, 0.77; 95% CI, 0.69–0.87; P < 0.001 for SVS; OR, 0.90; 95% CI, 0.84–0.97; P = 0.005 for AD). Intriguingly, after adjustment for AF the association between ALM and CES became a suggestive negative relationship (OR, 0.91; 95% CI, 0.83–0.99; P = 0.034), directionally inconsistent with the univariable IVW MR analysis.

figure 3

Multivariable Mendelian randomization analysis of the causal associations of genetically predicted ALM with the risk of stroke and AD. AIS, any ischemic stroke; LAS, large artery stroke; SVS, small vessel stroke; CES, cardioembolic stroke; AD, Alzheimer’s disease; IVW, inverse-variance weighted; BMI, body mass index; T2DM, type 2 diabetes mellitus; LDL-C, low density lipoprotein-C; AF, atrial fibrillation; OR, odds ratio; CI, confidence interval

Discussion

In the present study, we conducted a two-sample MR study to investigate whether genetically predicted ALM is causally associated with the risk of stroke and AD. Our findings showed significant negative relationships between genetically predicted ALM and the risk of AIS, LAS, SVS, and AD. Multivariable MR analysis suggested that ALM retained a stable effect on AIS when adjusting for BMI, LDL-C, and AF, while a suggestive association was observed after adjusting for T2DM. The estimated effect of ALM on LAS was significant after adjustment for BMI and AF, with a suggestive association after adjusting for T2DM and LDL-C. The estimated effects of ALM on SVS and AD remained significant after adjustment for BMI, T2DM, LDL-C, and AF.

ALM is mainly determined by skeletal muscle and has good predictive power for sarcopenia, which is characterized by the progressive loss of skeletal muscle mass and strength [2,9]. In addition, ALM is highly heritable and is a suitable trait for sarcopenia-related genetic analyses [25].

Ischemic stroke is one of the leading causes of mortality and long-term disability worldwide. It has been reported that sarcopenia is related to an elevated prevalence of stroke in South Korean men aged ≥ 50 years [26], and increased skeletal muscle mass may help protect against silent infarction [27]. However, the relationship between genetically predicted ALM and stroke had not been explored. In this MR study, we found significant negative associations between ALM and the risk of AIS, LAS, and SVS. One candidate explanation is chronic low-grade inflammation, which can promote the loss of muscle mass, strength, and function through its influence on both muscle protein breakdown and synthesis [28]. Moreover, inflammation can mediate aberrant platelet aggregation; aggregated platelets can adhere to the surface of endothelial cells and induce local ischemia and hypoxia, even resulting in tissue death. Thus, individuals with signs of inflammation or corresponding biomarkers are considered to have an elevated risk of stroke [29]. In addition, an inverse association between peripheral lean mass and endothelial dysfunction has been reported, suggesting that low ALM may play a role in the decline of endothelial function [30]. Endothelial cells are central to maintaining vascular homeostasis, and vascular endothelial dysfunction is critically related to the development of cardiovascular diseases, including stroke. Therefore, chronic inflammation and vascular endothelial dysfunction are plausible factors linking ALM and stroke.

Our present MR study also showed a significant causal association between genetically predicted ALM and the risk of AD. An inverse relationship between lean mass and AD incidence has been reported [31,32], and it may be explained by several mechanisms. Chronic inflammation and oxidative stress have been shown to mediate the link between low lean mass and AD in the elderly [33]. In addition, low muscle mass, but not muscle strength, has been found to be independently related to parietal gray matter atrophy in middle-aged adults [34]. The parietal lobe is involved in the early stage of AD [35], suggesting that parietal lobe involvement might lead to cognitive impairment in individuals with low muscle mass. Finally, serum brain-derived neurotrophic factor (BDNF) is positively correlated with muscle mass [36]; decreased BDNF levels can lead to cognitive deterioration, while higher BDNF levels achieved through exercise training can increase hippocampal volume and improve cognitive function [37].

Therefore, this study provides causal evidence for protective effects of ALM against stroke and AD. Recently, a randomized controlled trial explored a multicomponent intervention for sarcopenia based on physical activity with technological support and nutritional counselling [38]. Our findings support the development of physical interventions targeting low ALM to reduce the risk of stroke and AD.

There are several strengths in this study. First, the MR design: we investigated the causal association of genetically predicted ALM with the risk of stroke and AD using ALM-related SNPs and the effects of those SNPs on the outcomes from GWASs, which reduces bias from residual confounding and reverse causality. Second, sensitivity analyses were applied to evaluate the robustness of our results. Third, potential confounding factors, including BMI, T2DM, LDL-C, and AF, were further analyzed using multivariable MR methods.

However, several limitations of this study should be considered. First, the ALM data from the UK Biobank were measured using BIA rather than dual-energy X-ray absorptiometry (DXA). BIA is an indirect method of measuring muscle mass and may be less accurate than DXA, which could affect the results. Second, pleiotropy, especially horizontal pleiotropy, is generally unavoidable in MR analysis and could affect the reliability of our results, despite the lack of evidence for it from the MR-Egger and MR-PRESSO methods; multivariable MR analyses adjusting for several confounders were applied, but pleiotropy could not be fully ruled out. Third, the GWAS data were mainly derived from European populations, and caution should be exercised when generalizing our findings to other populations, particularly those of non-European ancestry. Fourth, we used summary statistics and had no access to patient-level data. Given the different incidences of low ALM, stroke, and AD by age and sex, investigating the causal associations of ALM with the risk of stroke and AD stratified by age and sex would be of value.

In conclusion, our two-sample MR analysis provided genetic support for the negative causal effects of genetically predicted ALM on the risk of AIS, LAS, SVS, and AD. Future studies are required to further confirm our findings and investigate the underlying mechanisms.

Data availability

The data used in this study was obtained from public databases and previous studies. Further information is available from the corresponding author upon request.

Abbreviations

AD: Alzheimer's disease
ADGC: Alzheimer Disease Genetics Consortium
AF: Atrial fibrillation
AIS: Any ischemic stroke
ALM: Appendicular lean mass
BDNF: Brain-derived neurotrophic factor
BIA: Bioelectrical impedance analysis
BMI: Body mass index
CES: Cardioembolic stroke
CHARGE: Cohorts for Heart and Aging Research in Genomic Epidemiology
EADI: European Alzheimer's Disease Initiative
GERAD/PERADES: Genetic and Environmental Risk in AD/Defining Genetic, Polygenic and Environmental Risk for Alzheimer's Disease Consortium
GWAS: Genome-wide association study
IGAP: International Genomics of Alzheimer's Project
IV: Instrumental variable
IVW: Inverse-variance weighted
LAS: Large artery stroke
LDL-C: Low density lipoprotein-C
MR: Mendelian randomization
MR-PRESSO: MR-pleiotropy residual sum and outlier
SNP: Single-nucleotide polymorphism
SVS: Small vessel stroke
T2DM: Type 2 diabetes mellitus
TOAST: Trial of ORG 10172 in Acute Stroke Treatment

References

1. Cruz-Jentoft AJ, Baeyens JP, Bauer JM, Boirie Y, Cederholm T, Landi F, et al. Sarcopenia: European consensus on definition and diagnosis: report of the European Working Group on Sarcopenia in Older People. Age Ageing. 2010;39(4):412–23.
2. Cruz-Jentoft AJ, Bahat G, Bauer J, Boirie Y, Bruyère O, Cederholm T, et al. Sarcopenia: revised European consensus on definition and diagnosis. Age Ageing. 2019;48(1):16–31.
3. O'Donnell MJ, Chin SL, Rangarajan S, Xavier D, Liu L, Zhang H, et al. Global and regional effects of potentially modifiable risk factors associated with acute stroke in 32 countries (INTERSTROKE): a case-control study. Lancet. 2016;388(10046):761–75.
4. Nozoe M, Kanai M, Kubo H, Yamamoto M, Shimada S, Mase K. Prestroke sarcopenia and stroke severity in elderly patients with acute stroke. J Stroke Cerebrovasc Dis. 2019;28(8):2228–31.
5. Nozoe M, Kanai M, Kubo H, Yamamoto M, Shimada S, Mase K. Prestroke sarcopenia and functional outcomes in elderly patients who have had an acute stroke: a prospective cohort study. Nutrition. 2019;66:44–7.
6. Cabett Cipolli G, Sanches Yassuda M, Aprahamian I. Sarcopenia is associated with cognitive impairment in older adults: a systematic review and meta-analysis. J Nutr Health Aging. 2019;23(6):525–31.
7. Peng TC, Chen WL, Wu LW, Chang YW, Kao TW. Sarcopenia and cognitive impairment: a systematic review and meta-analysis. Clin Nutr. 2020;39(9):2695–701.
8. Studenski SA, Peters KW, Alley DE, Cawthon PM, McLean RR, Harris TB, et al. The FNIH sarcopenia project: rationale, study description, conference recommendations, and final estimates. J Gerontol A Biol Sci Med Sci. 2014;69(5):547–58.
9. Pei YF, Liu YZ, Yang XL, Zhang H, Feng GJ, Wei XT, et al. The genetic architecture of appendicular lean mass characterized by association analysis in the UK Biobank study. Commun Biol. 2020;3(1):608.
10. Lawlor DA, Harbord RM, Sterne JA, Timpson N, Davey Smith G. Mendelian randomization: using genes as instruments for making causal inferences in epidemiology. Stat Med. 2008;27(8):1133–63.
11. Burgess S, Scott RA, Timpson NJ, Davey Smith G, Thompson SG, EPIC-InterAct Consortium. Using published data in Mendelian randomization: a blueprint for efficient identification of causal risk factors. Eur J Epidemiol. 2015;30(7):543–52.
12. Burgess S, Thompson SG, CRP CHD Genetics Collaboration. Avoiding bias from weak instruments in Mendelian randomization studies. Int J Epidemiol. 2011;40(3):755–64.
13. Malik R, Chauhan G, Traylor M, Sargurupremraj M, Okada Y, Mishra A, et al. Multiancestry genome-wide association study of 520,000 subjects identifies 32 loci associated with stroke and stroke subtypes. Nat Genet. 2018;50(4):524–37.
14. Kunkle BW, Grenier-Boley B, Sims R, Bis JC, Damotte V, Naj AC, et al. Genetic meta-analysis of diagnosed Alzheimer's disease identifies new risk loci and implicates Aβ, tau, immunity and lipid processing. Nat Genet. 2019;51(3):414–30.
15. Bowden J, Davey Smith G, Haycock PC, Burgess S. Consistent estimation in Mendelian randomization with some invalid instruments using a weighted median estimator. Genet Epidemiol. 2016;40(4):304–14.
16. Burgess S, Thompson SG. Interpreting findings from Mendelian randomization using the MR-Egger method. Eur J Epidemiol. 2017;32(5):377–89.
17. Verbanck M, Chen CY, Neale B, Do R. Detection of widespread horizontal pleiotropy in causal relationships inferred from Mendelian randomization between complex traits and diseases. Nat Genet. 2018;50(5):693–98.
18. Burgess S, Thompson SG. Multivariable Mendelian randomization: the use of pleiotropic genetic variants to estimate causal effects. Am J Epidemiol. 2015;181(4):251–60.
19. Hoffmann TJ, Choquet H, Yin J, Banda Y, Kvale MN, Glymour M, et al. A large multiethnic genome-wide association study of adult body mass index identifies novel loci. Genetics. 2018;210(2):499–515.
20. Xue A, Wu Y, Zhu Z, Zhang F, Kemper KE, Zheng Z, et al. Genome-wide association analyses identify 143 risk variants and putative regulatory mechanisms for type 2 diabetes. Nat Commun. 2018;9(1):2941.
21. Hoffmann TJ, Theusch E, Haldar T, Ranatunga DK, Jorgenson E, Medina MW, et al. A large electronic-health-record-based genome-wide study of serum lipids. Nat Genet. 2018;50(3):401–13.
22. Roselli C, Chaffin MD, Weng LC, Aeschbacher S, Ahlberg G, Albert CM, et al. Multi-ethnic genome-wide association study for atrial fibrillation. Nat Genet. 2018;50(9):1225–33.
23. Hemani G, Zheng J, Elsworth B, Wade KH, Haberland V, Baird D, et al. The MR-Base platform supports systematic causal inference across the human phenome. Elife. 2018;7:e34408.
24. Yavorska OO, Burgess S. MendelianRandomization: an R package for performing Mendelian randomization analyses using summarized data. Int J Epidemiol. 2017;46(6):1734–39.
25. Hsu FC, Lenchik L, Nicklas BJ, Lohman K, Register TC, Mychaleckyj J, et al. Heritability of body composition measured by DXA in the Diabetes Heart Study. Obes Res. 2005;13(2):312–9.
26. Park S, Ham JO, Lee BK. A positive association between stroke risk and sarcopenia in men aged ≥ 50 years, but not women: results from the Korean National Health and Nutrition Examination Survey 2008–2010. J Nutr Health Aging. 2014;18(9):806–12.
27. Minn YK, Suk SH. Higher skeletal muscle mass may protect against ischemic stroke in community-dwelling adults without stroke and dementia: the PRESENT project. BMC Geriatr. 2017;17(1):45.
28. Dalle S, Rossmeislova L, Koppo K. The role of inflammation in age-related sarcopenia. Front Physiol. 2017;8:1045.
29. Endres M, Moro MA, Nolte CH, Dames C, Buckwalter MS, Meisel A. Immune pathways in etiology, acute phase, and chronic sequelae of ischemic stroke. Circ Res. 2022;130(8):1167–86.
30. Beijers HJ, Ferreira I, Bravenboer B, Henry RM, Schalkwijk CG, Dekker JM, et al. Higher central fat mass and lower peripheral lean mass are independent determinants of endothelial dysfunction in the elderly: the Hoorn study. Atherosclerosis. 2014;233(1):310–8.
31. Burns JM, Johnson DK, Watts A, Swerdlow RH, Brooks WM. Reduced lean mass in early Alzheimer disease and its association with brain atrophy. Arch Neurol. 2010;67(4):428–33.
32. Buffa R, Mereu E, Putzu P, Mereu RM, Marini E. Lower lean mass and higher percent fat mass in patients with Alzheimer's disease. Exp Gerontol. 2014;58:30–3.
33. Franzoni F, Scarfò G, Guidotti S, Fusi J, Asomov M, Pruneti C. Oxidative stress and cognitive decline: the neuroprotective role of natural antioxidants. Front Neurosci. 2021;15:729757.
34. Yu JH, Kim REY, Jung JM, Park SY, Lee DY, Cho HJ, et al. Sarcopenia is associated with decreased gray matter volume in the parietal lobe: a longitudinal cohort study. BMC Geriatr. 2021;21(1):622.
35. Jacobs HI, Van Boxtel MP, Jolles J, Verhey FR, Uylings HB. Parietal cortex matters in Alzheimer's disease: an overview of structural, functional and metabolic findings. Neurosci Biobehav Rev. 2012;36:297–309.
36. Koito Y, Yanishi M, Kimura Y, Tsukaguchi H, Kinoshita H, Matsuda T. Serum brain-derived neurotrophic factor and myostatin levels are associated with skeletal muscle mass in kidney transplant recipients. Transpl Proc. 2021;53(6):1939–44.
37. Erickson KI, Voss MW, Prakash RS, Basak C, Szabo A, Chaddock L, et al. Exercise training increases size of hippocampus and improves memory. Proc Natl Acad Sci U S A. 2011;108(7):3017–22.
38. Bernabei R, Landi F, Calvani R, Cesari M, Del Signore S, Anker SD, et al. Multicomponent intervention to prevent mobility disability in frail older adults: randomised controlled trial (SPRINTT project). BMJ. 2022;377:e068788.


Acknowledgements

The authors sincerely thank the MEGASTROKE Consortium, the International Genomics of Alzheimer’s Project (IGAP), and UK Biobank for providing GWAS summary statistics.

This work was supported by Key R&D Program of Zhejiang (2022C03161), the National Natural Science Foundation of China (81771498 and 82200665), and the Zhejiang Provincial Natural Science Foundation of China (LY22H030009).

Author information

Yueli Zhu, Feng Zhu, Xiaoming Guo contributed equally to this work.

Authors and Affiliations

Department of Geriatrics, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China

Yueli Zhu, Feng Zhu, Shunmei Huang, Yunmei Yang & Qin Zhang

Key Laboratory of Diagnosis and Treatment of Aging and Physic-chemical Injury Diseases of Zhejiang Province, The First Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, Zhejiang, China

Department of Neurosurgery, The Second Affiliated Hospital, School of Medicine, Zhejiang University, Hangzhou, China

Xiaoming Guo


Contributions

Study concept and design: YLZ, QZ, YMY. Acquisition, analysis and interpretation of data: YLZ, FZ, XMG, SMH. Drafting the manuscript: YLZ. Critical revision: QZ, YMY. All authors approved the final manuscript.

Corresponding authors

Correspondence to Yunmei Yang or Qin Zhang.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Zhu, Y., Zhu, F., Guo, X. et al. Appendicular lean mass and the risk of stroke and Alzheimer’s disease: a mendelian randomization study. BMC Geriatr 24, 438 (2024). https://doi.org/10.1186/s12877-024-05039-5


Received : 23 October 2023

Accepted : 02 May 2024

Published : 18 May 2024

DOI : https://doi.org/10.1186/s12877-024-05039-5


BMC Geriatrics

ISSN: 1471-2318
