
Nova A.

8 Types of Qualitative Research - Overview & Examples


Published on: Dec 29, 2017

Last updated on: Oct 25, 2023




Are you overwhelmed by the multitude of qualitative research methods available? It's no secret that choosing the right approach can leave you stuck at the starting line of your research.

Selecting an unsuitable method can lead to wasted time, resources, and potentially skewed results. But with so many options to consider, it's easy to feel lost in the complexities of qualitative research.

In this comprehensive guide, we will explain the types of qualitative research, their unique characteristics, advantages, and best use cases for each method.

Let's dive in!



What is Qualitative Research?

Qualitative research is a robust and flexible methodology used to explore and understand complex phenomena in-depth. 

Unlike quantitative research, qualitative research dives into the rich and complex aspects of human experiences, behaviors, and perceptions.

At its core, this type of research seeks to answer questions such as:

  • Why do people think or behave a certain way?
  • What are the underlying motivations and meanings behind actions?
  • How do individuals perceive and interpret the world around them?

This approach values context, diversity, and the unique perspectives of participants. 

Rather than seeking generalizable findings applicable to a broad population, qualitative research aims for detailed insights, patterns, and themes that come from the people being studied.

Characteristics of Qualitative Research 

Qualitative research possesses the following characteristics: 

  • Subjective Perspective: Qualitative research explores subjective experiences, emphasizing the uniqueness of human behavior and opinions.
  • In-Depth Exploration: It involves deep investigation, allowing a comprehensive understanding of specific phenomena.
  • Open-Ended Questions: Qualitative research uses open-ended questions to encourage detailed, descriptive responses.
  • Contextual Understanding: It emphasizes the importance of understanding the research context and setting.
  • Rich Descriptions: Qualitative research produces rich, descriptive findings that contribute to a nuanced understanding of the topic.

Types of Qualitative Research Methods

Researchers collect data on the targeted population, place, or event using different types of qualitative research methods.

Each qualitative research method offers a distinct perspective, enabling researchers to reveal concealed meanings, patterns, and valuable insights.

Below are the most commonly used qualitative research types for writing a paper.

Ethnographic Research Method 

Ethnography, a subfield of anthropology, provides a scientific approach to examining human societies and cultures. It ranks among the most widely employed qualitative research techniques.

In ethnographic research, researchers actively engage with the environment and live alongside the focus group.

This immersive interaction allows researchers to gain insights into the objectives, motivations, challenges, and distinctive cultural attributes of the individuals under study.

Key cultural characteristics that ethnography helps to illustrate encompass:

  • Geographical Location
  • Religious Practices
  • Tribal Systems
  • Shared Experiences

Unlike traditional survey and interview-based research methods, ethnographers don't rely on structured questioning. 

Instead, they become observers within the community, emphasizing participant observation over an extended period. However, it may also be appropriate to complement observations with interviews of individuals who possess knowledge of the culture.

Ethnographic research can present challenges if the researcher is unfamiliar with the social norms and language of the group being studied. 

Furthermore, interpretations made by outsiders may lead to misinterpretations or confusion. Therefore, thorough validation of data is essential before presenting findings.

Narrative Method 

The narrative research design unfolds over an extended period to compile data, much like crafting a cohesive story. Similar to a narrative structure, it begins with a starting point and progresses through various life situations.

In this method, researchers engage in in-depth interviews and review relevant documents. They explore events that have had a significant impact on an individual's personality and life journey. Interviews may occur over weeks, months, or even years, depending on the depth and scope of the narrative being studied.

The outcome of narrative research is the presentation of a concise story that captures essential themes, conflicts, and challenges. It provides a holistic view of the individual's experiences, both positive and negative, which have shaped their unique narrative.

Phenomenological Method 

The term "phenomenological" pertains to the study of phenomena, which can encompass events, situations, or experiences. 

This method is ideal for examining a subject from multiple perspectives and contributing to existing knowledge, with a particular focus on subjective experiences.

Researchers employing the phenomenological method use various data collection techniques, including interviews, site visits, observations, surveys, and document reviews. 

These methods help gather rich and diverse data about the phenomenon under investigation.

A central aspect of this technique is capturing how participants experience events or activities, delving into their subjective viewpoints. Ultimately, the research results in the creation of a thematic database that validates the findings and offers insights from the subject's perspective.

Grounded Theory Method

A grounded theory approach differs from a phenomenological study in that it seeks to explain, provide reasons for, or develop theories behind an event or phenomenon. 

It serves as a means to construct new theories by systematically collecting and analyzing data related to a specific phenomenon.

Researchers employing the grounded theory method utilize a variety of data collection techniques, including observation, interviews, literature review, and the analysis of relevant documents.

The focus of the analysis is not individual behaviors but a specific phenomenon or incident.

This method typically involves various coding techniques and large sample sizes to identify themes and develop more comprehensive theories.

Case Study Research 

The case study approach entails a comprehensive examination of a subject over an extended period, with a focus on providing detailed insights into the subject, which can be an event, person, business, or place.

Data for case studies is collected from diverse sources, including interviews, direct observation, historical records, and documentation.

Case studies find applications across various disciplines, including law, education, medicine, and the sciences. They can serve both descriptive and explanatory purposes, making them a versatile research methodology.

Researchers often turn to the case study method when they want to explore:

  • 'How' and 'why' research questions
  • Behaviors under observation
  • Understanding a specific phenomenon
  • The contextual factors influencing the phenomena

Historical Method

The historical method aims to describe and analyze past events, offering insights into present patterns and the potential to predict future scenarios. 

Researchers formulate research questions based on a hypothetical idea and then rigorously test this idea using multiple historical resources.

Key steps in the historical method include:

  • Developing a research idea
  • Identifying appropriate sources such as archives and libraries
  • Ensuring the reliability and validity of these sources
  • Creating a well-organized research outline
  • Systematically collecting research data

The analysis phase involves critically assessing the collected data, accepting or rejecting it based on credibility, and identifying any conflicting evidence.

Ultimately, the outcomes of the historical method are presented in the form of a biography or a scholarly paper that provides a comprehensive account of the research findings.

Action Research 

Action research is a dynamic research approach focused on addressing practical challenges in real-world settings while simultaneously conducting research to improve the situation. 

It follows a cyclic process, starting with the identification of a specific issue or problem in a particular context.

The key steps in action research include:

  • Planning and implementing actions to address the issue
  • Collecting data during the action phase to understand its impact
  • Reflecting on the data and analyzing it to gain insights
  • Adjusting the action plan based on the analysis

This process may be iterative, with multiple cycles of action and reflection.

The outcomes of action research are practical solutions and improved practices that directly benefit the context in which the research is conducted. Additionally, it leads to a deeper and more nuanced understanding of the issue under investigation.

Focus Groups 

Focus groups are a qualitative research method used to gather in-depth insights and perspectives on a specific topic or research question. 

This approach involves assembling a small group of participants who possess relevant knowledge or experiences related to the research focus.

Key steps in the focus group method include:

  • Selecting participants
  • Moderating the discussion
  • Structuring the conversation around open-ended questions
  • Collecting data through audio or video recordings and note-taking 

The discussion is dynamic and interactive, encouraging participants to share their thoughts, experiences, and opinions.

The analysis phase involves reviewing the data collected from the focus group discussion to identify common themes, patterns, and valuable insights. Focus groups provide rich qualitative data that offer a deeper and more nuanced understanding of the research topic or question.


Types of Data Analysis in Qualitative Research 

Qualitative research employs different data analysis methods, each suited to specific research goals:

  • Thematic Analysis: Identifies recurring themes or concepts within data.
  • Content Analysis: Systematically categorizes and quantifies text or media content.
  • Narrative Analysis: Focuses on storytelling and narrative elements in data.
  • Grounded Theory Analysis: Develops or refines theories based on data.
  • Discourse Analysis: Examines language and communication patterns.
  • Framework Analysis: Organizes data using predefined categories.
  • Visual Analysis: Interprets visual data like photos or videos.
  • Cross-case Analysis: Compares patterns across multiple cases.

The choice depends on research questions and data type, enhancing understanding and insights.
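The first pass of a thematic or content analysis can be operationalised in a few lines of code. The sketch below is a simplified illustration only: the theme keywords and interview excerpts are hypothetical, and a real analysis would refine codes iteratively rather than rely on a fixed keyword list.

```python
from collections import Counter

# Hypothetical theme keywords and interview excerpts for illustration.
theme_keywords = {
    "workload":  ["busy", "overtime", "deadline"],
    "support":   ["mentor", "help", "team"],
    "wellbeing": ["stress", "tired", "balance"],
}

transcripts = [
    "The deadlines keep me busy, and I often do overtime.",
    "My mentor and the team help me cope with the stress.",
    "I feel tired; work-life balance is hard to keep.",
]

def code_transcript(text, keywords):
    """Return the set of themes whose keywords appear in the text."""
    text = text.lower()
    return {theme for theme, words in keywords.items()
            if any(word in text for word in words)}

theme_counts = Counter()
for transcript in transcripts:
    theme_counts.update(code_transcript(transcript, theme_keywords))

for theme, count in theme_counts.most_common():
    print(f"{theme}: appears in {count} of {len(transcripts)} transcripts")
```

The output is a simple frequency table of themes across transcripts, which is the kind of raw material a thematic analysis then interprets in context.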

Benefits of Qualitative Research 

Qualitative research offers valuable advantages, including:

  • Flexibility: Adaptable to various research questions and settings.
  • Holistic Approach: Explores multiple dimensions of phenomena.
  • Theory Development: Contributes to theory creation or refinement.
  • Participant Engagement: Fosters active participant involvement.
  • Complements Quantitative Research: Provides a comprehensive understanding.

All in all, the different types of qualitative research methodology can help you understand people's behavior and motivations. They can also help you generate original ideas and formulate a better research problem.


Nova A. (Literature, Marketing)

Nova Allison is a Digital Content Strategist with over eight years of experience. Nova has also worked as a technical and scientific writer. She is majorly involved in developing and reviewing online content plans that engage and resonate with audiences. Nova has a passion for writing that engages and informs her readers.



How to Write a Research Design – Guide with Examples

Published by Alaxendra Bets on August 14th, 2021; revised on October 3, 2023.

A research design is a structure that combines different components of research. It involves the use of different data collection and data analysis techniques logically to answer the research questions.

Before starting the research process, you should decide how you will address the research questions; the research design captures these decisions.

Below are the key aspects of the decision-making process:

  • Data type required for research
  • Research resources
  • Participants required for research
  • Hypothesis based upon research question(s)
  • Data analysis  methodologies
  • Variables (Independent, dependent, and confounding)
  • The location and timescale for collecting the data
  • The time period required for research

The research design provides the strategy of investigation for your project. Furthermore, it defines the parameters and criteria to compile the data to evaluate results and conclude.

Your project’s validity depends on the data collection and interpretation techniques. A strong research design reflects a strong dissertation, scientific paper, or research proposal.

Steps of research design

Step 1: Establish Priorities for Research Design

Before conducting any research study, you must address an important question: “How do I create a research design?”

The research design depends on the researcher’s priorities and choices because every research has different priorities. For a complex research study involving multiple methods, you may choose to have more than one research design.

Multimethodology or multimethod research includes using more than one data collection method or research in a research study or set of related studies.

If one research design is weak in one area, then another research design can cover that weakness. For instance, a  dissertation analyzing different situations or cases will have more than one research design.

For example:

  • Experimental research involves experimental investigation and laboratory experience, but it does not accurately investigate the real world.
  • Quantitative research is good for the statistical part of the project, but it may not provide an in-depth understanding of the topic.
  • Also, correlational research will not provide experimental results because it is a technique that assesses the statistical relationship between two variables.

While scientific considerations are a fundamental aspect of the research design, it is equally important that the researcher think practically before deciding on its structure. Here are some questions you should consider:

  • Do you have enough time to gather data and complete the write-up?
  • Will you be able to collect the necessary data by interviewing a specific person or visiting a specific location?
  • Do you have in-depth knowledge of the different statistical analysis and data collection techniques needed to address the research questions or test the hypothesis?

If you think that the chosen research design cannot answer the research questions properly, you can refine your research questions to gain better insight.

Step 2: Data Type you Need for Research

Decide on the type of data you need for your research. The type of data you need to collect depends on your research questions or research hypothesis. Two types of research data can be used to answer the research questions:

Primary data vs. secondary data

Qualitative data vs. quantitative data

Also see: Research methods, design, and analysis.


Step 3: Data Collection Techniques

Once you have selected the type of research to answer your research question, you need to decide where and how to collect the data.

It is time to determine your research method to address the research problem. Research methods involve procedures, techniques, materials, and tools used for the study.

For instance, a dissertation research design includes the different resources and data collection techniques and helps establish your dissertation’s structure.

The following table shows the characteristics of the most popularly employed research methods.

Research Methods

Step 4: Procedure of Data Analysis

Use of the correct data and statistical analysis technique is necessary for the validity of your research. Therefore, you need to be certain about the data type that would best address the research problem. Choosing an appropriate analysis method is the final step of the research design. It can be split into two main categories:

Quantitative Data Analysis

The quantitative data analysis technique involves analyzing the numerical data with the help of different applications such as SPSS, Stata, Excel, and OriginLab.

This data analysis strategy tests different variables through measures such as frequencies, averages, ranges, and more. The research question and the hypothesis must be established to identify the variables for testing.
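As a minimal illustration of the frequencies and averages mentioned above, the standard library alone is enough for small datasets. The survey responses below are invented for the example; a real project would more likely use a dedicated package such as SPSS or a spreadsheet.

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical Likert-scale responses (1 = strongly disagree ... 5 = strongly agree).
responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

frequencies = Counter(responses)   # how often each rating occurs
average = mean(responses)          # central tendency
spread = stdev(responses)          # sample standard deviation

print("Frequencies:", dict(sorted(frequencies.items())))
print(f"Mean: {average:.2f}, SD: {spread:.2f}")
```

These are exactly the kinds of summary figures (frequencies, averages, variability) that a quantitative analysis then tests against the hypothesis.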

Qualitative Data Analysis

Qualitative data analysis of figures, themes, and words allows for flexibility and the researcher’s subjective opinions. This means that the researcher’s primary focus will be interpreting patterns, tendencies, and accounts and understanding the implications and social framework.

You should be clear about your research objectives before starting to analyze the data. For example, you should ask yourself whether you need to explain respondents’ experiences and insights, or whether you also need to evaluate their responses with reference to a certain social framework.

Step 5: Write your Research Proposal

The research design is an important component of a research proposal because it plans the project’s execution. You can share it with the supervisor, who would evaluate the feasibility and capacity of the results and conclusion.

Read our guidelines to write a research proposal if you have already formulated your research design. The research proposal is written in the future tense because you are writing your proposal before conducting research.

The research methodology or research design, on the other hand, is generally written in the past tense.

How to Write a Research Design – Conclusion

A research design is the plan, structure, and strategy of investigation conceived to answer the research question and test the hypothesis. The dissertation research design can be classified based on the type of data and the type of analysis.

The five steps mentioned above are the answer to how to write a research design. Follow them to formulate the perfect research design for your dissertation.


Frequently Asked Questions

What is research design?

Research design is a systematic plan that guides the research process, outlining the methodology and procedures for collecting and analysing data. It determines the structure of the study, ensuring the research question is answered effectively, reliably, and validly. It serves as the blueprint for the entire research project.

How to write a research design?

To write a research design, define your research question, identify the research method (qualitative, quantitative, or mixed), choose data collection techniques (e.g., surveys, interviews), determine the sample size and sampling method, outline data analysis procedures, and highlight potential limitations and ethical considerations for the study.

How to write the design section of a research paper?

In the design section of a research paper, describe the research methodology chosen and justify its selection. Outline the data collection methods, participants or samples, instruments used, and procedures followed. Detail any experimental controls, if applicable. Ensure clarity and precision to enable replication of the study by other researchers.

How to write a research design in methodology?

To write a research design in methodology, clearly outline the research strategy (e.g., experimental, survey, case study). Describe the sampling technique, participants, and data collection methods. Detail the procedures for data collection and analysis. Justify choices by linking them to research objectives, addressing reliability and validity.


Uncomplicated Reviews of Educational Research Methods

Qualitative Research Design

This review provides an overview of qualitative methods and designs using examples of research. Note that qualitative researchers frequently employ  several methods in a single study.

Basic Qualitative Research Characteristics

  • Design is generally based on a social constructivism perspective.
  • Research problems become research questions based on prior research experience.
  • Sample sizes can be as small as one.
  • Data collection involves interview, observation, and/or archival (content) data.
  • Interpretation is based on a combination of researcher perspective and data collected.
  • Transcribing is the process of converting audio or video data to text for analysis.
  • Coding is the process of reviewing notes and discovering common “themes.”
  • Themes describe the patterns/phenomenon as results.
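The transcribing-coding-themes sequence listed above can be made concrete with a toy example. Everything here (the codebook and the coded segments) is hypothetical; in a real study, the analyst assigns codes while reading the transcript, not programmatically.

```python
from collections import Counter

# Hypothetical codebook: short code -> what it marks in the transcript.
codebook = {
    "TIME": "mentions time pressure",
    "PEER": "mentions peers or colleagues",
}

# Transcript segments paired with the codes an analyst assigned to them.
coded_segments = [
    ("I never have enough hours in the day.", ["TIME"]),
    ("My colleagues share the load when it gets bad.", ["PEER"]),
    ("Deadlines pile up, but a coworker usually steps in.", ["TIME", "PEER"]),
]

# Recurring or co-occurring codes point toward candidate themes.
code_counts = Counter(code for _, codes in coded_segments for code in codes)
print(code_counts)  # Counter({'TIME': 2, 'PEER': 2})
```

Codes that recur across many segments, or that frequently co-occur, become candidates for the themes reported as results.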

Overview of Methods

1. Interview (Individual, focus groups)

What is the difference between an interview and a survey? Primarily, open-ended questions differentiate the two. Qualitative researchers are concerned with making inference based on perspective, so it is extremely important to get as much data as possible for later analysis. Researchers spend a considerable amount of time designing interview questions. Interviews are designed to generate participant perspectives about ideas, opinions, and experiences.

2. Observation (Individual, group, location)

How is data derived from an observation? The researcher may use a variety of methods for observing, including taking general notes, using checklists, or time-and-motion logs. The considerable time it takes for even a short observation deters many researchers from using this method. Also, the researcher risks introducing his or her own interpretation when taking notes, which is accepted by qualitative researchers but meets resistance from post-positivists. Observations are designed to generate data on activities and behaviors, and are generally more focused on setting than other methods.

3. Document Analysis (Content analysis of written data)

What types of documents do qualitative researchers analyze? Virtually anything that supports the question asked. Print media has long been a staple data source for qualitative researchers, but electronic media (email, blogs, user Web pages, and even social network profiles) have extended the data qualitative researchers can collect and analyze. The greatest challenge offered by document analysis can be sifting through all of the data to make general observations.

A Few Qualitative Research Designs

1. Biographical Study

A biographical study is often the first design type that comes to mind for most people. For example, consider O’Brien’s John F. Kennedy: A Biography. The author takes a collection of archival documents (interviews, speeches, and other writings) and various media (pictures, audio, and video footage) to present a comprehensive story of JFK. In the general sense, a biographical study is considered an exhaustive account of a life experience; however, just as some studies are limited to single aspects of a phenomenon, the focus of a biographical study can be much narrower. The film Madame Curie is an example. Crawford studies the film from a biographical perspective to present the reader with an examination of how all aspects of a film (director’s perspective, actors, camera angles, historical setting) work to present a biography. Read the introduction and scan the text to get a feel for this perspective.

2. Phenomenology

Your first step should be to take this word apart: phenomenon refers to an occurrence or experience, logical refers to a path toward understanding. So, we have an occurrence and a path (let’s go with an individual’s experience), which leads to a way of looking at the phenomenon from an individual’s point of view. The reactions, perceptions, and feelings of an individual (or group of individuals) as she/he experienced an event are principally important to the phenomenologist looking to understand an event beyond purely quantitative details. Gaston-Gayles et al.’s (2005) look at how the civil rights era changed the role of college administrators is a good example. The authors interview men and women who were administrators during that time to identify how the profession changed as a result.

3. Grounded Theory

In a grounded theory study, interpretations are continually derived from raw data. A keyword to remember is emergent. The story emerges from the data. Often, researchers will begin with a broad topic, then use qualitative methods to gather information that defines (or further refines) a research question. For example, a teacher might want to know what effects the implementation of a dress code might have on discipline. Instead of formulating specific questions, a grounded theorist would begin by interviewing students, parents, and/or teachers, and perhaps asking students to write an essay about their thoughts on a dress code. The researcher would then follow the process of developing themes from reading the text by coding specific examples (using a highlighter, maybe) of where respondents mentioned common things. Resistance might be a common pattern emerging from the text, which may then become a topic for further analysis.

A grounded theory study is dynamic, in that it can be continually revised throughout nearly all phases of the study. You can imagine that this would frustrate a quantitative researcher. However, remember that perspective is centrally important to the qualitative researcher. While the end result of a grounded theory study is to generate some broad themes, the researcher is not making an attempt to generalize the study in the same, objective way characteristic of quantitative research. Here is a link to a grounded theory article on student leadership.

4. Ethnography

Those with sociology or anthropology backgrounds will be most familiar with this design. Ethnography focuses on meaning, largely through direct field observation. Researchers generally (though not always) become part of a culture that they wish to study, then present a picture of that culture through the “eyes” of its members. One of the most famous ethnographers is Jane Goodall, who studied chimpanzees by living among them in their native East African habitat.

5. Case Study

A case study is an in-depth analysis of people, events, and relationships, bounded by some unifying factor. An example is principal leadership in middle schools. Important aspects include not only the principal’s behaviors and views on leadership, but also the perceptions of those who interact with her/him, the context of the school, outside constituents, comparison to other principals, and other quantitative “variables.” Often, you may see a case study labeled “ethnographic case study” which generally refers to a more comprehensive study focused on a person or group of people, as the above example.

Case studies do not have to be people-focused, however, as a case study to look at a program might be conducted to see how it accomplishes its intended outcomes. For example, the Department of Education might conduct a case study on a curricular implementation in a school district – examining how new curriculum moves from development to implementation to outcomes at each level of interaction (developer, school leadership, teacher, student).


About Research Rundowns

Research Rundowns was made possible by support from the Dewar College of Education at Valdosta State University .



Research Design | Step-by-Step Guide with Examples

Published on 5 May 2022 by Shona McCombes . Revised on 20 March 2023.

A research design is a strategy for answering your research question  using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Table of contents

  • Step 1: Consider your aims and approach
  • Step 2: Choose a type of research design
  • Step 3: Identify your population and sampling method
  • Step 4: Choose your data collection methods
  • Step 5: Plan your data collection procedures
  • Step 6: Decide on your data analysis strategies
  • Frequently asked questions

Step 1: Consider your aims and approach

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.


Step 2: Choose a type of research design

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and   quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

The table below shows some common types of qualitative design. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.
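To illustrate the distinction, here is a minimal Python sketch; the population and sample sizes are hypothetical and only meant to show the mechanics:

```python
import random

# Hypothetical sampling frame: 10,000 student IDs (illustrative only).
population = list(range(10_000))

# Probability sampling: a simple random sample, where every member of
# the population has an equal, known chance of selection.
random.seed(42)  # fixed seed so the example is reproducible
probability_sample = random.sample(population, k=100)

# Non-probability sampling: a convenience sample, e.g. the first 100
# people you can reach. Cheaper, but prone to selection bias.
convenience_sample = population[:100]

print(len(probability_sample), len(convenience_sample))
```

The random sample lets you quantify how confidently results generalise; the convenience sample systematically over-represents whoever happens to be first in the frame.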

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Step 4: Choose your data collection methods

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

Step 5: Plan your data collection procedures

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.


Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity has already been established.

Reliability and validity

Reliability means your results can be consistently reproduced , while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

Step 6: Decide on your data analysis strategies

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
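As an illustrative sketch (the scores are hypothetical), all three descriptive summaries can be computed with Python's standard library:

```python
import statistics
from collections import Counter

# Hypothetical test scores from a sample of 12 students.
scores = [72, 85, 85, 90, 64, 78, 85, 91, 70, 88, 76, 82]

distribution = Counter(scores)      # frequency of each score
mean = statistics.mean(scores)      # central tendency
std_dev = statistics.stdev(scores)  # variability (sample standard deviation)

print(distribution.most_common(1))  # most frequent score and its count
print(f"mean = {mean:.1f}, sd = {std_dev:.1f}")
```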

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

Frequently asked questions

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.
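As a toy sketch, such an operational definition could combine those indicators into a single composite score. The function name, weights, and caps below are purely illustrative, not an established instrument:

```python
# Purely illustrative: one way to operationalise "social anxiety" as a
# weighted composite of three measurable indicators.
def social_anxiety_score(self_rating, avoidance_events, symptom_count):
    """Combine three indicators into a single 0-10 composite score."""
    # self_rating: 0-10 questionnaire self-rating
    # avoidance_events: times crowded places were avoided this week (capped at 10)
    # symptom_count: physical symptoms reported in social situations (capped at 10)
    raw = (0.5 * self_rating
           + 0.3 * min(avoidance_events, 10)
           + 0.2 * min(symptom_count, 10))
    return round(raw, 1)

print(social_anxiety_score(6, 4, 2))  # moderate self-rating, some avoidance
```

The point is not the particular formula but that, once operationalised, an abstract concept becomes a number that can be compared across participants.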

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

McCombes, S. (2023, March 20). Research Design | Step-by-Step Guide with Examples. Scribbr. Retrieved 6 November 2023, from




What Is Qualitative Research? | Methods & Examples

Published on June 19, 2020 by Pritha Bhandari . Revised on June 22, 2023.

Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research.

Qualitative research is the opposite of quantitative research , which involves collecting and analyzing numerical data for statistical analysis.

Qualitative research is commonly used in the humanities and social sciences, in subjects such as anthropology, sociology, education, health sciences, history, etc.

  • How does social media shape body image in teenagers?
  • How do children and adults interpret healthy eating in the UK?
  • What factors influence employee retention in a large organization?
  • How is anxiety experienced around the world?
  • How can teachers integrate social issues into science curriculums?

Table of contents

  • Approaches to qualitative research
  • Qualitative research methods
  • Qualitative data analysis
  • Advantages of qualitative research
  • Disadvantages of qualitative research
  • Other interesting articles
  • Frequently asked questions about qualitative research

Approaches to qualitative research

Qualitative research is used to understand how people experience the world. While there are many approaches to qualitative research, they tend to be flexible and focus on retaining rich meaning when interpreting data.

Common approaches include grounded theory, ethnography , action research , phenomenological research, and narrative research. They share some similarities, but emphasize different aims and perspectives.

Note that qualitative research is at risk for certain research biases including the Hawthorne effect , observer bias , recall bias , and social desirability bias . While not always totally avoidable, awareness of potential biases as you collect and analyze your data can prevent them from impacting your work too much.


Qualitative research methods

Each of these research approaches involves using one or more data collection methods . These are some of the most common qualitative methods:

  • Observations: recording what you have seen, heard, or encountered in detailed field notes.
  • Interviews:  personally asking people questions in one-on-one conversations.
  • Focus groups: asking questions and generating discussion among a group of people.
  • Surveys: distributing questionnaires with open-ended questions.
  • Secondary research: collecting existing data in the form of texts, images, audio or video recordings, etc.
For example, in a study of a company’s culture, you might combine several of these methods:

  • You take field notes with observations and reflect on your own experiences of the company culture.
  • You distribute open-ended surveys to employees across all the company’s offices by email to find out if the culture varies across locations.
  • You conduct in-depth interviews with employees in your office to learn about their experiences and perspectives in greater detail.

Qualitative researchers often consider themselves “instruments” in research because all observations, interpretations and analyses are filtered through their own personal lens.

For this reason, when writing up your methodology for qualitative research, it’s important to reflect on your approach and to thoroughly explain the choices you made in collecting and analyzing the data.

Qualitative data analysis

Qualitative data can take the form of texts, photos, videos and audio. For example, you might be working with interview transcripts, survey responses, fieldnotes, or recordings from natural settings.

Most types of qualitative data analysis share the same five steps:

  • Prepare and organize your data. This may mean transcribing interviews or typing up fieldnotes.
  • Review and explore your data. Examine the data for patterns or repeated ideas that emerge.
  • Develop a data coding system. Based on your initial ideas, establish a set of codes that you can apply to categorize your data.
  • Assign codes to the data. For example, in qualitative survey analysis, this may mean going through each participant’s responses and tagging them with codes in a spreadsheet. As you go through your data, you can create new codes to add to your system if necessary.
  • Identify recurring themes. Link codes together into cohesive, overarching themes.
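Steps 3 to 5 can be sketched in a minimal, hypothetical Python example. Real qualitative coding is interpretive, not keyword matching; this only illustrates the bookkeeping of applying codes and grouping them into themes:

```python
from collections import Counter

# Hypothetical open-ended survey responses.
responses = [
    "My manager never gives feedback",
    "I love the flexible hours",
    "Feedback is rare and vague",
    "Flexible scheduling keeps me here",
]

# Step 3: a simple code system (keywords stand in for the researcher's
# interpretive judgement).
code_keywords = {
    "feedback": ["feedback"],
    "flexibility": ["flexible", "scheduling"],
}

# Step 4: assign codes to each response.
coded = {
    response: [
        code
        for code, words in code_keywords.items()
        if any(word in response.lower() for word in words)
    ]
    for response in responses
}

# Step 5: identify recurring themes (codes appearing in 2+ responses).
counts = Counter(code for codes in coded.values() for code in codes)
themes = sorted(code for code, n in counts.items() if n >= 2)
print(themes)
```

In practice, a spreadsheet or dedicated QDA software plays the role of the `coded` dictionary, and codes evolve as you reread the data.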

There are several specific approaches to analyzing qualitative data. Although these methods share similar processes, they emphasize different concepts.

Advantages of qualitative research

Qualitative research often tries to preserve the voice and perspective of participants and can be adjusted as new research questions arise. Qualitative research is good for:

  • Flexibility

The data collection and analysis process can be adapted as new ideas or patterns emerge. They are not rigidly decided beforehand.

  • Natural settings

Data collection occurs in real-world contexts or in naturalistic ways.

  • Meaningful insights

Detailed descriptions of people’s experiences, feelings and perceptions can be used in designing, testing or improving systems or products.

  • Generation of new ideas

Open-ended responses mean that researchers can uncover novel problems or opportunities that they wouldn’t have thought of otherwise.


Disadvantages of qualitative research

Researchers must consider practical and theoretical limitations in analyzing and interpreting their data. Qualitative research suffers from:

  • Unreliability

The real-world setting often makes qualitative research unreliable because of uncontrolled factors that affect the data.

  • Subjectivity

Due to the researcher’s primary role in analyzing and interpreting data, qualitative research cannot be replicated . The researcher decides what is important and what is irrelevant in data analysis, so interpretations of the same data can vary greatly.

  • Limited generalizability

Small samples are often used to gather detailed data about specific contexts. Despite rigorous analysis procedures, it is difficult to draw generalizable conclusions because the data may be biased and unrepresentative of the wider population .

  • Labor-intensive

Although software can be used to manage and record large amounts of text, data analysis often has to be checked or performed manually.

Other interesting articles

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Chi square goodness of fit test
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Inclusion and exclusion criteria

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Frequently asked questions about qualitative research

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

Bhandari, P. (2023, June 22). What Is Qualitative Research? | Methods & Examples. Scribbr. Retrieved November 6, 2023, from



Organizing Your Social Sciences Research Paper

Qualitative Methods

The word qualitative implies an emphasis on the qualities of entities and on processes and meanings that are not experimentally examined or measured [if measured at all] in terms of quantity, amount, intensity, or frequency. Qualitative researchers stress the socially constructed nature of reality, the intimate relationship between the researcher and what is studied, and the situational constraints that shape inquiry. Such researchers emphasize the value-laden nature of inquiry. They seek answers to questions that stress how social experience is created and given meaning. In contrast, quantitative studies emphasize the measurement and analysis of causal relationships between variables, not processes. Qualitative forms of inquiry are considered by many social and behavioral scientists to be as much a perspective on how to approach investigating a research problem as it is a method.

Denzin, Norman K. and Yvonna S. Lincoln. “Introduction: The Discipline and Practice of Qualitative Research.” In The Sage Handbook of Qualitative Research. Norman K. Denzin and Yvonna S. Lincoln, eds. 3rd edition. (Thousand Oaks, CA: Sage, 2005), p. 10.

Characteristics of Qualitative Research

Below are the three key elements that define a qualitative research study and the applied forms each take in the investigation of a research problem.

  • Naturalistic -- refers to studying real-world situations as they unfold naturally; non-manipulative and non-controlling; the researcher is open to whatever emerges [i.e., there is a lack of predetermined constraints on findings].
  • Emergent -- acceptance of adapting inquiry as understanding deepens and/or situations change; the researcher avoids rigid designs that eliminate responding to opportunities to pursue new paths of discovery as they emerge.
  • Purposeful -- cases for study [e.g., people, organizations, communities, cultures, events, critical incidences] are selected because they are “information rich” and illuminative. That is, they offer useful manifestations of the phenomenon of interest; sampling is aimed at insight about the phenomenon, not empirical generalization derived from a sample and applied to a population.

The Collection of Data

  • Data -- observations yield a detailed, "thick description" [in-depth understanding]; interviews capture direct quotations about people’s personal perspectives and lived experiences; often derived from carefully conducted case studies and review of material culture.
  • Personal experience and engagement -- researcher has direct contact with and gets close to the people, situation, and phenomenon under investigation; the researcher’s personal experiences and insights are an important part of the inquiry and critical to understanding the phenomenon.
  • Empathic neutrality -- an empathic stance in working with study respondents seeks vicarious understanding without judgment [neutrality] by showing openness, sensitivity, respect, awareness, and responsiveness; in observation, it means being fully present [mindfulness].
  • Dynamic systems -- there is attention to process; assumes change is ongoing, whether the focus is on an individual, an organization, a community, or an entire culture, therefore, the researcher is mindful of and attentive to system and situational dynamics.

The Analysis

  • Unique case orientation -- assumes that each case is special and unique; the first level of analysis is being true to, respecting, and capturing the details of the individual cases being studied; cross-case analysis follows from and depends upon the quality of individual case studies.
  • Inductive analysis -- immersion in the details and specifics of the data to discover important patterns, themes, and inter-relationships; begins by exploring, then confirming findings, guided by analytical principles rather than rules.
  • Holistic perspective -- the whole phenomenon under study is understood as a complex system that is more than the sum of its parts; the focus is on complex interdependencies and system dynamics that cannot be reduced in any meaningful way to linear, cause and effect relationships and/or a few discrete variables.
  • Context sensitive -- places findings in a social, historical, and temporal context; researcher is careful about [even dubious of] the possibility or meaningfulness of generalizations across time and space; emphasizes careful comparative case study analysis and extrapolating patterns for possible transferability and adaptation in new settings.
  • Voice, perspective, and reflexivity -- the qualitative methodologist owns and is reflective about her or his own voice and perspective; a credible voice conveys authenticity and trustworthiness; complete objectivity being impossible and pure subjectivity undermining credibility, the researcher's focus reflects a balance between understanding and depicting the world authentically in all its complexity and of being self-analytical, politically aware, and reflexive in consciousness.

Berg, Bruce Lawrence. Qualitative Research Methods for the Social Sciences. 8th edition. Boston, MA: Allyn and Bacon, 2012; Denzin, Norman K. and Yvonna S. Lincoln. Handbook of Qualitative Research. 2nd edition. Thousand Oaks, CA: Sage, 2000; Marshall, Catherine and Gretchen B. Rossman. Designing Qualitative Research. 2nd edition. Thousand Oaks, CA: Sage Publications, 1995; Merriam, Sharan B. Qualitative Research: A Guide to Design and Implementation. San Francisco, CA: Jossey-Bass, 2009.

Basic Research Design for Qualitative Studies

Unlike positivist or experimental research, which follows a linear, one-directional sequence of design steps, qualitative research studies vary considerably in how they are organized. In general, qualitative researchers attempt to describe and interpret human behavior based primarily on the words of selected individuals [a.k.a., “informants” or “respondents”] and/or through the interpretation of their material culture or occupied space. A reflexive process underpins every stage of a qualitative study to ensure that researcher biases, presuppositions, and interpretations are clearly evident, so that the reader is better able to judge the overall validity of the research. According to Maxwell (2009), there are five, not necessarily ordered or sequential, components in qualitative research designs. How they are presented depends upon the research philosophy and theoretical framework of the study, the methods chosen, and the general assumptions underpinning the study.

Goals

Describe the central research problem being addressed, but avoid describing any anticipated outcomes. Questions to ask yourself are: Why is your study worth doing? What issues do you want to clarify, and what practices and policies do you want it to influence? Why do you want to conduct this study, and why should the reader care about the results?

Conceptual Framework

Questions to ask yourself are: What do you think is going on with the issues, settings, or people you plan to study? What theories, beliefs, and prior research findings will guide or inform your research, and what literature, preliminary studies, and personal experiences will you draw upon for understanding the people or issues you are studying? Do not only report the results of other studies in your review of the literature; note the methods used as well. If appropriate, describe why earlier studies using quantitative methods were inadequate in addressing the research problem.

Research Questions

Usually there is a research problem that frames your qualitative study and influences your decision about what methods to use, but qualitative designs generally lack an accompanying hypothesis or set of assumptions because the findings are emergent and unpredictable. In this context, more specific research questions are generally the result of an interactive design process rather than the starting point for that process. Questions to ask yourself are: What do you specifically want to learn or understand by conducting this study? What do you not know about the things you are studying that you want to learn? What questions will your research attempt to answer, and how are these questions related to one another?

Methods

Structured approaches to applying a method or methods help to ensure comparability of data across sources and researchers; they are therefore useful in answering questions that deal with differences between phenomena and the explanations for those differences [variance questions]. An unstructured approach allows the researcher to focus on the particular phenomena being studied. This facilitates an understanding of the processes that led to specific outcomes, trading generalizability and comparability for internal validity and contextual and evaluative understanding. Questions to ask yourself are: What will you actually do in conducting this study? What approaches and techniques will you use to collect and analyze your data, and how do these constitute an integrated strategy?

Validity

In contrast to quantitative studies, where the goal is to design, in advance, “controls” such as formal comparisons, sampling strategies, or statistical manipulations to address anticipated and unanticipated threats to validity, qualitative researchers must attempt to rule out most threats to validity after the research has begun, relying on evidence collected during the research process itself to argue that any alternative explanations for a phenomenon are implausible. Questions to ask yourself are: How might your results and conclusions be wrong? What are the plausible alternative interpretations and validity threats, and how will you deal with them? How can the data that you have, or could potentially collect, support or challenge your ideas about what’s going on? Why should we believe your results?

Conclusion

Although Maxwell does not mention a conclusion as one of the components of a qualitative research design, you should formally conclude your study. Briefly reiterate the goals of your study and the ways in which your research addressed them. Discuss the benefits of your study and how stakeholders can use your results. Also, note the limitations of your study and, if appropriate, place them in the context of areas in need of further research.

Chenail, Ronald J. Introduction to Qualitative Research Design. Nova Southeastern University; Heath, A. W. The Proposal in Qualitative Research. The Qualitative Report 3 (March 1997); Marshall, Catherine and Gretchen B. Rossman. Designing Qualitative Research. 3rd edition. Thousand Oaks, CA: Sage, 1999; Maxwell, Joseph A. "Designing a Qualitative Study." In The SAGE Handbook of Applied Social Research Methods. Leonard Bickman and Debra J. Rog, eds. 2nd ed. (Thousand Oaks, CA: Sage, 2009), pp. 214-253; Qualitative Research Methods. Writing@CSU. Colorado State University; Yin, Robert K. Qualitative Research from Start to Finish. 2nd edition. New York: Guilford, 2015.

Strengths of Using Qualitative Methods

The advantage of using qualitative methods is that they generate rich, detailed data that leave the participants' perspectives intact and provide multiple contexts for understanding the phenomenon under study. In this way, qualitative research can be used to vividly demonstrate phenomena or to conduct cross-case comparisons and analysis of individuals or groups.

Among the specific strengths of using qualitative methods to study social science research problems is the ability to:

  • Obtain a more realistic view of the lived world that cannot be captured in numerical data and statistical analysis;
  • Provide the researcher with the perspective of the participants of the study through immersion in a culture or situation and as a result of direct interaction with them;
  • Allow the researcher to describe existing phenomena and current situations;
  • Develop flexible ways to perform data collection, subsequent analysis, and interpretation of collected information;
  • Yield results that can be helpful in pioneering new ways of understanding;
  • Respond to changes that occur while conducting the study [e.g., extended fieldwork or observation] and offer the flexibility to shift the focus of the research as a result;
  • Provide a holistic view of the phenomena under investigation;
  • Respond to local situations, conditions, and needs of participants;
  • Interact with the research subjects in their own language and on their own terms; and,
  • Create a descriptive capability based on primary and unstructured data.

Anderson, Claire. “Presenting and Evaluating Qualitative Research.” American Journal of Pharmaceutical Education 74 (2010): 1-7; Denzin, Norman K. and Yvonna S. Lincoln. Handbook of Qualitative Research. 2nd edition. Thousand Oaks, CA: Sage, 2000; Merriam, Sharan B. Qualitative Research: A Guide to Design and Implementation. San Francisco, CA: Jossey-Bass, 2009.

Limitations of Using Qualitative Methods

Most of the limitations of qualitative research techniques mirror their inherent strengths. For example, small sample sizes help you investigate research problems in a comprehensive and in-depth manner; however, they undermine opportunities to draw useful generalizations from, or to make broad policy recommendations based upon, the findings. Additionally, as the primary instrument of investigation, qualitative researchers are often embedded in the cultures and experiences of others. However, cultural embeddedness increases the opportunity for bias, generated from conscious or unconscious assumptions about the study setting, to enter into how data is gathered, interpreted, and reported.

Some specific limitations associated with using qualitative methods to study research problems in the social sciences include the following:

  • Drifting away from the original objectives of the study in response to the changing nature of the context under which the research is conducted;
  • Arriving at different conclusions based on the same information depending on the personal characteristics of the researcher;
  • Replication of a study is very difficult;
  • Research using human subjects increases the chance of ethical dilemmas that undermine the overall validity of the study;
  • An inability to investigate causality between different research phenomena;
  • Difficulty in explaining differences in the quality and quantity of information obtained from different respondents, which can lead to inconsistent conclusions;
  • Data gathering and analysis is often time consuming and/or expensive;
  • Requires a high level of experience from the researcher to obtain the targeted information from the respondent;
  • May lack consistency and reliability because the researcher can employ different probing techniques and the respondent can choose to tell some particular stories and ignore others; and,
  • Generation of a significant amount of data that cannot easily be broken down into manageable parts for analysis.

Research Tip

Human Subject Research and Institutional Review Board Approval

Almost every socio-behavioral study requires you to submit your proposed research plan to an Institutional Review Board. The role of the Board is to evaluate your research proposal and determine whether it will be conducted ethically and in accordance with the regulations, institutional policies, and Code of Ethics set forth by the university. The purpose of the review is to protect the rights and welfare of individuals participating in your study. The review is intended to ensure equitable selection of respondents, that you have obtained adequate informed consent, that there is clear assessment and minimization of risks to participants and to the university [read: no lawsuits!], and that privacy and confidentiality are maintained throughout the research process and beyond. Go to the USC IRB website for detailed information and templates of forms you need to submit before you can proceed. If you are unsure whether your study is subject to IRB review, consult with your professor or academic advisor.

Chenail, Ronald J. Introduction to Qualitative Research Design. Nova Southeastern University; Labaree, Robert V. "Working Successfully with Your Institutional Review Board: Practical Advice for Academic Librarians." College and Research Libraries News 71 (April 2010): 190-193.

Another Research Tip

Finding Examples of How to Apply Different Types of Research Methods

SAGE Publications is a major publisher of studies about how to design and conduct research in the social and behavioral sciences. Their SAGE Research Methods Online and Cases database includes contents from books, articles, encyclopedias, handbooks, and videos covering social science research design and methods, including the complete Little Green Book Series of Quantitative Applications in the Social Sciences and the Little Blue Book Series of Qualitative Research techniques. The database also includes case studies outlining the research methods used in real research projects. This is an excellent source for finding definitions of key terms and descriptions of research design and practice, techniques of data gathering, analysis, and reporting, and information about theories of research [e.g., grounded theory]. The database covers both qualitative and quantitative research methods as well as mixed methods approaches to conducting research.



18 Qualitative Research Examples


Qualitative research is an approach to scientific research that involves using observation to gather and analyze non-numerical, in-depth, and well-contextualized datasets.

It serves as an integral part of academic, professional, and even daily decision-making processes (Baxter & Jack, 2008).

Methods of qualitative research encompass a wide range of techniques, from in-depth personal encounters, like ethnographies (studying cultures in-depth) and autoethnographies (examining one’s own cultural experiences), to the collection of diverse perspectives on topics through methods like focus group interviews (gatherings of individuals to discuss specific topics).

Qualitative Research Examples

1. Ethnography

Definition: Ethnography is a qualitative research design aimed at exploring cultural phenomena. Rooted in the discipline of anthropology, this research approach investigates the social interactions, behaviors, and perceptions within groups, communities, or organizations.

Ethnographic research is characterized by extended observation of the group, often through direct participation, in the participants’ environment. An ethnographer typically lives with the study group for extended periods, intricately observing their everyday lives (Khan, 2014).

It aims to present a complete, detailed and accurate picture of the observed social life, rituals, symbols, and values from the perspective of the study group.

Example of Ethnographic Research

Title: “The Everyday Lives of Men: An Ethnographic Investigation of Young Adult Male Identity”

Citation: Evans, J. (2010). The Everyday Lives of Men: An Ethnographic Investigation of Young Adult Male Identity. Peter Lang.

Overview: This study by Evans (2010) provides a rich narrative of young adult male identity as experienced in everyday life. The author immersed himself among a group of young men, participating in their activities and cultivating a deep understanding of their lifestyle, values, and motivations. This research exemplified the ethnographic approach, revealing complexities of the subjects’ identities and societal roles, which could hardly be accessed through other qualitative research designs.

Read my Full Guide on Ethnography Here

2. Autoethnography

Definition: Autoethnography is an approach to qualitative research where the researcher uses their own personal experiences to extend the understanding of a certain group, culture, or setting. Essentially, it allows for the exploration of self within the context of social phenomena.

Unlike traditional ethnography, which focuses on the study of others, autoethnography turns the ethnographic gaze inward, allowing the researcher to use their personal experiences within a culture as rich qualitative data (Durham, 2019).

The objective is to critically appraise one’s personal experiences as they navigate and negotiate cultural, political, and social meanings. The researcher becomes both the observer and the participant, intertwining personal and cultural experiences in the research.

Example of Autoethnographic Research

Title: “A Day In The Life Of An NHS Nurse”

Citation: Osben, J. (2019). A day in the life of a NHS nurse in 21st Century Britain: An auto-ethnography. The Journal of Autoethnography for Health & Social Care. 1(1).

Overview: This study presents an autoethnography of a day in the life of an NHS nurse (who, of course, is also the researcher). The author uses the research to achieve reflexivity, with the researcher concluding: “Scrutinising my practice and situating it within a wider contextual backdrop has compelled me to significantly increase my level of scrutiny into the driving forces that influence my practice.”

Read my Full Guide on Autoethnography Here

3. Semi-Structured Interviews

Definition: Semi-structured interviews stand as one of the most frequently used methods in qualitative research. These interviews are planned and utilize a set of pre-established questions, but also allow for the interviewer to steer the conversation in other directions based on the responses given by the interviewee.

In semi-structured interviews, the interviewer prepares a guide that outlines the focal points of the discussion. However, the interview is flexible, allowing for more in-depth probing if the interviewer deems it necessary (Qu & Dumay, 2011). This style of interviewing strikes a balance between structured interviews, which might limit the discussion, and unstructured ones, which could lack focus.

Example of Semi-Structured Interview Research

Title: “Factors influencing adherence to cancer treatment in older adults with cancer: a systematic review”

Citation: Puts, M., et al. (2014). Factors influencing adherence to cancer treatment in older adults with cancer: a systematic review. Annals of Oncology, 25(3), 564-577.

Overview: Puts et al. (2014) executed an extensive systematic review in which they conducted semi-structured interviews with older adults suffering from cancer to examine the factors influencing their adherence to cancer treatment. The findings suggested that various factors, including side effects, faith in healthcare professionals, and social support have substantial impacts on treatment adherence. This research demonstrates how semi-structured interviews can provide rich and profound insights into the subjective experiences of patients.

4. Focus Groups

Definition: Focus groups are a qualitative research method that involves organized discussion with a selected group of individuals to gain their perspectives on a specific concept, product, or phenomenon. Typically, these discussions are guided by a moderator.

During a focus group session, the moderator has a list of questions or topics to discuss, and participants are encouraged to interact with each other (Morgan, 2010). This interactivity can stimulate more information and provide a broader understanding of the issue under scrutiny. The open format allows participants to ask questions and respond freely, offering invaluable insights into attitudes, experiences, and group norms.

Example of Focus Group Research

Title: “Perspectives of Older Adults on Aging Well: A Focus Group Study”

Citation: Halaweh, H., Dahlin-Ivanoff, S., Svantesson, U., & Willén, C. (2018). Perspectives of older adults on aging well: a focus group study. Journal of Aging Research.

Overview: This study aimed to explore what older adults (aged 60 years and older) perceived to be ‘aging well’. The researchers identified three major themes from their focus group interviews: a sense of well-being, having good physical health, and preserving good mental health. The findings highlight the importance of factors such as positive emotions, social engagement, physical activity, healthy eating habits, and maintaining independence in promoting aging well among older adults.

5. Phenomenology

Definition: Phenomenology, a qualitative research method, involves the examination of lived experiences to gain an in-depth understanding of the essence or underlying meanings of a phenomenon.

The focus of phenomenology lies in meticulously describing participants’ conscious experiences related to the chosen phenomenon (Padilla-Díaz, 2015).

In a phenomenological study, the researcher collects detailed, first-hand perspectives of the participants, typically via in-depth interviews, and then uses various strategies to interpret and structure these experiences, ultimately revealing essential themes (Creswell, 2013). This approach focuses on the perspective of individuals experiencing the phenomenon, seeking to explore, clarify, and understand the meanings they attach to those experiences.

Example of Phenomenology Research

Title: “A phenomenological approach to experiences with technology: current state, promise, and future directions for research”

Citation: Cilesiz, S. (2011). A phenomenological approach to experiences with technology: Current state, promise, and future directions for research. Educational Technology Research and Development, 59, 487-510.

Overview: A phenomenological approach to experiences with technology by Sebnem Cilesiz represents a good starting point for formulating a phenomenological study. With its focus on the ‘essence of experience’, this piece presents methodological, reliability, validity, and data analysis techniques that phenomenologists use to explain how people experience technology in their everyday lives.

6. Grounded Theory

Definition: Grounded theory is a systematic methodology in qualitative research that typically applies inductive reasoning. The primary aim is to develop a theoretical explanation or framework for a process, action, or interaction grounded in, and arising from, empirical data (Birks & Mills, 2015).

In grounded theory, data collection and analysis work together in a recursive process. The researcher collects data, analyses it, and then collects more data based on the evolving understanding of the research context. This ongoing process continues until a comprehensive theory that represents the data and the associated phenomenon emerges – a point known as theoretical saturation (Charmaz, 2014).
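The recursive collect-analyze-collect cycle can be sketched schematically. To be clear, the Python sketch below is purely illustrative: grounded-theory coding is an interpretive human activity, not an algorithm, and the code batches and saturation rule here are invented stand-ins for the researcher's judgment that no new codes are emerging.

```python
def reached_saturation(codebook: set, new_codes: set) -> bool:
    # Theoretical saturation: a new round of data adds no codes
    # beyond those already in the codebook.
    return new_codes <= codebook

def grounded_theory_cycle(batches):
    """Alternate data collection and analysis until saturation."""
    codebook: set = set()
    rounds = 0
    for new_codes in batches:  # each batch = codes from one round of fieldwork
        rounds += 1
        if codebook and reached_saturation(codebook, new_codes):
            break              # stop collecting: the theory is saturated
        codebook |= new_codes  # integrate emergent codes, then collect again
    return codebook, rounds

# Hypothetical codes emerging across three interview rounds:
batches = [{"autonomy", "support"}, {"support", "challenge"}, {"challenge"}]
codebook, rounds = grounded_theory_cycle(batches)
```

Here the third round yields nothing new, so collection stops. The point of the sketch is that saturation is a stopping criterion reached during analysis, not a sample size fixed in advance.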

Example of Grounded Theory Research

Title: “Student Engagement in High School Classrooms from the Perspective of Flow Theory”

Citation: Shernoff, D. J., Csikszentmihalyi, M., Shneider, B., & Shernoff, E. S. (2003). Student engagement in high school classrooms from the perspective of flow theory. School Psychology Quarterly, 18(2), 158–176.

Overview: Shernoff and colleagues (2003) used grounded theory to explore student engagement in high school classrooms. The researchers collected data through student self-reports, interviews, and observations. Key findings revealed that academic challenge, student autonomy, and teacher support emerged as the most significant factors influencing students’ engagement, demonstrating how grounded theory can illuminate complex dynamics within real-world contexts.

7. Narrative Research

Definition: Narrative research is a qualitative research method dedicated to storytelling and understanding how individuals experience the world. It focuses on studying an individual’s life and experiences as narrated by that individual (Polkinghorne, 2013).

In narrative research, the researcher collects data through methods such as interviews, observations, and document analysis. The emphasis is on the stories told by participants – narratives that reflect their experiences, thoughts, and feelings.

These stories are then interpreted by the researcher, who attempts to understand the meaning the participant attributes to these experiences (Josselson, 2011).

Example of Narrative Research

Title: “Narrative Structures and the Language of the Self”

Citation: McAdams, D. P., Josselson, R., & Lieblich, A. (2006). Identity and story: Creating self in narrative. American Psychological Association.

Overview: In this innovative study, McAdams et al. (2006) employed narrative research to explore how individuals construct their identities through the stories they tell about themselves. By examining personal narratives, the researchers discerned patterns associated with characters, motivations, conflicts, and resolutions, contributing valuable insights about the relationship between narrative and individual identity.

8. Case Study Research

Definition: Case study research is a qualitative research method that involves an in-depth investigation of a single instance or event: a case. These ‘cases’ can range from individuals, groups, or entities to specific projects, programs, or strategies (Creswell, 2013).

The case study method typically uses multiple sources of information for comprehensive contextual analysis. It aims to explore and understand the complexity and uniqueness of a particular case in a real-world context (Merriam & Tisdell, 2015). This investigation could result in a detailed description of the case, a process for its development, or an exploration of a related issue or problem.

Example of Case Study Research

Title: “Teacher’s Role in Fostering Preschoolers’ Computational Thinking: An Exploratory Case Study”

Citation: Wang, X. C., Choi, Y., Benson, K., Eggleston, C., & Weber, D. (2021). Teacher’s role in fostering preschoolers’ computational thinking: An exploratory case study. Early Education and Development, 32(1), 26-48.

Overview: This study investigates the role of teachers in promoting computational thinking skills in preschoolers. The study utilized a qualitative case study methodology to examine the computational thinking scaffolding strategies employed by a teacher interacting with three preschoolers in a small group setting. The findings highlight the importance of teachers’ guidance in fostering computational thinking practices such as problem reformulation/decomposition, systematic testing, and debugging.

Read about some Famous Case Studies in Psychology Here

9. Participant Observation

Definition: Participant observation is a method in which the researcher immerses themselves in a group or community setting to observe the behavior of its members. It is similar to ethnography, but the researcher is generally not embedded for as long a period of time.

The researcher, being a participant, engages in daily activities, interactions, and events as a way of conducting a detailed study of a particular social phenomenon (Kawulich, 2005).

The method involves sustained engagement in the field, maintaining detailed records of observed events, informal interviews, direct participation, and reflexivity. This approach allows for a holistic view of the participants’ lived experiences, behaviours, and interactions within their everyday environment (Dewalt, 2011).

Example of Participant Observation Research

Title: Conflict in the boardroom: a participant observation study of supervisory board dynamics

Citation: Heemskerk, E. M., Heemskerk, K., & Wats, M. M. (2017). Conflict in the boardroom: a participant observation study of supervisory board dynamics. Journal of Management & Governance, 21, 233-263.

Overview: This study examined how conflicts within corporate boards affect their performance. The researchers used a participant observation method, where they actively engaged with 11 supervisory boards and observed their dynamics. They found that having a shared understanding of the board’s role, called a common framework, improved performance by reducing relationship conflicts, encouraging task conflicts, and minimizing conflicts between the board and CEO.

10. Non-Participant Observation

Definition: Non-participant observation is a qualitative research method in which the researcher observes the phenomena of interest without actively participating in the situation, setting, or community being studied.

This method allows the researcher to maintain a position of distance, as they are solely an observer and not a participant in the activities being observed (Kawulich, 2005).

During non-participant observation, the researcher typically records field notes on the actions, interactions, and behaviors observed, focusing on specific aspects of the situation deemed relevant to the research question.

This could include verbal and nonverbal communication, activities, interactions, and environmental contexts (Angrosino, 2007). They could also use video or audio recordings or other methods to collect data.

Example of Non-Participant Observation Research

Title: Mental Health Nurses’ attitudes towards mental illness and recovery-oriented practice in acute inpatient psychiatric units: A non-participant observation study

Citation: Sreeram, A., Cross, W. M., & Townsin, L. (2023). Mental Health Nurses’ attitudes towards mental illness and recovery‐oriented practice in acute inpatient psychiatric units: A non‐participant observation study. International Journal of Mental Health Nursing.

Overview: This study investigated the attitudes of mental health nurses towards mental illness and recovery-oriented practice in acute inpatient psychiatric units. The researchers used a non-participant observation method, meaning they observed the nurses without directly participating in their activities. The findings shed light on the nurses’ perspectives and behaviors, providing valuable insights into their attitudes toward mental health and recovery-focused care in these settings.

11. Content Analysis

Definition: Content Analysis involves scrutinizing textual, visual, or spoken content to categorize and quantify information. The goal is to identify patterns, themes, biases, or other characteristics (Hsieh & Shannon, 2005).

Content Analysis is widely used in various disciplines for a multitude of purposes. Researchers typically use this method to distill large amounts of unstructured data, like interview transcripts, newspaper articles, or social media posts, into manageable and meaningful chunks.

When wielded appropriately, Content Analysis can illuminate the density and frequency of certain themes within a dataset, provide insights into how specific terms or concepts are applied contextually, and offer inferences about the meanings of their content and use (Duriau, Reger, & Pfarrer, 2007).
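Because the tallying step of content analysis is mechanical, it is often automated. The Python sketch below is a minimal illustration only; the `CODING_FRAME` themes and their indicator terms are invented for the example, and a real study would use a validated codebook rather than simple keyword matching.

```python
import re
from collections import Counter

# Hypothetical coding frame: each theme mapped to indicator terms.
CODING_FRAME = {
    "conflict": {"dispute", "clash", "disagreement"},
    "economic": {"cost", "budget", "market"},
    "human_interest": {"family", "story", "personal"},
}

def code_document(text: str) -> Counter:
    """Count occurrences of each theme's indicator terms in one text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return Counter({theme: sum(1 for t in tokens if t in terms)
                    for theme, terms in CODING_FRAME.items()})

# Toy corpus standing in for interview transcripts or news articles:
corpus = [
    "The budget dispute became a personal story for the family.",
    "A market clash over cost dominated the disagreement.",
]
totals = Counter()
for doc in corpus:
    totals += code_document(doc)
```

The resulting `totals` give the density and frequency of each theme across the corpus; interpreting what those frequencies mean in context remains a qualitative judgment.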

Example of Content Analysis

Title: Framing European politics: A content analysis of press and television news .

Citation: Semetko, H. A., & Valkenburg, P. M. (2000). Framing European politics: A content analysis of press and television news. Journal of Communication, 50(2), 93-109.

Overview: This study analyzed press and television news articles about European politics using a method called content analysis. The researchers examined the prevalence of different “frames” in the news, which are ways of presenting information to shape audience perceptions. They found that the most common frames were attribution of responsibility, conflict, economic consequences, human interest, and morality.

Read my Full Guide on Content Analysis Here

12. Discourse Analysis

Definition: Discourse Analysis is a qualitative research method that interprets the meanings, functions, and coherence of language as it is used in context.

Discourse analysis is typically understood through social constructionism, critical theory, and poststructuralism, and is used to understand how language constructs social concepts (Cheek, 2004).

Discourse Analysis offers great breadth, providing tools to examine spoken or written language, often beyond the level of the sentence. It enables researchers to scrutinize how text and talk articulate social and political interactions and hierarchies.

Insight can be garnered from different conversations, institutional text, and media coverage to understand how topics are addressed or framed within a specific social context (Jorgensen & Phillips, 2002).

Example of Discourse Analysis

Title: The construction of teacher identities in educational policy documents: A critical discourse analysis

Citation: Thomas, S. (2005). The construction of teacher identities in educational policy documents: A critical discourse analysis. Critical Studies in Education, 46(2), 25-44.

Overview: The author examines how an education policy in one state of Australia positions teacher professionalism and teacher identities. While there are competing discourses about professional identity, the policy framework privileges a narrative that frames the ‘good’ teacher as one who accepts ever-tightening control and regulation over their professional practice.

Read my Full Guide on Discourse Analysis Here

13. Action Research

Definition: Action Research is a qualitative research technique that is employed to bring about change while simultaneously studying the process and results of that change.

This method involves a cyclical process of fact-finding, action, evaluation, and reflection (Greenwood & Levin, 2016).

Typically, Action Research is used in the fields of education, social sciences, and community development. The process isn’t just about resolving an issue but also about developing knowledge that can be used in the future to address similar or related problems.

The researcher plays an active role in the research process, which is normally broken down into four steps: 

  • developing a plan to improve what is currently being done
  • implementing the plan
  • observing the effects of the plan, and
  • reflecting upon these effects (Smith, 2010).
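The four steps above form a cycle rather than a one-pass sequence, which can be sketched as a loop. This Python sketch is purely schematic: the strings stand in for real planning, teaching, and observation activities, and the `cycles` parameter is a hypothetical simplification (real action research stops when reflection suggests the problem is resolved).

```python
def action_research(problem: str, cycles: int) -> list:
    """Schematic plan-act-observe-reflect loop."""
    reflections = []
    plan = f"initial plan for {problem}"             # 1. develop a plan
    for i in range(cycles):
        outcome = f"outcome of {plan}"               # 2. implement the plan
        observation = f"observed {outcome}"          # 3. observe the effects
        reflection = f"reflection on {observation}"  # 4. reflect on the effects
        reflections.append(reflection)
        plan = f"revised plan {i + 1} for {problem}" # reflection feeds the next cycle
    return reflections

reflections = action_research("improving boys' creative writing", cycles=2)
```

The design point the loop captures is that each reflection revises the plan for the next cycle, so the intervention and the knowledge about it develop together.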

Example of Action Research

Title: Using Digital Sandbox Gaming to Improve Creativity Within Boys’ Writing

Citation: Ellison, M., & Drew, C. (2020). Using digital sandbox gaming to improve creativity within boys’ writing. Journal of Research in Childhood Education, 34(2), 277-287.

Overview: This was a research study one of my research students completed in his own classroom under my supervision. He implemented a digital game-based approach to literacy teaching with boys and interviewed his students to see if the use of games as stimuli for storytelling helped draw them into the learning experience.

Read my Full Guide on Action Research Here

14. Semiotic Analysis

Definition: Semiotic Analysis is a qualitative method of research that interprets signs and symbols in communication to understand sociocultural phenomena. It stems from semiotics, the study of signs and symbols and their use or interpretation (Chandler, 2017).

In a Semiotic Analysis, signs (anything that represents something else) are interpreted based on their significance and the role they play in representing ideas.

This type of research often involves the examination of images, sounds, and word choice to uncover the embedded sociocultural meanings. For example, an advertisement for a car might be studied to learn more about societal views on masculinity or success (Berger, 2010).

Example of Semiotic Research

Title: Shielding the learned body: a semiotic analysis of school badges in New South Wales, Australia

Citation: Symes, C. (2023). Shielding the learned body: a semiotic analysis of school badges in New South Wales, Australia. Semiotica, 2023(250), 167-190.

Overview: This study examines school badges in New South Wales, Australia, and explores their significance through a semiotic analysis. The badges, which are part of the school’s visual identity, are seen as symbolic representations that convey meanings. The analysis reveals that these badges often draw on heraldic models, incorporating elements like colors, names, motifs, and mottoes that reflect local culture and history, thus connecting students to their national identity. Additionally, the study highlights how some schools have shifted from traditional badges to modern logos and slogans, reflecting a more business-oriented approach.

15. Qualitative Longitudinal Studies

Definition: Qualitative Longitudinal Studies are a research method that involves repeated observation of the same participants or cases over an extended period of time.

Unlike a snapshot perspective, this method aims to piece together individual histories and examine the influences and impacts of change (Neale, 2019).

Qualitative Longitudinal Studies provide an in-depth understanding of change as it happens, including changes in people’s lives, their perceptions, and their behaviors.

For instance, this method could be used to follow a group of students through their schooling years to understand the evolution of their learning behaviors and attitudes towards education (Saldaña, 2003).

Example of Qualitative Longitudinal Research

Title: Patient and caregiver perspectives on managing pain in advanced cancer: a qualitative longitudinal study

Citation: Hackett, J., Godfrey, M., & Bennett, M. I. (2016). Patient and caregiver perspectives on managing pain in advanced cancer: a qualitative longitudinal study. Palliative Medicine, 30(8), 711-719.

Overview: This article examines how patients and their caregivers manage pain in advanced cancer through a qualitative longitudinal study. The researchers interviewed patients and caregivers at two different time points and collected audio diaries to gain insights into their experiences, making this study longitudinal.

Read my Full Guide on Longitudinal Research Here

16. Open-Ended Surveys

Definition: Open-Ended Surveys are a type of qualitative research method where respondents provide answers in their own words. Unlike closed-ended surveys, which limit responses to predefined options, open-ended surveys allow for expansive, unprompted explanations (Fink, 2013).

Open-ended surveys are commonly used in a range of fields, from market research to social studies. As they don’t force respondents into predefined response categories, these surveys help to draw out rich, detailed data that might uncover new variables or ideas.

For example, an open-ended survey might be used to understand customer opinions about a new product or service (Lavrakas, 2008).

Contrast this with a quantitative closed-ended survey, such as a Likert scale. While a closed-ended survey can, in principle, yield generalizable data, it is restricted to the questions on the questionnaire, meaning new and surprising insights cannot emerge from the results in the same way.
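The contrast can be made concrete with a small, purely illustrative sketch in Python (the survey question and responses are invented):

```python
# Hypothetical sketch: the same topic surveyed two ways.
# A Likert item yields a number; an open-ended item yields text that can
# surface themes the researcher never anticipated.

likert_responses = [4, 5, 3, 4, 2]  # "How satisfied are you?" (1-5 scale)
open_responses = [
    "I like the product but the packaging is wasteful",
    "Great, though delivery took too long",
    "Fine overall; I wish it came in more colours",
]

# Quantitative summary: a single mean, bounded by the scale we chose in advance.
mean_satisfaction = sum(likert_responses) / len(likert_responses)

# Qualitative reading: unprompted topics (packaging, delivery, colours) emerge
# only because respondents answered in their own words.
unanticipated_topics = [r for r in open_responses
                        if "packaging" in r or "delivery" in r]
```

The closed-ended data compresses neatly but can only ever reflect the dimensions the researcher thought to ask about; the open-ended data is messier but can reveal variables the study design did not anticipate.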

Example of Open-Ended Survey Research

Title: Advantages and disadvantages of technology in relationships: Findings from an open-ended survey

Citation: Hertlein, K. M., & Ancheta, K. (2014). Advantages and disadvantages of technology in relationships: Findings from an open-ended survey. The Qualitative Report, 19(11), 1-11.

Overview: This article examines the advantages and disadvantages of technology in couple relationships through an open-ended survey method. Researchers analyzed responses from 410 undergraduate students to understand how technology affects relationships. They found that technology can contribute to relationship development, management, and enhancement, but it can also create challenges such as distancing, lack of clarity, and impaired trust.

17. Naturalistic Observation

Definition: Naturalistic Observation is a type of qualitative research method that involves observing individuals in their natural environments without interference or manipulation by the researcher.

Naturalistic observation is often used when conducting research on behaviors that cannot be controlled or manipulated in a laboratory setting (Kawulich, 2005).

It is frequently used in the fields of psychology, sociology, and anthropology. For instance, to understand the social dynamics in a schoolyard, a researcher could spend time observing the children interact during their recess, noting their behaviors, interactions, and conflicts without imposing their presence on the children’s activities (Forsyth, 2010).

Example of Naturalistic Observation Research

Title: Dispositional mindfulness in daily life: A naturalistic observation study

Citation: Kaplan, D. M., Raison, C. L., Milek, A., Tackman, A. M., Pace, T. W., & Mehl, M. R. (2018). Dispositional mindfulness in daily life: A naturalistic observation study. PloS One, 13(11), e0206029.

Overview: The researchers conducted two studies: one exploring assumptions about mindfulness and behavior, and the other using naturalistic observation to examine actual behavioral manifestations of mindfulness. They found that trait mindfulness is associated with a heightened perceptual focus in conversations, suggesting that being mindful is expressed primarily through sharpened attention rather than observable behavioral or social differences.

Read my Full Guide on Naturalistic Observation Here

18. Photo-Elicitation

Definition: Photo-elicitation utilizes photographs as a means to trigger discussions and evoke responses during interviews. This strategy aids in bringing out topics of discussion that may not emerge through verbal prompting alone (Harper, 2002).

Traditionally, Photo-Elicitation has been useful in various fields such as education, psychology, and sociology. The method involves the researcher or participants taking photographs, which are then used as prompts for discussion.

For instance, a researcher studying urban environmental issues might invite participants to photograph areas in their neighborhood that they perceive as environmentally detrimental, and then discuss each photo in depth (Clark-Ibáñez, 2004).

Example of Photo-Elicitation Research

Title: Early adolescent food routines: A photo-elicitation study

Citation: Green, E. M., Spivak, C., & Dollahite, J. S. (2021). Early adolescent food routines: A photo-elicitation study. Appetite, 158.

Overview: This study focused on early adolescents (ages 10-14) and their food routines. Researchers conducted in-depth interviews using a photo-elicitation approach, where participants took photos related to their food choices and experiences. Through analysis, the study identified various routines and three main themes: family, settings, and meals/foods consumed, revealing how early adolescents view and are influenced by their eating routines.

Features of Qualitative Research

Qualitative research is a research method focused on understanding the meaning individuals or groups attribute to a social or human problem (Creswell, 2013).

Some key features of this method include:

  • Naturalistic Inquiry: Qualitative research happens in the natural setting of the phenomena, aiming to understand “real world” situations (Patton, 2015). This immersion in the field or subject allows the researcher to gather a deep understanding of the subject matter.
  • Emphasis on Process: It aims to understand how events unfold over time rather than focusing solely on outcomes (Merriam & Tisdell, 2015). The process-oriented nature of qualitative research allows researchers to investigate sequences, timing, and changes.
  • Interpretive: It involves interpreting and making sense of phenomena in terms of the meanings people assign to them (Denzin & Lincoln, 2011). This interpretive element allows for rich, nuanced insights into human behavior and experiences.
  • Holistic Perspective: Qualitative research seeks to understand the whole phenomenon rather than focusing on individual components (Creswell, 2013). It emphasizes the complex interplay of factors, providing a richer, more nuanced view of the research subject.
  • Prioritizes Depth over Breadth: Qualitative research favors depth of understanding over breadth, typically involving a smaller but more focused sample size (Hennink, Hutter, & Bailey, 2020). This enables detailed exploration of the phenomena of interest, often leading to rich and complex data.

Qualitative vs Quantitative Research

Qualitative research centers on exploring and understanding the meaning individuals or groups attribute to a social or human problem (Creswell, 2013).

It involves an in-depth approach to the subject matter, aiming to capture the richness and complexity of human experience.

Examples include conducting interviews, observing behaviors, or analyzing text and images.

There are strengths inherent in this approach. In its focus on understanding subjective experiences and interpretations, qualitative research can yield rich and detailed data that quantitative research may overlook (Denzin & Lincoln, 2011).

Additionally, qualitative research is adaptive, allowing the researcher to respond to new directions and insights as they emerge during the research process.

However, there are also limitations. Because of the interpretive nature of this research, findings may not be generalizable to a broader population (Marshall & Rossman, 2014). Well-designed quantitative research, on the other hand, can be generalizable.

Moreover, the reliability and validity of qualitative data can be challenging to establish due to its subjective nature, unlike quantitative research, which aims for greater objectivity.

Compare Qualitative and Quantitative Research Methodologies in This Guide Here

In conclusion, qualitative research methods provide distinctive ways to explore social phenomena and understand nuances that quantitative approaches might overlook. Each method, from Ethnography to Photo-Elicitation, presents its own strengths and weaknesses, but all offer valuable means of investigating complex, real-world situations. The goal for the researcher is not to find a definitive tool, but to employ the method best suited to their research questions and the context at hand (Almalki, 2016). Above all, these methods underscore the richness of human experience and deepen our understanding of the world around us.

References

Angrosino, M. (2007). Doing ethnographic and observational research. Sage Publications.

Areni, C. S., & Kim, D. (1994). The influence of in-store lighting on consumers’ examination of merchandise in a wine store. International Journal of Research in Marketing, 11 (2), 117-125.

Barker, C., Pistrang, N., & Elliott, R. (2016). Research Methods in Clinical Psychology: An Introduction for Students and Practitioners. John Wiley & Sons.

Baxter, P. & Jack, S. (2008). Qualitative case study methodology: Study design and implementation for novice researchers. The Qualitative Report, 13 (4), 544-559.

Berger, A. A. (2010). The Objects of Affection: Semiotics and Consumer Culture. Palgrave Macmillan.

Bevan, M. T. (2014). A method of phenomenological interviewing. Qualitative health research, 24 (1), 136-144.

Birks, M., & Mills, J. (2015). Grounded theory: A practical guide. Sage Publications.

Bryman, A. (2015). The SAGE Handbook of Qualitative Research. Sage Publications.

Chandler, D. (2017). Semiotics: The Basics. Routledge.

Charmaz, K. (2014). Constructing grounded theory. Sage Publications.

Cheek, J. (2004). At the margins? Discourse analysis and qualitative research. Qualitative Health Research, 14(8), 1140-1150.

Clark-Ibáñez, M. (2004). Framing the social world with photo-elicitation interviews. American Behavioral Scientist, 47(12), 1507-1527.

Creswell, J. W. (2013). Research Design: Qualitative, Quantitative and Mixed Methods Approaches. Sage Publications.

Creswell, J. W., & Creswell, J. D. (2017). Research design: Qualitative, quantitative, and mixed methods approaches. Sage publications.

Crowe, S., Cresswell, K., Robertson, A., Huby, G., Avery, A., & Sheikh, A. (2011). The case study approach. BMC Medical Research Methodology, 11(100), 1-9.

Denzin, N. K., & Lincoln, Y. S. (2011). The Sage Handbook of Qualitative Research. Sage.

Dewalt, K. M., & Dewalt, B. R. (2011). Participant observation: A guide for fieldworkers. Rowman Altamira.

Doody, O., Slevin, E., & Taggart, L. (2013). Focus group interviews in nursing research: part 1. British Journal of Nursing, 22(1), 16-19.

Durham, A. (2019). Autoethnography. In P. Atkinson (Ed.), Qualitative Research Methods. Oxford University Press.

Duriau, V. J., Reger, R. K., & Pfarrer, M. D. (2007). A content analysis of the content analysis literature in organization studies: Research themes, data sources, and methodological refinements. Organizational Research Methods, 10(1), 5-34.

Evans, J. (2010). The Everyday Lives of Men: An Ethnographic Investigation of Young Adult Male Identity. Peter Lang.

Farrall, S. (2006). What is qualitative longitudinal research? Papers in Social Research Methods, Qualitative Series, No.11, London School of Economics, Methodology Institute.

Fielding, J., & Fielding, N. (2008). Synergy and synthesis: integrating qualitative and quantitative data. The SAGE handbook of social research methods, 555-571.

Fink, A. (2013). How to conduct surveys: A step-by-step guide. SAGE.

Forsyth, D. R. (2010). Group Dynamics. Wadsworth Cengage Learning.

Fugard, A. J. B., & Potts, H. W. W. (2015). Supporting thinking on sample sizes for thematic analyses: A quantitative tool. International Journal of Social Research Methodology, 18 (6), 669–684.

Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Aldine de Gruyter.

Gray, J. R., Grove, S. K., & Sutherland, S. (2017). Burns and Grove’s the Practice of Nursing Research E-Book: Appraisal, Synthesis, and Generation of Evidence. Elsevier Health Sciences.

Greenwood, D. J., & Levin, M. (2016). Introduction to action research: Social research for social change. SAGE.

Harper, D. (2002). Talking about pictures: A case for photo elicitation. Visual Studies, 17 (1), 13-26.

Heinonen, T. (2012). Making Sense of the Social: Human Sciences and the Narrative Turn. Rozenberg Publishers.

Heisley, D. D., & Levy, S. J. (1991). Autodriving: A photoelicitation technique. Journal of Consumer Research, 18 (3), 257-272.

Hennink, M. M., Hutter, I., & Bailey, A. (2020). Qualitative Research Methods. SAGE Publications Ltd.

Hsieh, H. F., & Shannon, S. E. (2005). Three Approaches to Qualitative Content Analysis. Qualitative Health Research, 15 (9), 1277–1288.

Jorgensen, D. L. (2015). Participant Observation. In Emerging Trends in the Social and Behavioral Sciences: An Interdisciplinary, Searchable, and Linkable Resource. John Wiley & Sons, Inc.

Jorgensen, M., & Phillips, L. (2002). Discourse Analysis as Theory and Method. SAGE.

Josselson, R. (2011). Narrative research: Constructing, deconstructing, and reconstructing story. In Five ways of doing qualitative analysis. Guilford Press.

Kawulich, B. B. (2005). Participant observation as a data collection method. Forum: Qualitative Social Research, 6 (2).

Khan, S. (2014). Qualitative Research Method: Grounded Theory. Journal of Basic and Clinical Pharmacy, 5 (4), 86-88.

Koshy, E., Koshy, V., & Waterman, H. (2010). Action Research in Healthcare. SAGE.

Krippendorff, K. (2013). Content Analysis: An Introduction to its Methodology. SAGE.

Lannon, J., & Cooper, P. (2012). Humanistic Advertising: A Holistic Cultural Perspective. International Journal of Advertising, 15 (2), 97–111.

Lavrakas, P. J. (2008). Encyclopedia of survey research methods. SAGE Publications.

Lieblich, A., Tuval-Mashiach, R., & Zilber, T. (2008). Narrative research: Reading, analysis and interpretation. Sage Publications.

Mackey, A., & Gass, S. M. (2015). Second language research: Methodology and design. Routledge.

Marshall, C., & Rossman, G. B. (2014). Designing qualitative research. Sage publications.

McAdams, D. P., Josselson, R., & Lieblich, A. (2006). Identity and story: Creating self in narrative. American Psychological Association.

Merriam, S. B., & Tisdell, E. J. (2015). Qualitative Research: A Guide to Design and Implementation. Jossey-Bass.

Mick, D. G. (1986). Consumer Research and Semiotics: Exploring the Morphology of Signs, Symbols, and Significance. Journal of Consumer Research, 13 (2), 196-213.

Morgan, D. L. (2010). Focus groups as qualitative research. Sage Publications.

Mulhall, A. (2003). In the field: notes on observation in qualitative research. Journal of Advanced Nursing, 41 (3), 306-313.

Neale, B. (2019). What is Qualitative Longitudinal Research? Bloomsbury Publishing.

Nolan, L. B., & Renderos, T. B. (2012). A focus group study on the influence of fatalism and religiosity on cancer risk perceptions in rural, eastern North Carolina. Journal of religion and health, 51 (1), 91-104.

Padilla-Díaz, M. (2015). Phenomenology in educational qualitative research: Philosophy as science or philosophical science? International Journal of Educational Excellence, 1 (2), 101-110.

Parker, I. (2014). Discourse dynamics: Critical analysis for social and individual psychology. Routledge.

Patton, M. Q. (2015). Qualitative research & evaluation methods: Integrating theory and practice. Sage Publications.

Polkinghorne, D. E. (2013). Narrative configuration in qualitative analysis. In Life history and narrative. Routledge.

Puts, M. T., Tapscott, B., Fitch, M., Howell, D., Monette, J., Wan-Chow-Wah, D., Krzyzanowska, M., Leighl, N. B., Springall, E., & Alibhai, S. (2014). Factors influencing adherence to cancer treatment in older adults with cancer: a systematic review. Annals of oncology, 25 (3), 564-577.

Qu, S. Q., & Dumay, J. (2011). The qualitative research interview. Qualitative Research in Accounting & Management.

Ali, J., & Bhaskar, S. B. (2016). Basic statistical tools in research and data analysis. Indian Journal of Anaesthesia, 60 (9), 662–669.

Rosenbaum, M. S. (2017). Exploring the social supportive role of third places in consumers’ lives. Journal of Service Research, 20 (1), 26-42.

Saldaña, J. (2003). Longitudinal Qualitative Research: Analyzing Change Through Time. AltaMira Press.

Saldaña, J. (2014). The Coding Manual for Qualitative Researchers. SAGE.

Shernoff, D. J., Csikszentmihalyi, M., Shneider, B., & Shernoff, E. S. (2003). Student engagement in high school classrooms from the perspective of flow theory. School Psychology Quarterly, 18 (2), 158-176.

Smith, J. A. (2015). Qualitative Psychology: A Practical Guide to Research Methods. Sage Publications.

Smith, M. K. (2010). Action Research. The encyclopedia of informal education.

Sue, V. M., & Ritter, L. A. (2012). Conducting online surveys. SAGE Publications.

Van Auken, P. M., Frisvoll, S. J., & Stewart, S. I. (2010). Visualising community: using participant-driven photo-elicitation for research and application. Local Environment, 15 (4), 373-388.

Van Voorhis, F. L., & Morgan, B. L. (2007). Understanding Power and Rules of Thumb for Determining Sample Sizes. Tutorials in Quantitative Methods for Psychology, 3 (2), 43–50.

Wodak, R., & Meyer, M. (2015). Methods of Critical Discourse Analysis. SAGE.

Zuber-Skerritt, O. (2018). Action research for developing educational theories and practices. Routledge.


Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.



NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2023 Jan-.


Qualitative study.

Steven Tenny; Janelle M. Brannan; Grace D. Brannan.


Last Update: September 18, 2022.

Introduction

Qualitative research is a type of research that explores and provides deeper insights into real-world problems. [1] Instead of collecting numerical data points or intervening with treatments as in quantitative research, qualitative research helps generate hypotheses and further investigate and understand quantitative data. Qualitative research gathers participants' experiences, perceptions, and behavior. It answers the hows and whys instead of how many or how much. It can be structured as a stand-alone study relying purely on qualitative data, or as part of mixed-methods research that combines qualitative and quantitative data. This review introduces readers to some basic concepts, definitions, terminology, and applications of qualitative research.

Qualitative research, at its core, asks open-ended questions whose answers are not easily put into numbers, such as ‘how’ and ‘why’. [2] Due to the open-ended nature of the research questions, qualitative research design is often not linear in the way quantitative design is. [2] One of the strengths of qualitative research is its ability to explain processes and patterns of human behavior that can be difficult to quantify. [3] Phenomena such as experiences, attitudes, and behaviors are difficult to capture accurately in numerical terms, whereas a qualitative approach allows participants themselves to explain how, why, or what they were thinking, feeling, and experiencing at a certain time or during an event of interest. Quantifying qualitative data is certainly possible, but at its core qualitative analysis looks for themes and patterns that can be difficult to quantify, and it is important to ensure that the context and narrative of qualitative work are not lost by trying to quantify something that is not meant to be quantified.

However, while qualitative research is sometimes placed in opposition to quantitative research, as if the two approaches, and the philosophical paradigms associated with each, necessarily ‘compete’ against each other, they are neither opposites nor incompatible. [4] For instance, qualitative research can help expand and deepen understanding of data or results obtained from quantitative analysis. For example, say a quantitative analysis has determined that there is a correlation between length of stay and level of patient satisfaction; qualitative work could then explore why this correlation exists. This dual-focus scenario shows one way in which qualitative and quantitative research can be integrated.

Examples of Qualitative Research Approaches


Ethnography

Ethnography as a research design has its origins in social and cultural anthropology, and involves the researcher being directly immersed in the participant’s environment. [2] Through this immersion, the ethnographer can use a variety of data collection techniques with the aim of being able to produce a comprehensive account of the social phenomena that occurred during the research period. [2] That is to say, the researcher’s aim with ethnography is to immerse themselves into the research population and come out of it with accounts of actions, behaviors, events, etc. through the eyes of someone involved in the population. Direct involvement of the researcher with the target population is one benefit of ethnographic research because it can then be possible to find data that is otherwise very difficult to extract and record.

Grounded Theory

Grounded Theory is the “generation of a theoretical model through the experience of observing a study population and developing a comparative analysis of their speech and behavior.” [5] As opposed to quantitative research, which is deductive and tests or verifies an existing theory, grounded theory research is inductive and therefore lends itself to research aiming to study social interactions or experiences. [3] [2] In essence, Grounded Theory’s goal is to explain, for example, how and why an event occurs or how and why people might behave a certain way. Through observing the population, a researcher using the Grounded Theory approach can then develop a theory to explain the phenomena of interest.


Phenomenology

Phenomenology is defined as the “study of the meaning of phenomena or the study of the particular”. [5] At first glance, Grounded Theory and Phenomenology might seem quite similar, but upon careful examination the differences can be seen. At its core, phenomenology investigates experiences from the perspective of the individual. [2] Phenomenology looks into the ‘lived experiences’ of the participants and aims to examine how and why participants behaved a certain way, from their perspective. Herein lies one of the main differences between the two: Grounded Theory aims to develop a theory for social phenomena through an examination of various data sources, whereas Phenomenology focuses on describing and explaining an event or phenomenon from the perspective of those who have experienced it.

Narrative Research

One of qualitative research’s strengths lies in its ability to tell a story, often from the perspective of those directly involved in it. Reporting on qualitative research involves including details and descriptions of the setting involved and quotes from participants. This detail is called ‘thick’ or ‘rich’ description and is a strength of qualitative research. Narrative research is rife with the possibilities of ‘thick’ description as this approach weaves together a sequence of events, usually from just one or two individuals, in the hopes of creating a cohesive story, or narrative. [2] While it might seem like a waste of time to focus on such a specific, individual level, understanding one or two people’s narratives for an event or phenomenon can help to inform researchers about the influences that helped shape that narrative. The tension or conflict of differing narratives can be “opportunities for innovation”. [2]

Research Paradigm

Research paradigms are the assumptions, norms, and standards that underpin different approaches to research. Essentially, research paradigms are the ‘worldview’ that informs research. [4] It is valuable for researchers, both qualitative and quantitative, to understand what paradigm they are working within, because understanding the theoretical basis of a research paradigm allows researchers to see the strengths and weaknesses of the approach being used and adjust accordingly. Different paradigms have different ontologies and epistemologies. Ontology is defined as the “assumptions about the nature of reality”, whereas epistemology is defined as the “assumptions about the nature of knowledge”, that inform the work researchers do. [2] Understanding the ontological and epistemological foundations of the paradigm allows for a full understanding of the approach being used and the assumptions that underpin it as a whole. Further, it is crucial that researchers understand their own ontological and epistemological assumptions about the world in general, because those assumptions will necessarily shape how they interact with research. A discussion of research paradigms is not complete without describing positivist, postpositivist, and constructivist philosophies.

Positivist vs Postpositivist

To further understand qualitative research, we need to discuss positivist and postpositivist frameworks. Positivism is the philosophy that the scientific method can and should be applied to the social as well as the natural sciences. [4] Essentially, positivist thinking insists that the social sciences should use natural science methods in their research, which stems from the positivist ontology that an objective reality exists fully independent of our perception of the world as individuals. Quantitative research is rooted in positivist philosophy, which can be seen in the value it places on concepts such as causality, generalizability, and replicability.

Conversely, postpositivists argue that social reality can never be one hundred percent explained, but it can be approximated. [4] Indeed, qualitative researchers have long insisted that there are “fundamental limits to the extent to which the methods and procedures of the natural sciences could be applied to the social world”, and postpositivist philosophy is therefore often associated with qualitative research. [4] An example of positivist versus postpositivist values in research might be that positivist philosophies value hypothesis-testing, whereas postpositivist philosophies value the ability to formulate a substantive theory.


Constructivism

Constructivism is a subcategory of postpositivism. Most researchers invested in postpositivist research are also constructivist, meaning they think there is no objective external reality; rather, reality is constructed. Constructivism is a theoretical lens that emphasizes the dynamic nature of our world. “Constructivism contends that individuals’ views are directly influenced by their experiences, and it is these individual experiences and views that shape their perspective of reality”. [6] Essentially, constructivist thought focuses on how ‘reality’ is not a fixed certainty: experiences, interactions, and backgrounds give people a unique view of the world. Constructivism contends, unlike positivist views, that there is not necessarily an ‘objective’ reality we all experience. This is the ‘relativist’ ontological view that reality and the world we live in are dynamic and socially constructed. Therefore, qualitative scientific knowledge can be inductive as well as deductive. [4]

So why is it important to understand the differences in assumptions that different philosophies and approaches to research have? Fundamentally, the assumptions underpinning the research tools a researcher selects provide an overall base for the assumptions the rest of the research will have and can even change the role of the researcher themselves. [2] For example, is the researcher an ‘objective’ observer such as in positivist quantitative work? Or is the researcher an active participant in the research itself, as in postpositivist qualitative work? Understanding the philosophical base of the research undertaken allows researchers to fully understand the implications of their work and their role within the research, as well as reflect on their own positionality and bias as it pertains to the research they are conducting.

Data Sampling 

The better the sample represents the intended study population, the more likely the researcher is to encompass the varying factors at play. The following are examples of participant sampling and selection: [7]

  • Purposive sampling – selection based on the researcher’s rationale about which participants will be most informative.
  • Criterion sampling – selection based on pre-identified factors.
  • Convenience sampling – selection based on availability.
  • Snowball sampling – selection by referral from other participants or people who know potential participants.
  • Extreme case sampling – targeted selection of rare cases.
  • Typical case sampling – selection based on regular or average participants.
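As a sketch of how the mechanics of these strategies differ, the following Python snippet simulates snowball sampling as a breadth-first walk through participant referrals. The participant names and referral links are invented for illustration only; real recruitment is, of course, a social process rather than a script.

```python
from collections import deque

# Hypothetical participant pool and referral links (who knows whom).
referrals = {
    "Ana": ["Ben", "Cara"],
    "Ben": ["Dev"],
    "Cara": ["Ana", "Eli"],
    "Dev": [],
    "Eli": ["Fay"],
    "Fay": [],
}

def snowball_sample(seed, max_size):
    """Recruit by referral, breadth-first, starting from one seed participant."""
    sample, queue, seen = [], deque([seed]), {seed}
    while queue and len(sample) < max_size:
        person = queue.popleft()
        sample.append(person)
        for contact in referrals.get(person, []):
            if contact not in seen:
                seen.add(contact)
                queue.append(contact)
    return sample

print(snowball_sample("Ana", 4))  # ['Ana', 'Ben', 'Cara', 'Dev']
```

Note how the sample is shaped entirely by who the seed participant knows, which is exactly why snowball sampling suits hard-to-reach populations but cannot claim representativeness.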

Data Collection and Analysis

Qualitative research uses several techniques, including interviews, focus groups, and observation. [1] [2] [3] Interviews may be unstructured, with open-ended questions on a topic to which the interviewer adapts based on the responses, or structured, with a predetermined set of questions that every participant is asked. Interviews are usually conducted one on one and are appropriate for sensitive topics or topics needing in-depth exploration. Focus groups are often held with 8–12 target participants and are used when group dynamics and collective views on a topic are desired. Researchers can be participant-observers, sharing the experiences of the subjects, or non-participant (detached) observers.

While quantitative research design prescribes a controlled environment for data collection, qualitative data collection may take place in a central location or in the participants’ own environment, depending on the study goals and design. Qualitative research can generate large amounts of data. Data are transcribed and may then be coded manually or with Computer-Assisted Qualitative Data Analysis Software (CAQDAS) such as ATLAS.ti or NVivo. [8] [9] [10]
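The coding step can be pictured with a small Python sketch. The excerpts and codes below are invented, and real projects would typically use CAQDAS rather than a script, but the underlying tallying logic is the same: label excerpts with codes, then count which codes recur.

```python
from collections import Counter

# Hypothetical coded transcript: each excerpt has been assigned one or more
# codes, as a researcher would do manually or in CAQDAS software.
coded_excerpts = [
    ("I quit because my doctor warned me.", ["health", "authority"]),
    ("All my friends smoked at the park.", ["peer pressure"]),
    ("Cigarettes got too expensive.", ["cost"]),
    ("My friends kept offering me one.", ["peer pressure"]),
]

# Tally code frequencies to see which ideas recur across the data.
code_counts = Counter(code for _, codes in coded_excerpts for code in codes)
print(code_counts.most_common())
```

Frequencies alone are not a qualitative analysis, of course; they simply point the researcher toward excerpts worth deeper interpretation.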

After the coding process, qualitative research results can take various formats: a synthesis and interpretation presented with excerpts from the data, [11] or themes and the development of a theory or model.


To standardize and facilitate the dissemination of qualitative research outcomes, the healthcare team can use two reporting standards. The Consolidated Criteria for Reporting Qualitative Research or COREQ is a 32-item checklist for interviews and focus groups. [12] The Standards for Reporting Qualitative Research (SRQR) is a checklist covering a wider range of qualitative research. [13]

Examples of Application

Many times, a research question will start with qualitative research. The qualitative research helps generate a hypothesis that can then be tested with quantitative methods. After the data are collected and analyzed with quantitative methods, a set of qualitative methods can be used to dive deeper into the data for a better understanding of what the numbers truly mean and their implications. The qualitative methods can then help clarify the quantitative data and refine the hypothesis for future research. Furthermore, qualitative research lets researchers explore subjects that are poorly studied with quantitative methods, including opinions, individuals’ actions, and social science questions.

A good qualitative study design starts with a clearly defined goal or objective. The target population needs to be specified. A method for obtaining information from the study population must be carefully detailed to ensure no part of the target population is omitted. A collection method should be selected that obtains the desired information without overly limiting the collected data, because the information sought is often not well compartmentalized. Finally, the design should ensure adequate methods for analyzing the data. An example may help clarify some of these aspects of qualitative research.

A researcher wants to decrease the number of teenagers who smoke in their community. The researcher could begin by asking current teen smokers why they started smoking through structured or unstructured interviews (qualitative research). The researcher can also get together a group of current teenage smokers and conduct a focus group to help brainstorm factors that may have prevented them from starting to smoke (qualitative research).

In this example, the researcher has used qualitative research methods (interviews and focus groups) to generate a list of ideas of both why teens start to smoke as well as factors that may have prevented them from starting to smoke. Next, the researcher compiles this data. The research found that, hypothetically, peer pressure, health issues, cost, being considered “cool,” and rebellious behavior all might increase or decrease the likelihood of teens starting to smoke.

The researcher creates a survey asking teen participants to rank how important each of the above factors is in either starting smoking (for current smokers) or not smoking (for current non-smokers). This survey provides specific numbers (ranked importance of each factor) and is thus a quantitative research tool.
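One plausible way to aggregate such ranked survey data is to average each factor's rank across participants, where a lower mean rank indicates greater importance. The responses below are invented to illustrate the arithmetic:

```python
# Hypothetical survey: each participant ranks five factors (1 = most important).
factors = ["peer pressure", "health", "cost", "being cool", "rebellion"]
responses = [
    [1, 2, 4, 3, 5],
    [2, 1, 5, 3, 4],
    [1, 3, 4, 2, 5],
]

# Mean rank per factor: lower mean rank = more important overall.
mean_ranks = {
    factor: sum(resp[i] for resp in responses) / len(responses)
    for i, factor in enumerate(factors)
}

for factor, rank in sorted(mean_ranks.items(), key=lambda kv: kv[1]):
    print(f"{factor}: {rank:.2f}")
```

In this invented data, peer pressure comes out as the most important factor, which mirrors the hypothetical finding discussed below.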

The researcher can use the results of the survey to focus efforts on the one or two highest-ranked factors. Let us say the researcher found that health was the major factor that keeps teens from starting to smoke, and peer pressure was the major factor that contributes to teens starting to smoke. The researcher can go back to qualitative research methods to dive deeper into each of these for more information. The researcher wants to focus on how to keep teens from starting to smoke, so they focus on the peer pressure aspect.

The researcher can conduct interviews and/or focus groups (qualitative research) about what types and forms of peer pressure are commonly encountered, where the peer pressure comes from, and where smoking first starts. The researcher hypothetically finds that peer pressure often occurs after school at the local teen hangouts, mostly the local park. The researcher also hypothetically finds that peer pressure comes from older, current smokers who provide the cigarettes.

The researcher could further explore through observation at the local teen hangouts (qualitative research), taking notes on who is smoking, who is not, and what observable factors are at play in the peer pressure to smoke. The researcher finds a local park where many local teenagers hang out and sees that a shady, overgrown area of the park is where the smokers tend to gather. The researcher notes that the smoking teenagers buy their cigarettes from a local convenience store adjacent to the park, where the clerk does not check identification before selling cigarettes. These observations fall under qualitative research.

If the researcher returns to the park and counts how many individuals smoke in each region of the park, this numerical data would be quantitative research. Based on the researcher's efforts thus far, they conclude that local teen smoking and teenagers who start to smoke may decrease if there are fewer overgrown areas of the park and the local convenience store does not sell cigarettes to underage individuals.

The researcher could try to have the parks department reassess the shady areas to make them less conducive to smoking, or identify how to limit the convenience store’s sales of cigarettes to underage individuals. The researcher would then cycle back to qualitative methods, asking the at-risk population for their perceptions of the changes and what factors are still at play, as well as quantitative research on teen smoking rates in the community, the incidence of new teen smokers, and other measures. [14] [15]

Qualitative research functions as a standalone research design or in combination with quantitative research to enhance our understanding of the world. Qualitative research uses techniques including structured and unstructured interviews, focus groups, and participant observation to not only help generate hypotheses which can be more rigorously tested with quantitative research but also to help researchers delve deeper into the quantitative research numbers, understand what they mean, and understand what the implications are.  Qualitative research provides researchers with a way to understand what is going on, especially when things are not easily categorized. [16]

Issues of Concern

As discussed in the sections above, quantitative and qualitative work differ in many ways, including the criteria for evaluating them. There are four well-established criteria for evaluating quantitative data: internal validity, external validity, reliability, and objectivity. The corresponding concepts in qualitative research are credibility, transferability, dependability, and confirmability. [4] [11] The pairs are listed below, with the quantitative concept on the left and the qualitative concept on the right:

  • Internal validity – Credibility
  • External validity – Transferability
  • Reliability – Dependability
  • Objectivity – Confirmability

In conducting qualitative research, ensuring these concepts are satisfied and well thought out can prevent potential issues from arising. For example, just as a researcher ensures that their quantitative study is internally valid, so should qualitative researchers ensure that their work has credibility.

Indicators such as triangulation and peer examination can help evaluate the credibility of qualitative work.

  • Triangulation: Triangulation involves using multiple methods of data collection to increase the likelihood of getting a reliable and accurate result. In our above magic example, the result would be more reliable by also interviewing the magician, back-stage hand, and the person who "vanished." In qualitative research, triangulation can include using telephone surveys, in-person surveys, focus groups, and interviews as well as surveying an adequate cross-section of the target demographic.
  • Peer examination: Results can be reviewed by a peer to ensure the data is consistent with the findings.

‘Thick’ or ‘rich’ description can be used to evaluate the transferability of qualitative research whereas using an indicator such as an audit trail might help with evaluating the dependability and confirmability.

  • Thick or rich description: a detailed and thorough account of the setting, the procedures, and quotes from participants in the research. [5] Thick descriptions include a detailed explanation of how the study was carried out and are detailed enough to allow readers to draw conclusions and interpret the data themselves, which can help with transferability and replicability.
  • Audit trail: An audit trail provides a documented set of steps of how the participants were selected and the data was collected. The original records of information should also be kept (e.g., surveys, notes, recordings).

One issue of concern that qualitative researchers should take into consideration is observation bias. Here are a few examples:

  • Hawthorne effect: the change in participant behavior when participants know they are being observed. If a researcher wants to identify factors that contribute to employee theft and tells the employees they will be watched, one would expect employee behavior to change once they know they are being observed.
  • Observer-expectancy effect: some participants change their behavior or responses to satisfy what they perceive to be the researcher's desired result. This usually happens unconsciously, so it is important to eliminate or limit the transmission of the researcher's views to participants.
  • Artificial scenario effect: Some qualitative research occurs in artificial scenarios and/or with preset goals. In such situations, the information may not be accurate because of the artificial nature of the scenario. The preset goals may limit the qualitative information obtained.
Clinical Significance

Qualitative research by itself or combined with quantitative research helps healthcare providers understand patients and the impact and challenges of the care they deliver. Qualitative research provides an opportunity to generate and refine hypotheses and delve deeper into the data generated by quantitative research. Qualitative research does not exist as an island apart from quantitative research, but as an integral part of research methods to be used for the understanding of the world around us. [17]

Enhancing Healthcare Team Outcomes

Qualitative research is important for all members of the healthcare team, as all are affected by it. Qualitative research may help develop a theory or a model for health research that can be further explored by quantitative research. Much of qualitative data acquisition is completed by numerous team members, including social workers, scientists, and nurses. Within each area of the medical field, there is copious ongoing qualitative research, including physician-patient interactions, nursing-patient interactions, patient-environment interactions, healthcare team function, and patient information delivery.


Disclosure: Steven Tenny declares no relevant financial relationships with ineligible companies.

Disclosure: Janelle Brannan declares no relevant financial relationships with ineligible companies.

Disclosure: Grace Brannan declares no relevant financial relationships with ineligible companies.

This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) ( ), which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal.

  • Cite this Page Tenny S, Brannan JM, Brannan GD. Qualitative Study. [Updated 2022 Sep 18]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2023 Jan-.



Qualitative Designs - Research Paper Example


  • Subject: Nursing
  • Type: Research Paper
  • Level: College
  • Pages: 2 (500 words)
  • Downloads: 5
  • Author: baileydamian

Extract of sample "Qualitative Designs"

Antidepressants and Suicide

For purposes of choosing a journal article emblematic of a quantitative research method, this author has settled upon one which sought to answer key questions relating to antidepressants and suicide, entitled “Suicide and Antidepressants: What Current Evidence Indicates.” Whereas many questions can be answered by differing manners of inquiry, certain medical research questions must utilize quantitative information to inform the reader/researcher as to the broad perspective, key trends, and verifiable extant information that exists regarding a given topic.

As a way of seeking to provide an example of just such a piece, this brief analysis will consider the given article, analyze it for its utilization of quantitative techniques, and seek to detail the type and manner of the content which was portrayed therein. The research question itself clearly has to do with the level of linkages that may be illustrated between the use of antidepression medication and an increased likelihood of suicide; something that has gained widespread media attention in the past several years.

With reference to the way the research was actually set up and carried out, the authors set up a series of tests measuring the epidemiological effect that antidepressants had on patients, up to and including what the researchers termed “suicidality.” Control and test groups were established, antidepressant medication was administered, and trials centered on interviews and close monitoring, in the form of diary and journal records of the frames of mind the patients expressed during the period, were employed.

In terms of what was hypothesized, the researchers believed there would be a small, perhaps unrecognizable, increase in suicidal thoughts among the adult patients, and a larger, but still statistically small, increase in suicidal thoughts among the adolescent test subjects. The study ran a series of 26 different trials with no fewer than 15 individuals participating in each trial. For purposes of the study, adults were identified as over the age of 18, with children represented from ages 9 to 18.

As the researchers expected, the rise in suicidal thoughts and motivations within the patients who fell into the category of “adult” was no different than that of the control group. In other words, no noticeable rise in suicidal thoughts or intentions was determined among study participants over the age of 18. However, the researchers found that the level of suicidal thoughts and overall “suicidality” exhibited within the group that could be considered children was higher than expected.

Although statistically small, the results reinforce the fact that the overall risk to children taking antidepressants is higher than previously thought, and a very real and measurable externality of pharmacology. The given study needed to be performed in a quantitative way, as any other form of analysis or measurement would not have yielded the same results with respect to the degree and extent to which the patients analyzed exhibited suicidality. Although quantitative and qualitative research both have their place within the field of medicine and scholastic research, the use of quantities and statistics to gain a level of inference on the way certain factors affect other factors is a necessity of modern medical research and is required in many different instances.

Reference

Nischal, A., Tripathi, A., Nischal, A., & Trivedi, J. K. (2012). Suicide and Antidepressants: What Current Evidence Indicates. Mens Sana Monographs, 10(1), 33-44. doi:10.4103/0973-1229.87287



Grad Coach

Research Design 101

Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023


Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “research design”. Here, we’ll guide you through the basics using practical examples, so that you can approach your research with confidence.

Overview: Research Design 101

  • What is research design?
  • Research design types for quantitative studies
  • Video explainer: quantitative research design
  • Research design types for qualitative studies
  • Video explainer: qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding different types of research designs is essential, as it helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of making misaligned choices in your methodology – especially your sampling, data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods, which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology. Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.


Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive, correlational, experimental, and quasi-experimental.

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.

The key defining attribute of this type of research design is that it purely describes the situation. In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics. By doing so, it can provide valuable insights and is often used as a precursor to other research design types.
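As a rough illustration of the descriptive step, the following Python sketch summarises invented Likert responses to one hypothetical survey item about smartphone use. Note that it only describes the data – counts, a mean, an agreement percentage – and tests no relationships between variables:

```python
from collections import Counter

# Invented Likert responses to "I check my phone within five minutes of waking"
# (1 = strongly disagree ... 5 = strongly agree).
responses = [5, 4, 4, 2, 5, 3, 4, 5, 1, 4]

n = len(responses)
counts = dict(sorted(Counter(responses).items()))
agree_pct = 100 * sum(1 for r in responses if r >= 4) / n

print(f"n = {n}")
print(f"response counts: {counts}")
print(f"mean = {sum(responses) / n:.1f}")
print(f"agree or strongly agree: {agree_pct:.0f}%")
```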

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them. In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful for developing predictions, and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which we’ll look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality. In other words, correlation does not equal causation. To establish causality, you’ll need to move into the realm of experimental design, coming up next…


Experimental Research Design

Experimental research design is used to determine whether there is a causal relationship between two or more variables. With this type of research design, you, as the researcher, manipulate one variable (the independent variable) while controlling other variables, and measure the effect on the outcome (the dependent variable). Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging, as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes, which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment. This means that the researcher needs to assign participants to different groups or conditions in a way that gives each participant an equal chance of being assigned to any group (note that this is not the same as random sampling). Doing so helps reduce the potential for bias and confounding variables. This need for random assignment can lead to ethics-related issues. For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.
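Random assignment itself is mechanically simple. This Python sketch (with invented participant IDs and the three conditions from the fertiliser example above) shuffles the pool and deals participants into equal-sized groups, so each participant is equally likely to land in any condition:

```python
import random

# Invented participant IDs; a fixed seed makes the sketch reproducible.
participants = [f"P{i:02d}" for i in range(1, 13)]
random.seed(42)
random.shuffle(participants)

# Deal the shuffled pool round-robin into the three conditions.
# This is random *assignment* of an existing pool, not random sampling.
conditions = ["fertiliser A", "fertiliser B", "no fertiliser"]
groups = {c: participants[i::len(conditions)] for i, c in enumerate(conditions)}

for condition, members in groups.items():
    print(condition, members)
```

Shuffle-then-deal guarantees balanced group sizes; assigning each participant by an independent coin flip would also be random but could leave the groups unequal.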

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations, but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables.

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives, emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed.
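The collect–analyse–validate loop described above can be sketched as follows. The interview codes and the `patience` threshold are invented for illustration, and real saturation judgements are of course qualitative rather than mechanical:

```python
# Hypothetical sketch of the grounded-theory saturation loop: keep
# analysing interviews until consecutive interviews add no new codes.

interview_codes = [  # codes identified in each successive interview (toy data)
    {"CBT", "social support"},
    {"herbal remedies", "CBT"},
    {"social support", "pacing"},
    {"CBT", "pacing"},    # nothing new -> saturation counter starts
    {"social support"},   # still nothing new -> saturation reached
    {"CBT"},
]

def collect_until_saturated(batches, patience=2):
    """Return (codebook, n_interviews_analysed) once `patience`
    consecutive interviews contribute no new codes."""
    codebook, quiet = set(), 0
    for n, codes in enumerate(batches, start=1):
        new = codes - codebook
        codebook |= codes
        quiet = 0 if new else quiet + 1
        if quiet >= patience:
            return codebook, n
    return codebook, len(batches)

codebook, n = collect_until_saturated(interview_codes)
print(sorted(codebook), n)
```

In this toy run, saturation is declared after the fifth interview, with four codes in the final codebook.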

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation, especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive, given the need for multiple rounds of data collection and analysis.


Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes.

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design has the ability to provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities. All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context.

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design, multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design, a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.


How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “But how do I decide which research design to use?”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to collect – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring relationships between variables, one of the experimental designs would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly.
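As a rough illustration of how these factors narrow your options, here is a toy, deliberately non-exhaustive decision aid. The aim labels are our own shorthand for this sketch, not a standard taxonomy:

```python
# Toy decision aid reflecting the factors above: data type and research
# aim narrow the field of candidate designs. Labels are illustrative only.

def candidate_designs(data_type, aim):
    if data_type == "quantitative":
        table = {
            "describe": ["descriptive"],
            "relate variables": ["correlational"],
            "test causality": ["experimental", "quasi-experimental"],
        }
    elif data_type == "qualitative":
        table = {
            "lived experience": ["phenomenological"],
            "build theory": ["grounded theory"],
            "culture in natural setting": ["ethnographic"],
            "deep single-case insight": ["case study"],
        }
    else:
        raise ValueError("data_type must be 'quantitative' or 'qualitative'")
    return table.get(aim, [])

print(candidate_designs("qualitative", "build theory"))
```

Time and resource constraints would then further prune the candidates – for example, striking ethnography from the list if you only have a month.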


Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological, grounded theory, ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.

If you need a helping hand with your research design (or any other aspect of your research), check out our private coaching services.


Open access | Published: 23 October 2023

Eight characteristics of rigorous multilevel implementation research: a step-by-step guide

Rebecca Lengnick-Hall, Nathaniel J. Williams, Mark G. Ehrhart, Cathleen E. Willging, Alicia C. Bunger, Rinad S. Beidas & Gregory A. Aarons

Implementation Science, volume 18, Article number: 52 (2023)


Although healthcare is delivered in inherently multilevel contexts, implementation science has no widely endorsed methodological standards defining the characteristics of rigorous, multilevel implementation research. We identify and describe eight characteristics of high-quality, multilevel implementation research to encourage discussion, spur debate, and guide decision-making around study design and methodological issues.


Implementation researchers who conduct rigorous multilevel implementation research demonstrate the following eight characteristics. First, they map and operationalize the specific multilevel context for defined populations and settings. Second, they define and state the level of each construct under study. Third, they describe how constructs relate to each other within and across levels. Fourth, they specify the temporal scope of each phenomenon at each relevant level. Fifth, they align measurement choices and construction of analytic variables with the levels of theories selected (and hypotheses generated, if applicable). Sixth, they use a sampling strategy consistent with the selected theories or research objectives and sufficiently large and variable to examine relationships at requisite levels. Seventh, they align analytic approaches with the chosen theories (and hypotheses, if applicable), ensuring that they account for measurement dependencies and nested data structures. Eighth, they ensure inferences are made at the appropriate level. To guide implementation researchers and encourage debate, we present the rationale for each characteristic, actionable recommendations for operationalizing the characteristics in implementation research, a range of examples, and references to make the characteristics more usable. Our recommendations apply to all types of multilevel implementation study designs and approaches, including randomized trials, quantitative and qualitative observational studies, and mixed methods.

These eight characteristics provide benchmarks for evaluating the quality and replicability of multilevel implementation research and promote a common language and reference points. This, in turn, facilitates knowledge generation across diverse multilevel settings and ensures that implementation research is consistent with (and appropriately leverages) what has already been learned in allied multilevel sciences. When a shared and integrated description of what constitutes rigor is defined and broadly communicated, implementation science is better positioned to innovate both methodologically and theoretically.


Contributions to the literature

Awareness of what constitutes rigorous multilevel implementation research is essential for theory generation and refinement across the diverse contexts in which implementation research is conducted.

The methodological standards explained and recommended here are critical for planning, evaluating, and replicating multilevel implementation research.

This manuscript articulates eight characteristics of rigorous, high-quality multilevel implementation research and provides prompts, topic-specific references, and implementation examples to help readers incorporate these ideas into their studies.

Rigorous implementation science requires transparent acknowledgment and skillful incorporation of the context within which implementation occurs. For implementation researchers, this requirement means addressing the inherently multilevel contexts within which healthcare is delivered. Patients who access healthcare are typically nested within one or more individual providers who deliver care (we use the term “providers” inclusively to encompass clinicians, practitioners, and others involved in health service delivery). Individual providers often work in one or more teams, clinics, or other subunits of organizations. Organizations, in turn, are typically embedded within one or more broader communities, networks, and systems. If the goal of implementation science is to improve patient and public health through “the study of methods to promote the systematic uptake of research findings and evidence-based practices (EBPs) into routine healthcare delivery” [ 1 ], we believe the field’s methods must rigorously address this complex, multilevel reality.

When are multilevel methods necessary in implementation science?

Given the multilevel nature of healthcare and public health service delivery, we propose that implementation researchers should always start with the default assumption that their research design will need to address multilevel context and related methodological issues, moving away from this assumption only after confirming that all the methodological decisions made place the study design completely in “single-level” research territory. A design is “single level” when all phenomena and theoretical constructs of interest occur at the same level within the implementation context, all observations and measurements occur at that level, and there is neither theoretically nor empirically important nesting of research participants or dependence of observations (as might be caused, for example, by longitudinal measurement of providers working in the same unit). Although single-level conditions could be met in implementation studies, we propose it is extremely rare. We believe the burden is on implementation scientists (as developers, consumers, and evaluators of research) to ensure multilevel methodological issues are properly addressed in every implementation study.
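One practical way to probe the "dependence of observations" condition mentioned above is to estimate an intraclass correlation, ICC(1), for nested data. The sketch below uses invented, balanced toy data and the standard one-way ANOVA estimator; a non-trivial ICC suggests the design is not "single level":

```python
# Rough sketch (hypothetical data): provider scores nested within
# clinics. A non-trivial ICC(1) indicates that scores depend on the
# clinic, so multilevel methods are warranted. Assumes balanced groups.

clinics = {
    "A": [4.0, 4.2, 4.4],
    "B": [2.0, 2.1, 2.2],
    "C": [3.0, 3.1, 3.2],
}

def icc1(groups):
    k = len(next(iter(groups.values())))   # providers per clinic
    g = len(groups)                        # number of clinics
    grand = sum(sum(v) for v in groups.values()) / (g * k)
    # Between-clinic and within-clinic mean squares (one-way ANOVA).
    msb = k * sum((sum(v) / k - grand) ** 2 for v in groups.values()) / (g - 1)
    msw = sum((x - sum(v) / k) ** 2
              for v in groups.values() for x in v) / (g * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

print(round(icc1(clinics), 3))
```

With these toy numbers the ICC is close to 1, because nearly all variation lies between clinics rather than within them — a clear signal that the observations are not independent.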

Challenges of multilevel research

Conducting methodologically rigorous multilevel studies is challenging. Such studies are often more complicated to design and execute than single-level studies [ 2 ]. Two (of many potential) examples of this complexity are difficulties associated with measuring implementation strategy and health intervention effects on outcomes at different levels and estimating their interaction effects across different levels [ 3 ]. As a result, conducting multilevel research tends to require a specific research skillset and a transdisciplinary approach [ 2 , 4 ]. Here, we use Choi and Pak’s definition (p. 359): “Transdisciplinarity integrates the natural, social and health sciences in a humanities context, and in so doing transcends each of their traditional boundaries”[ 5 ].

The multilevel research literature is highly specialized and dispersed across different disciplinary journals, which hinders a researcher’s ability to access and synthesize existing guidance, especially for those who do not have multilevel research training [ 6 ]. This training includes firm grounding in foundational multilevel literature (e.g., Kozlowski & Klein’s seminal book [ 7 ]) and the focused study of key theories (e.g., psychological theories that explain multilevel organizational behavior), constructs (e.g., emergence, “shared unit” constructs), and methodological approaches (e.g., quantitative multilevel modeling).

Acknowledging and accounting for the multilevel structure in implementation contexts can also be laborious, resource intensive, and costly [ 2 , 3 , 8 ]. Practical challenges include getting appropriate expertise on the research team, recruiting and enrolling a large number of organizations or service systems (each of which has different gatekeepers with varying priorities/concerns), completing informed consent procedures with multiple levels of interconnected participants, and managing varying concerns about protecting participant confidentiality (e.g., collecting data that could identify participants but are considered standard demographic information such as employee age and number of years at the organization) [ 8 ]. Obstacles can arise when university ethics committees are unfamiliar with multilevel designs and have to make judgment calls about what constitutes coercion (e.g., staff feeling pressure to participate by their organizational leaders), how to operationalize informed consent in multilevel contexts, and who owns and houses the data [ 8 ].

We recognize that, currently, the supply of multilevel expertise in implementation science is low and the demand is high, especially given the field’s relatively untapped relationships with partners who have this expertise (e.g., faculty in business schools). As such, at this time, it is not reasonable to expect every implementation research team to include a multilevel research expert who has all of the aforementioned training. Therefore, we write this paper with the hope that it is a first step in exposing the implementation research community to key multilevel research topics and resources, such that we can begin to build capacity for conducting multilevel research and elevate the quality of existing multilevel research across the field as a whole.

Current literature

Researchers from several different disciplines have offered guidelines addressing multilevel research topics. Focusing on quantitative studies, González-Romá and Hernández [ 9 ] compiled an excellent list of multilevel topics, corresponding recommendations, and references. Topics include when and why multilevel methods are used, developing multilevel hypotheses, deciding between different quantitative analytic approaches (e.g., conventional multilevel modeling or multilevel structural equation modeling), and fitting a multilevel model [ 9 ]. As is evident in their table, each topic (1) covers content from its own separate set of references, (2) makes unique assumptions about the background knowledge readers need in order to follow the recommendations presented, and (3) is often field specific (e.g., management), a concern raised by Mathieu and Chen [ 4 ]. González-Romá and Hernández’s [ 9 ] table also highlights a dominant approach in the current set of multilevel research recommendations, that is, recommendations focused on quantitative multilevel modeling and specific topics therein [ 6 , 10 , 11 , 12 ]. Other existing literature includes broad reflections on the state of multilevel research in the context of a specific field (i.e., absent detailed design guidance) [ 8 ] and discussions related to the design and evaluation of multilevel interventions (a subtopic within the multilevel research field) [ 13 ].

The predicament of the implementation scientist interested in conducting multilevel research

Our eight characteristics draw from a realist ontological perspective, which holds that “entities exist independently of being perceived and independently of our theories about them” [ 14 ], as well as the multiple epistemological positions reflected within our authorship group and applied to projects depending on the research aims (e.g., post-positivism, social constructionism). We provide practical recommendations that are broadly applicable to all types of implementation research methodologies (i.e., quantitative, qualitative, and mixed methods). These recommendations are also relevant to any implementation research aim (e.g., implementing research-supported interventions, complex multilevel clinical practices, public health interventions, or policies) or study design (e.g., trials, observational studies) conducted at any level (or levels) of implementation contexts.

Again, recognizing that not every implementation researcher is, or can easily access, a multilevel research expert, we write this paper with these three goals in mind. First, to ease the reader’s burden of digesting a large body of specialized and divergent existing literature, we offer a cohesive set of research characteristics presented in a sequence that aligns with developing a research project (from research question formulation to evaluation). Second, to ease the burden of learning a new disciplinary language and reference points, we translate ideas from existing literature using constructs and practice examples familiar to an implementation research audience. Third, to be more inclusive of qualitative and mixed methods, we expand our focus beyond quantitative multilevel modeling. In sum, we echo Molina-Azorin and colleagues [ 2 ], with the intent of addressing the needs of the diverse implementation research community:

Our approach will be to see the ‘forest’ rather than some particular ‘trees.’ We examine the big picture, indicating the main elements of multilevel research. An exhaustive analysis of all the elements of multilevel research goes beyond the purpose of this methodological insight, but we provide key references in the literature that could be used…[with the hope that]…multilevel research brings us closer to the reality of [implementation] practice. pg. 2

Road map for this paper

Our list of eight characteristics can be used to inform new research or enhance existing studies. We also hope that journal editors, peer reviewers, and funders will use this information when assessing the quality of multilevel implementation research. Each characteristic below is a continuation of the following sentence stem: “To conduct rigorous, high-quality multilevel implementation research…” In the text, we provide the rationale for each characteristic’s inclusion and recommendations for its operationalization when designing or evaluating research. The Additional Files 1 – 8 accompanying each characteristic illustrate how readers can apply it practically and concretely. Additional file documents feature prompts, practical considerations, checklists, visual aids, curated references, additional implementation research examples, notes about applicable glossary terms, and detailed guidance for navigating particular issues (e.g., creating a multilevel sampling plan). For readers interested in a holistic view of how our characteristics apply to a single study, we offer Additional File 9 , which demonstrates the application of the characteristics in a mixed-methods, hybrid type III effectiveness-implementation trial called ASPIRE (for Adolescent and child Suicide Prevention in Routine clinical Encounters) [ 15 ]. The ASPIRE trial offers a unified, if imperfect, example of the characteristics because it incorporates (a) multiple levels of sampling with nested observations, (b) variables (i.e., antecedents, mediators, and outcomes) that occur at different levels, (c) constructs which represent shared unit characteristics which are measured through aggregation of individual responses, (d) randomization at the cluster level, and (e) both quantitative and qualitative analyses. Table 1 summarizes each characteristic and associated recommendations for implementation researchers; we envision it could be used as a simple planning tool or evaluative checklist. 
We also hope Table 1 encourages readers to use our eight characteristics as a whole, avoiding the problems associated with best practice misuse (e.g., cherry-picking specific sections to justify singular decisions while ignoring the others) [ 16 ].
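As a minimal sketch of how Table 1 might be used as a planning checklist, the snippet below paraphrases the eight characteristics (our wording, condensed) and flags the ones a study plan has not yet addressed:

```python
# Hypothetical planning-checklist sketch; labels condense the eight
# characteristics in our own words.

CHARACTERISTICS = [
    "map the multilevel context",
    "define the level of each construct",
    "describe within- and cross-level relationships",
    "specify temporal scope at each level",
    "align measurement with the levels of theory",
    "sample consistently with theory at requisite levels",
    "align analysis with nesting and dependencies",
    "make inferences at the appropriate level",
]

def outstanding(status):
    """Return the characteristics not yet addressed in a study plan."""
    return [c for c, done in zip(CHARACTERISTICS, status) if not done]

status = [True] * 8
status[3] = False  # temporal scope not yet specified
print(outstanding(status))
```

Using the characteristics as a whole, rather than cherry-picking individual items, is the intended mode of use.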

1. Map and operationalize the specific multilevel context for defined populations and settings

Implementation studies are designed to make inferences about specific populations, which may consist of individuals, groups, organizations, or other systems that occur at specific levels in implementation contexts. Researchers should directly acknowledge these levels and their potential influence(s) on focal populations. Not doing so can lead to blind spots when conducting analyses and interpreting findings, and limit the generalizability of results. For example, a trial of an implementation strategy that identifies and equips clinical champions while focusing exclusively on clinic-level variables may ignore critical intra-clinic factors that may explain strategy effectiveness, such as variation in team-level leadership and characteristics of client/patient populations served [ 17 ].

Our recommendation for implementation researchers

Create and include a list or map of contextual levels most salient to the research question(s) and population(s) under study. This map should justify the inclusion and exclusion of specific levels within the research design based on the research question and theory about how focal antecedents, processes, or outcomes relate to each other. Table 2 presents an example of levels that may (or may not) be included in an implementation study depending on the context and aligned with the Consolidated Framework for Implementation Research (CFIR) and the Exploration, Preparation, Implementation, Sustainment (EPIS) framework [ 18 , 19 , 20 , 21 ]. Depending on the research questions, specific implementation studies may use only one or a few levels from this table (or some modification and expansion thereof). For more information on how to map the contextual levels within an implementation study, see Additional File 1 .

2. Define and state the level of each construct under study

After mapping the study’s multilevel context and associated populations, the next step is to define each construct and identify its level within the design. Clear construct definition is crucial because it provides the basis for the accurate construction of measures (Characteristic 5) and treatment of analytic variables (Characteristic 7) and supports appropriate interpretation of results (Characteristic 8) [ 7 ]. Constructs may include implementation determinants [ 22 , 23 ], implementation strategies [ 10 , 24 ], variables that are part of a mediation chain [ 25 ], variables that modify the effects of other antecedents (i.e., moderator, effect modifier), or implementation or clinical effectiveness outcomes [ 26 ].

For each construct under study, define (1) its substantive meaning (i.e., what is it?) and (2) the level at which it resides and its associated population unit (e.g., does it occur at the level of patient, provider, team, clinic, organization?) [ 27 ]. For each variable, provide an explanation or “mini theory” that clarifies why the construct is assigned to its specific level and population unit [ 7 ]. For example, a study of hospitals might invoke the concept of organizational culture (defined following Schein as “a pattern of shared basic assumptions learned by a group as it solved its problems of external adaptation and internal integration, which has worked well enough to be considered valid and, therefore, to be taught to new members as the correct way to perceive, think, and feel in relation to those problems” ([ 28 ] p. 18]), assign it to the level of hospitals (i.e., culture is a characteristic of hospitals), and use organizational culture theory to explain how culture emerges at the hospital level. This definition and theory would guide measurement and analytic decisions. For more information on how to do this, see Additional File 2 .
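One lightweight way to operationalize this characteristic is to record each construct's definition, level, and level rationale in a structured form. The sketch below is a hypothetical illustration using the organizational culture example from the text:

```python
# Hypothetical sketch: making Characteristic 2 explicit, so every
# construct in a study carries its definition, level, and rationale.

from dataclasses import dataclass

@dataclass(frozen=True)
class Construct:
    name: str
    definition: str   # substantive meaning (what is it?)
    level: str        # patient, provider, team, clinic, organization...
    rationale: str    # "mini theory" justifying the level assignment

culture = Construct(
    name="organizational culture",
    definition="shared basic assumptions learned by a group (after Schein)",
    level="hospital",
    rationale="culture emerges as a shared property of the hospital",
)

print(culture.level)
```

A table of such records can then drive measurement and analytic decisions, as Characteristics 5 and 7 require.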

3. Describe how constructs relate to each other within and across levels

After defining each construct in terms of its meaning, level, and associated unit, investigators must clarify how study constructs relate to each other within and across levels. This step is essential to planning analyses.

Research plans should include a figure or narrative that describes the study’s theoretical model, including the level of each construct and the hypothesized relationships between constructs. We also suggest that researchers specify each construct’s causal ordering in the study theoretical model (e.g., is it an antecedent, mediator, moderator, consequent, primary, or secondary endpoint) [ 27 ]. Figure  1 provides an example. Drawing on theory and prior research, researchers should provide a rationale for the proposed relationships within the model.

Figure 1: Example multilevel theoretical model.

Note: In this example, the study tests the relationships between three constructs which occur at different levels of the implementation context. The researchers hypothesize that variation in implementation climate across organizations will explain variation in provider competence to implement a focal intervention with fidelity, which in turn will explain variation in the extent to which patients experience fidelity to the focal intervention during the course of treatment.

When hypothesized relationships cross levels, researchers should identify and describe the processes through which antecedents at higher levels influence consequents at lower levels (i.e., top-down processes) or how antecedents at lower levels shape consequents at higher levels (i.e., bottom-up processes). The description should include theoretical justification for each cross-level effect to be examined in the study. For example, if an investigator hypothesizes that increased clinic implementation climate, defined as employees’ shared perceptions of the extent to which their clinic expects, supports, and rewards the use of a specific intervention with high fidelity [ 29 ], will increase provider self-efficacy to deliver an intervention with high fidelity, this cross-level relationship implies an increase in the clinic means of provider self-efficacy, and the research plan should describe how and why that would occur. Alternatively, an investigator might hypothesize that high levels of implementation climate [ 29 ] will decrease the dispersion or variability of provider attitudes around their clinic means (i.e., the climate will increase the level of agreement among providers within a clinic). Since both of these variables (i.e., implementation climate and the magnitude of variability in attitudes) represent characteristics of the clinic, they occur at the same level, and the research plan should state how this same-level process would occur. For more information on how to do this, see Additional File 3 .
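The distinction drawn above — a clinic mean as a cross-level consequent versus within-clinic dispersion as a same-level agreement construct — can be illustrated with invented provider ratings:

```python
# Toy sketch (hypothetical data): the clinic mean of provider
# self-efficacy versus within-clinic dispersion (agreement).

from statistics import mean, pstdev

self_efficacy = {          # provider ratings nested within clinics
    "clinic_1": [4.1, 4.0, 4.2],   # similar mean, high agreement
    "clinic_2": [2.0, 4.5, 3.1],   # low agreement around the mean
}

clinic_means = {c: round(mean(v), 2) for c, v in self_efficacy.items()}
clinic_dispersion = {c: round(pstdev(v), 2) for c, v in self_efficacy.items()}

print(clinic_means, clinic_dispersion)
```

A hypothesis that climate raises clinic means concerns the first quantity; a hypothesis that climate increases within-clinic agreement concerns the second — and, as the text notes, the two imply different theoretical processes.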

4. Specify the temporal scope of each phenomenon at each relevant level

Rigorous multilevel implementation research requires thoughtful consideration of temporality (i.e., the sequence of events that unfold over calendar time) and pace of change (i.e., tempo or speed of change) as well as how these might differ across levels and align within a research design. People, organizations, and other systems change over time; however, the sequence or pace of change at one level may differ from that at another level [ 7 ]. For instance, an organization’s culture may be slow to change compared to specific aspects of policies or staffing. Additionally, organizations may change more quickly or slowly under different conditions. Externally imposed system reforms (e.g., funding shifts or policy changes) or crises (e.g., pandemics or natural disasters) may trigger more rapid change than internally planned changes. For instance, COVID-19 mitigation and other social distancing measures triggered a rapid shift from in-person service delivery to telehealth or other virtual platforms. Please see Additional File 4 for another example of this issue.

We recommend that researchers provide a detailed explanation of the expected temporal dynamics within their study at each level, using visual aids as needed, which includes the following: (1) when investigators expect to observe change in each outcome at each relevant level (e.g., of system- or organization-level implementation strategies), (2) how frequently and when constructs will be measured to capture these changes, (3) how changes in outcomes at different levels align with each other in the research design, and (4) the theoretical rationale for these choices. Research plans should draw on relevant theory and report the expected direction, shape, frequency, and/or tempo of anticipated change in focal constructs, at each level and across levels, with decisions about measurement frequency and timing linked to these theoretical expectations. Measurement intervals and durations may differ at each level depending on the expected temporal dynamics and emergent issues. For more information on how to do this, see Additional File 4 .

5. Align measurement choices and construction of analytic variables with the levels of theories selected (and hypotheses generated, if applicable)

The operationalization and measurement of variables must align with theory so that inferences about selected constructs accurately reflect target levels and populations. Put simply, measurement must align with the level of theory. By level of theory, we mean the level at which the construct has been defined (i.e., in Characteristic 2). An example of measurement-theory misalignment is using individually varying scores to measure a theoretically shared organizational characteristic such as organizational climate. Ensuring measurement-theory alignment requires investigators to understand the theoretical assumptions embedded within each of the study’s constructs (e.g., organizational climate assumptions) and provide evidence that the measurements taken conform to those theoretical assumptions [ 7 ].

Align the levels of theory and measurement. Such alignment is often most difficult for unit-level constructs; however, the organizational research literature offers a useful typology of categories of variables (global, shared, and configural) to aid investigators in this task [ 7 ]. Global constructs are those that originate at the unit level and represent objective, easily observable characteristics of the unit. Examples of global constructs include the type of hospital ward or unit (e.g., pediatric, intensive care) and the number of patients seen by the unit in a year. Shared constructs originate at the individual level but are shared across unit members [ 7 ]. An example of a shared construct is clinic implementation climate [ 29 ]. Note that even though clinic implementation climate originates at the provider level (i.e., in individual provider perceptions), it is conceptualized as a characteristic of the unit because it is a shared, contextual feature of the work environment. Configural constructs originate at the individual level and represent a pattern of individual characteristics within the unit [ 7 ]. Examples of configural constructs include variation in years of clinical experience on a team, diversity of professional roles within the team, or the optimal performance by a single member of the team. This typology directly informs the selection of appropriate measurement approaches and guides the type of validity evidence investigators should provide to demonstrate alignment between theories and measures. For example, investigators may need to provide evidence of within-clinic agreement on clinician perceptions of implementation climate in order to support the validity of the clinic implementation climate construct within their study [ 30 ]. For more information on how to align the levels of theory and measurement, see Additional File 5 .
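As a concrete illustration of such validity evidence, one commonly reported statistic for shared constructs is ICC(1) from a one-way ANOVA (see Bliese, cited above). The sketch below uses entirely invented clinic names and provider ratings, and assumes balanced group sizes; it is an illustrative calculation, not part of the original study.

```python
from statistics import mean

# Hypothetical provider ratings of implementation climate, nested in clinics.
clinics = {
    "clinic_a": [4.0, 4.2, 4.1, 3.9],
    "clinic_b": [2.1, 2.3, 2.0, 2.2],
    "clinic_c": [3.0, 3.1, 2.9, 3.2],
}

def icc1(groups):
    """ICC(1) from a balanced one-way ANOVA:
    (MSB - MSW) / (MSB + (k - 1) * MSW), where k is the group size."""
    k = len(next(iter(groups.values())))  # providers per clinic (assumes balance)
    n_groups = len(groups)
    grand = mean(x for g in groups.values() for x in g)
    # Between-clinic and within-clinic mean squares
    msb = k * sum((mean(g) - grand) ** 2 for g in groups.values()) / (n_groups - 1)
    msw = sum((x - mean(g)) ** 2 for g in groups.values() for x in g) / (n_groups * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# A high ICC(1) indicates clinic membership explains most rating variance,
# supporting aggregation of individual perceptions to the clinic level.
print(round(icc1(clinics), 3))  # → 0.982
```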

6. Use a sampling strategy consistent with the selected theories or research objectives and sufficiently large and variable to examine relationships at requisite levels

In multilevel studies, there are different sample sizes and sampling plans at each level of the design. For example, in a study of community health workers embedded within primary care clinics, the inferences drawn will be shaped by the samples of clinics and by the sample of workers within each clinic. As with all samples, investigators must attend to the number of participants necessary to generate appropriate statistical or theoretical inferences, the distribution of participants’ characteristics (to ensure adequate variability), and their representativeness of a target population. However, this logic applies separately to each level’s specific sample(s).

Special attention is often needed to ensure that the number and representativeness of participants within each higher-level unit are adequate to address the research questions and are aligned with the theoretical or conceptual model. For example, how representative of a clinic are participants’ responses if only two of ten workers complete study surveys? What is the minimum number of participants needed per clinic? What are the implications of variation across clinics in their within-clinic response rates?

Given the considerations above, multilevel implementation studies should be designed to ensure there is (1) a large enough sample at each level to test hypotheses or make theoretical inferences rigorously, (2) adequate variability within the sample at each level to achieve these objectives, and (3) representativeness of the achieved sample at each level (for quantitative studies). To help readers of research reports assess these study characteristics, we recommend that quantitative multilevel implementation studies report the following: (1) the distribution and range of within-unit sample sizes, including a measure of central tendency (median/mean), dispersion (standard deviation), and minimum and maximum values (e.g., median, minimum, and maximum number of providers and/or patients per clinic); (2) the distribution and range of within-unit response rates (e.g., calculate the survey response rate within each clinic and report the mean, standard deviation, minimum, and maximum response rates); (3) a statistical comparison of the characteristics of responding versus nonresponding unit members; and (4) the theoretical or empirical rationale for exclusion of units (e.g., on the basis of response rates or number of participants).
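The within-unit descriptive statistics called for in items (1) and (2) are straightforward to compute. A minimal sketch, using invented counts of responders and eligible providers per clinic:

```python
from statistics import mean, median, stdev

# Hypothetical within-clinic counts: (survey responders, eligible providers).
counts = {"clinic_a": (8, 10), "clinic_b": (3, 12),
          "clinic_c": (9, 9), "clinic_d": (5, 11)}

sizes = [r for r, _ in counts.values()]      # responders per clinic
rates = [r / n for r, n in counts.values()]  # within-clinic response rates

print(f"responders/clinic: median={median(sizes)}, min={min(sizes)}, max={max(sizes)}")
print(f"response rate: mean={mean(rates):.2f}, sd={stdev(rates):.2f}, "
      f"min={min(rates):.2f}, max={max(rates):.2f}")
```

Reporting the full distribution (not just a mean) makes visible cases like clinic_b, where only 3 of 12 providers responded and within-clinic representativeness is questionable.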

In qualitative studies, the goal is typically not to obtain a representative sample but to purposefully select cases or participants that meet preselected criteria that address the study’s research questions. Nonetheless, it is critical to ensure that investigators sample at all specified levels for analytic purposes, striving for sufficient sample sizes of the population units at each pertinent level and attending to consistencies, contradictions, and interconnections across levels. For example, in a study examining an organizational implementation strategy, investigators will likely be interested not just in what executives say about change processes related to an innovation’s uptake but also in triangulated data from first-level leaders (i.e., those who supervise providers) and direct service providers. Sampling at the different levels enables a more nuanced perspective on the interplay between levels and how they might influence each other. For more information on designing and justifying a quantitative or qualitative multilevel sampling plan, see Additional File 6 .

7. Align analytic approaches with the chosen theories (and hypotheses, if applicable), ensuring that they account for measurement dependencies and nested data structures

Although there is no single best way to analyze data from multilevel implementation studies, investigators must ensure that analytic choices (1) account for the dependencies that arise in hierarchically sampled observations and (2) align with the study’s level(s) of theory and hypotheses or research aims [ 9 ]. This applies equally to quantitative, qualitative, and mixed-methods designs. Traditional quantitative data analytic approaches, such as ordinary least squares regression and t-tests, assume observations are independently sampled and thus uncorrelated. Statistical inferences are biased when this assumption is violated, as often occurs in multilevel designs where observations at a lower level (e.g., patient outcome scores) are nested within higher-level units (e.g., providers).
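One back-of-the-envelope way to see why such dependencies matter, not described in this paper but standard in survey statistics, is the Kish design effect, which quantifies how much within-cluster correlation inflates variance and shrinks the effective sample size. All numbers below are illustrative assumptions.

```python
# Kish design effect for clustered observations: DEFF = 1 + (m - 1) * ICC,
# where m is the cluster size and ICC is the intraclass correlation.
def design_effect(cluster_size, icc):
    return 1 + (cluster_size - 1) * icc

n_total = 400        # e.g., 20 clinics x 20 providers (hypothetical)
m, icc = 20, 0.10    # assumed cluster size and intraclass correlation

deff = design_effect(m, icc)
print(round(deff, 2))          # → 2.9
print(round(n_total / deff))   # effective sample size → 138
```

Even a modest ICC of 0.10 nearly triples the variance of estimates here, which is why analyses that ignore nesting overstate precision.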

In qualitative studies, researchers can query the extent to which there is agreement or disagreement within levels (e.g., perceptions of leadership among a clinical team) and across levels (e.g., perceptions of leadership that vary between leader peer reports and subordinate reports of that leader) [ 31 , 32 ]. Qualitative research can also help elucidate the kinds of complex nested relationships present within an implementation context [ 33 ] and can, therefore, provide valuable insight into what is most important to address related to levels of nesting. Qualitative research centered on process and the real-world interplay occurring across levels is especially useful for describing and contextualizing these dependencies while shedding light on how they likely operate to influence outcomes [ 34 ]. In addition, the process of conducting qualitative research may surface previously unconsidered samples of participants with fresh insights into the multilevel phenomena under analysis.

We recommend investigators directly acknowledge nesting and dependencies (i.e., correlated observations) within the proposed study design, articulate what analytic method has been selected to account for those features (or analytically demonstrate that the dependencies are not substantial enough to be a concern), and provide a rationale for the choice of analysis approach with reference to specific characteristics of the data and strengths of the selected model. For example, a quantitative study that measures fidelity to an intervention at the session level may need to account for the nesting of sessions within patients, of patients within providers, and of providers within clinics, depending on the specific sampling design and focus of the investigation. Investigators would then select an analytic approach that addresses this nesting and provide a rationale for its use in the study.

For quantitative studies, we recommend that investigators ensure that variables enter statistical models at the level warranted and scrutinize choices related to centering, standardization, and calculation of effect sizes to confirm they reflect the study’s multilevel design [ 24 , 35 ]. For randomized trials, the variable representing randomization to condition (i.e., exposure) should enter the statistical model at the level at which randomization occurs [ 36 ]; this often has significant implications for statistical power and sample size, particularly when the emphasis is on testing hypothesized mediators of implementation strategies’ effects [ 10 ].
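As a minimal sketch of one common centering choice, the example below applies group-mean centering, decomposing invented provider scores into a within-clinic component and a grand-mean-centered between-clinic component so that each part can enter a multilevel model at its own level. Clinic names and values are hypothetical.

```python
from statistics import mean

# Hypothetical provider self-efficacy scores by clinic.
data = {"clinic_a": [3.0, 4.0, 5.0], "clinic_b": [1.0, 2.0, 3.0]}

grand = mean(x for g in data.values() for x in g)

decomposed = {
    clinic: {
        # Between-clinic part: the clinic mean, grand-mean centered
        "between": mean(scores) - grand,
        # Within-clinic part: each score, group-mean centered
        "within": [x - mean(scores) for x in scores],
    }
    for clinic, scores in data.items()
}
print(decomposed)
```

Note that the two clinics here have identical within-clinic patterns but different between-clinic components, which is exactly the distinction a multilevel model needs the variables to carry.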

The use of qualitative methods, such as participant observation, interviews, focus groups, and periodic reflections, is crucial to contextualizing and interpreting quantitative findings regarding dependencies and nesting while also offering in-depth insight into the range of anticipated and unanticipated factors emerging in real time that shape implementation processes and outcomes [ 37 , 38 , 39 , 40 ]. A variety of qualitative analytic techniques can be brought to bear in multilevel implementation research, including deductive techniques applying theoretical model constructs to support existing conceptualizations to test and validate theory and inductive techniques to generate new concepts, explanations, or theories from study data. Regardless of the approach taken, the key assessment criteria for analysis and interpretation of qualitative data center on ensuring a solid grasp of background issues and theory and a firm grounding in the data collected. Procedures that enhance the rigor and credibility of qualitative findings include investigating rival explanations pertinent to the phenomena of interest, accounting for disconfirming evidence and irregularities, and undertaking triangulation (within and across methods) [ 41 , 42 , 43 ]. The more data sources, the better. Triangulation practices typically entail summarizing analyses of all data sources and conducting side-by-side comparisons designed to corroborate and expand upon findings to create a complete or holistic picture of implementation processes and outcomes at the specified levels of interest [ 42 ].

Whatever analytic strategies are used to address multilevel designs in implementation research, we recommend investigators be transparent and thorough in reporting details of the analytic approach. We offer this general rule: as analytical complexity and decision points in an analysis increase, so should the level of description of the methods either in text or in a supplemental file. We also suggest investigators consider developing and sharing crosswalks that specify research questions and justify the use of data collection tools and their accompanying analytic techniques, defining their multilevel purpose and (anticipated) contributions, including “explicit connections” or “intentional redundancies” among quantitative and qualitative approaches [ 33 ]. Finally, we recommend that investigators make analytic tools (e.g., qualitative interview guides, statistical code) accessible to end users of multilevel research reports. For more information on how to create qualitative, quantitative, and mixed-methods multilevel analysis plans, see Additional File 7 .

8. Ensure inferences are made at the appropriate level

When analyses are complete, investigators must ensure inferences are made at the appropriate level(s). Most researchers who discuss this issue [ 7 , 44 , 45 , 46 ] focus on two primary fallacies regarding inferences from multilevel research: the atomistic fallacy and the ecological fallacy. The atomistic fallacy occurs when investigators analyze the association between variables at the individual level and then inappropriately make inferences about a higher level of analysis, such as groups or organizations [ 46 ]. Because the association between two variables at the individual level may differ from the association between the same or analogous variables at the group level, it is inappropriate to infer group-level relationships based on individual-level analyses [ 46 ]. For an implementation research example, see Additional File 8 .

The ecological fallacy occurs when investigators conduct studies at a higher level of analysis (e.g., group, organization, or country) and inappropriately make inferences about lower-level units (e.g., individuals) [ 7 ]. Investigators should not use inferences based on data at the group level to substantiate relationships at lower levels of analysis for the same reason described for the atomistic fallacy. More specifically, the association between two variables at the group level may differ from the association between the same or analogous variables at the individual level. For an implementation research example, see Additional File 8 . As Chan [ 45 ] highlighted, both fallacies are ultimately conceptual fallacies about interpreting results.
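A small numeric example, with entirely invented data, shows how the individual-level and group-level associations between the same two variables can differ even in sign, which is why neither direction of cross-level inference is safe:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / sqrt(sum((x - mx) ** 2 for x in xs) *
                      sum((y - my) ** 2 for y in ys))

# Hypothetical data: two clinics, two providers each
# (x = caseload, y = fidelity; all values invented for illustration).
clinics = {"clinic_a": ([0, 10], [10, 0]), "clinic_b": ([2, 12], [13, 3])}

# Individual-level association, pooling all providers
all_x = [x for xs, _ in clinics.values() for x in xs]
all_y = [y for _, ys in clinics.values() for y in ys]
print(round(pearson(all_x, all_y), 2))   # → -0.88 (negative at the provider level)

# Group-level association, correlating clinic means
mean_x = [sum(xs) / len(xs) for xs, _ in clinics.values()]
mean_y = [sum(ys) / len(ys) for _, ys in clinics.values()]
print(round(pearson(mean_x, mean_y), 2))  # → 1.0 (positive at the clinic level)
```

Inferring the clinic-level relationship from the pooled provider-level correlation (atomistic fallacy), or the provider-level relationship from the clinic means (ecological fallacy), would get the sign of the association wrong in this toy dataset.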

Given these considerations, we recommend investigators carefully craft and check language within research reports and presentations to ensure atomistic and ecological fallacies are not present. Precise language is needed to describe the level of the constructs when discussing results. For instance, a conclusion like “higher readiness for change was associated with higher fidelity” is vague about the level, as opposed to “higher unit-level readiness for change was associated with higher provider-level fidelity.” We suggest investigators increase their awareness of these fallacies and build in processes to check their assumptions when interpreting results from multilevel studies. We also recommend following Chan’s guidance to conduct multilevel analyses that appropriately account for variance within and between levels so that “analysis and interpretations can be aligned to avoid the conceptual problem of making inferences at the wrong level” ([ 45 ] p. 405).

Implementation research is inherently multilevel. Building strong multilevel theories that explain the reality of implementation requires rigorous studies. Although the degree to which investigators account for this reality in their work may vary, as may the specific levels assessed in a particular study, we can meaningfully advance implementation science by articulating and enacting achievable standards of rigor for what constitutes high-quality multilevel research. We believe that shared standards of rigor can improve the quality, transparency, generalizability, and replicability of multilevel implementation research. In this paper, we took the first step in establishing and communicating such standards by distilling and translating key concepts from other fields (emphasizing the organizational sciences) for an implementation science audience.

Table 1 concisely summarizes our eight characteristics and associated recommendations for implementation researchers. Our eight characteristics are structured to guide the early conceptualization and grant-writing process. They are also intended to support investigators as they move through decision-making at each research phase: research question formulation, variable selection and measurement, analysis, and interpretation of findings. We hope these characteristics promote a common language and provide an initial template for planning for and evaluating the quality of multilevel implementation research. We also hope that acknowledging these characteristics will push the field forward in building testable multilevel theories that capture the complexities and address the needs of implementation research.

These theories can be examined and tested using a range of designs and approaches. However, the complexity inherent in implementation research calls for other innovative approaches to understanding complex multilevel contexts. Systems science approaches (e.g., social network analysis, agent-based modeling, and systems dynamics) that account for nonlinearities, interdependencies, and cross-level phenomena have strong potential for expanding and testing multilevel theories [ 47 ]. However, even with these types of approaches, it is important to be clear about the ways in which within-level and across-level phenomena operate and interact. Our future work will delve into specific technical considerations and offer more detailed guidance for conducting multilevel research using traditional quantitative, qualitative, and mixed-methods approaches.

Availability of data and materials

Not applicable.

References

Eccles MP, Mittman BS. Welcome to implementation science. Implement Sci. 2006;1:1–3.

Molina-Azorín JF, Pereira-Moliner J, López-Gamero MD, Pertusa-Ortega EM, Tarí JJ. Multilevel research: foundations and opportunities in management. BRQ Bus. Res. Q. 2019; online.

Cleary PD, Gross CP, Zaslavsky AM, Taplin SH. Multilevel interventions: study design and analysis issues. J Natl Cancer Inst Monogr. 2012;44:49–55.

Mathieu JE, Chen G. The etiology of the multilevel paradigm in management research. J Manage. 2011;37:610–41.

Choi BCK, Pak AWP. Multidisciplinarity, interdisciplinarity and transdisciplinarity in health research, services, education and policy: definitions, objectives, and evidence of effectiveness. Clin Invest Med. 2006;6:351–64.

Aguinis H, Gottfredson RK, Culpepper SA. Best-practice recommendations for estimating cross-level interaction effects using multilevel modeling. J Manage. 2013;39:1490–528.

Kozlowski SWJ, Klein KJ. A multilevel approach to theory and research in organizations: contextual, temporal, and emergent properties. Multilevel theory, research, and methods in organizations: foundations, extensions, and new directions. San Francisco, CA: Jossey-Bass; 2000. p. 3–90.

Kulik CT. Climbing the higher mountain: the challenges of multilevel, multisource, and longitudinal research designs. Manag Organ Rev. 2011;7:447–60.

González-Romá V, Hernández A. Conducting and evaluating multilevel studies: recommendations, resources, and a checklist. Organ Res Methods. 2022;online.

Williams NJ, Preacher KJ, Allison PD, Mandell DS, Marcus SC. Required sample size to detect mediation in 3-level implementation studies. Implement Sci. 2022;17:66.

Peugh JL. A practical guide to multilevel modeling. J Sch Psychol. 2010;48:85–112.

Lane SP, Hennes EP. Power struggles. J Soc Pers Relat. 2018;35:7–31.

Paskett E, Thompson B, Ammerman AS, Ortega AN, Marsteller J, Richardson D. Multilevel interventions to address health disparities show promise in improving population health. Health Aff. 2016;35:1429–34.

Phillips DC. Philosophy, science and social inquiry: contemporary methodological controversies in social science and related applied fields of research. Pergamon Press; 1987.

Beidas RS, Ahmedani BK, Linn KA, Marcus SC, Johnson C, Maye M, et al. Study protocol for a type III hybrid effectiveness-implementation trial of strategies to implement firearm safety promotion as a universal suicide prevention strategy in pediatric primary care. Implement Sci. 2021;16:89.

Kreamer LM, Albritton BH, Tonidandel S, Rogelberg SG. The use and misuse of organizational research methods ‘best practice’ articles. Organ Res Methods. 2023;26:387–408.

Aarons GA, Ehrhart MG, Farahnak LR, Hurlburt MS. Leadership and organizational change for implementation (LOCI): a randomized mixed method pilot study of a leadership and organization development intervention for evidence-based practice implementation. Implement Sci. 2015;10:1–2.

Damschroder LJ, Reardon CM, Widerquist MAO, Lowery J. The updated Consolidated Framework for Implementation Research based on user feedback. Implement Sci. 2022;17:75.

Moullin JC, Dickson KS, Stadnick NA, Rabin B, Aarons GA. Systematic review of the Exploration, Preparation, Implementation, Sustainment (EPIS) framework. Implement Sci. 2019;14:1.

Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4.

Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38:4–23.

Flottorp SA, Oxman AD, Krause J, Musila NR, Wensing M, Godycki-Cwirko M, et al. A checklist for identifying determinants of practice: a systematic review and synthesis of frameworks and taxonomies of factors that prevent or enable improvements in healthcare professional practice. Implement Sci. 2013;8:1–1.

Chaudoir SR, Dugan AG, Barr CH. Measuring factors affecting implementation of health innovations: a systematic review of structural, organizational, provider, patient, and innovation level measures. Implement Sci. 2013;8:1–20.

Enders CK, Tofighi D. Centering predictor variables in cross-sectional multilevel models: a new look at an old issue. Psychol Methods. 2007;12:121–38.

Williams NJ. Multilevel mechanisms of implementation strategies in mental health: integrating theory, research, and practice. Adm Policy Ment Health. 2016;43:783–98.

Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38:65–76.

Klein KJ, Dansereau F, Hall RJ. Levels issues in theory development, data collection, and analysis. Acad Manage Rev. 1994;19:195–229.

Schein EH. Organizational culture and leadership. 4th ed. San Francisco, CA: Jossey-Bass; 2010.

Ehrhart MG, Aarons GA, Farahnak LR. Assessing the organizational context for EBP implementation: the development and validity testing of the Implementation Climate Scale (ICS). Implement Sci. 2014;9:1–1.

Jacobs SR, Weiner BJ, Bunger AC. Context matters: measuring implementation climate among individuals and groups. Implement Sci. 2014;9:46.

Kitzinger J. Qualitative research: introducing focus groups. BMJ. 1995;311:299–302.

Zade H, Drouhard M, Chinh B, Gan L, Aragon C. Conceptualizing disagreement in qualitative coding. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM; 2018. p. 1–11.

Headley MG, Plano Clark VL. Multilevel mixed methods research designs: advancing a refined definition. J Mix Methods Res. 2020;14:145–63.

Aguinis H, Molina-Azorín JF. Using multilevel modeling and mixed methods to make theoretical progress in microfoundations for strategy research. Strateg Organ. 2015;13:353–64.

Hofmann D. Centering decisions in hierarchical linear models: implications for research in organizations. J Manage. 1998;24:623–41.

Klein KJ, Kozlowski SWJ. From micro to meso: critical steps in conceptualizing and conducting multilevel research. Organ Res Methods. 2000;3:211–36.

Hohmann AA, Shear MK. Community-based intervention research: coping with the “noise” of real life in study design. Am J Psychiatry. 2002;159:201–7.

Hamilton AB, Finley EP. Qualitative methods in implementation research: an introduction. Psychiatry Res. 2019;280:epub.

Finley EP, Huynh AK, Farmer MM, Bean-Mayberry B, Moin T, Oishi SM, et al. Periodic reflections: a method of guided discussions for documenting implementation phenomena. BMC Med Res Methodol. 2018;18:153.

Getrich C, Heying S, Willging C, Waitzkin H. An ethnography of clinic “noise” in a community-based, promotora-centered mental health intervention. Soc Sci Med. 2007;65:319–30.

Lincoln YS, Guba EG. Naturalistic inquiry. Thousand Oaks, CA: Sage; 1985.

Patton M. Qualitative research & evaluation methods. 4th ed. Thousand Oaks, CA: Sage Publications, Inc.; 2015.

Miles MB, Huberman AM, Saldaña J. Qualitative data analysis: methods Sourcebook. 4th ed. Thousand Oaks, CA: Sage; 2020.

Bliese PD. Within-group agreement, non-independence, and reliability: implications for data aggregation and analyses. In: Klein KJ, Kozlowski SWJ, editors. Multilevel theory, research and methods in organizations: foundations, extensions, and new directions. San Francisco, CA: Jossey-Bass; 2000. p. 349–81.

Chan D. Multilevel research. In: Leong FTL, Austin JT, editors. The psychology research handbook. 2nd ed. Thousand Oaks, CA: Sage; 2006. p. 401–18.

Diez Roux AV. A glossary for multilevel analysis. J Epidemiol Community Health. 2002;56:588–94.

Luke DA, Stamatakis KA. Systems science methods in public health: dynamics, networks, and agents. Annu Rev Public Health. 2012;33:357–76.

Acknowledgements


The authors wish to acknowledge members of the University of Pennsylvania NIMH ALACRITY Center (P50MH113840) whose thoughtful insights, conversations, and feedback on ideas in this manuscript helped sharpen our thinking. We also wish to acknowledge the support of Drs. Kristin Linn, Steven Marcus, and Dylan Small for their work designing the ASPIRE trial.

Funding

Dr. Lengnick-Hall is funded by NIMH P50MH113662. Dr. Williams is funded by R01MH119127 and R21MH126076. The ASPIRE trial (Beidas, PI) is funded by R01 MH123491. Dr. Beidas is also funded by P50CA244690, R33HL161752, U01HL159880, and R01NR019753. Dr. Aarons is funded by R01DA049891, P50MH126231, U01CA275118, and G11TW011841. Dr. Willging is funded by R01 NR021019 and R01HD083399. Additionally, Drs. Lengnick-Hall and Bunger are alumni, Drs. Aarons and Beidas are core faculty, and Dr. Williams is an expert faculty, with the NIMH Implementation Research Institute (IRI) at the George Warren Brown School of Social Work, Washington University, in St. Louis through an award from NIMH (R25 MH080916-08). The funding bodies played no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript.

Author information

Rebecca Lengnick-Hall and Nathaniel J. Williams contributed equally to this work.

Authors and Affiliations

The Brown School, Washington University in St. Louis, St. Louis, MO, USA

Rebecca Lengnick-Hall

School of Social Work, Boise State University, Boise, ID, USA

Nathaniel J. Williams

Department of Psychology, University of Central Florida, Orlando, FL, USA

Mark G. Ehrhart

Southwest Center, Pacific Institute for Research and Evaluation, Albuquerque, NM, USA

Cathleen E. Willging

College of Social Work, The Ohio State University, Columbus, OH, USA

Alicia C. Bunger

Medical Social Sciences, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA

Rinad S. Beidas

Department of Psychiatry, UC San Diego ACTRI Dissemination and Implementation Science Center, University of California-San Diego, La Jolla, San Diego, CA, USA

Gregory A. Aarons

Contributions

RLH and NJW led the conceptual development and structure of the manuscript. All authors contributed to initial drafts of the recommendations and participated in decisions about what information to prioritize and how to tailor the information with implementation research examples. MGE provided detailed support on the references included, and RSB provided the details of the ASPIRE trial example. RLH and NJW co-drafted the manuscript, and MGE, CEW, ACB, RSB, and GAA reviewed and edited manuscript sections. All authors reviewed and revised several iterations of the manuscript and approved the final version.

Corresponding author

Correspondence to Rebecca Lengnick-Hall .

Ethics declarations

Ethics approval and consent to participate

Ethics approval and consent are not applicable for the current manuscript because no human subject data were collected or utilized.

Consent for publication

Competing interests

GAA is a co-editor in chief on the editorial board, and RSB is an associate editor of Implementation Science and Implementation Science Communications . ACB and CEW are members of the editorial board of Implementation Science . All decisions on this paper were made by another editor. RSB is principal at Implementation Science & Practice, LLC. She receives royalties from Oxford University Press and consulting fees from United Behavioral Health and OptumLabs and serves on the advisory boards for Optum Behavioral Health, AIM Youth Mental Health Foundation, and the Klingenstein Third Generation Foundation outside of the submitted work. All other authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1:

Characteristic 1. Map and operationalize the specific multilevel context for defined populations and settings.

Additional file 2:

Characteristic 2. Define and state the level of each construct under study.

Additional file 3:

Characteristic 3. Describe how constructs relate to each other within and across levels.

Additional file 4:

Characteristic 4. Specify the temporal scope of each phenomenon at each relevant level.

Additional file 5:

Characteristic 5. Align measurement choices and construction of analytic variables with the levels of theories selected (and hypotheses generated, if applicable).

Additional file 6:

Characteristic 6. Use a sampling strategy consistent with the selected theories or research objectives and sufficiently large and variable to examine relationships at requisite levels.

Additional file 7:

Characteristic 7. Align analytic approaches with the chosen theories (and hypotheses, if applicable), ensuring that they account for measurement dependencies and nested data structures.

Additional file 8:

Characteristic 8. Ensure inferences are made at the appropriate level.

Additional file 9:

An Integrated Example. The ASPIRE trial.

Map and operationalize the specific multilevel context for defined populations and settings.

Context — The totality of space, time, and matter around a healthcare encounter. The combination of these terms indicates that context has a hierarchical structure composed of levels (using the definition of level provided below).

Level — A position within a hierarchical, nested structure. In implementation science, healthcare is delivered to patients, by providers, within organizations, and within larger systems. Each population in this chain (patients, providers, organizations, larger systems) is nested within another population. That is, multiple patients are cared for by a single provider, multiple providers work in a single organization, multiple organizations operate within a single system, and multiple systems occur within a sociopolitical context such as a nation. Therefore, each population represents a level within the implementation context. By definition, levels are associated with specific populations.

Unit — A formal group of more than one person. Examples include teams, departments, divisions, wards, or clinics. A unit can also be conceptualized as an organization or system.
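
This nesting of populations within units can be made concrete as a small data structure. The sketch below is purely illustrative: the clinic, provider, and patient names are assumptions, not drawn from the paper.

```python
# Hypothetical sketch of the nested implementation context:
# patients are nested within providers, and providers within an
# organization (a unit). Names and counts are illustrative only.
organization = {
    "name": "Clinic A",  # the unit: a formal group of more than one person
    "providers": [
        {"name": "Provider 1", "patients": ["Patient 1", "Patient 2", "Patient 3"]},
        {"name": "Provider 2", "patients": ["Patient 4", "Patient 5"]},
    ],
}

# Each population in the chain occupies its own level
n_providers = len(organization["providers"])
n_patients = sum(len(p["patients"]) for p in organization["providers"])
print(n_providers, n_patients)  # 2 5
```

Because each record sits inside exactly one parent record, every observation carries an unambiguous level: patient, provider, or organization.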

Define and state the level of each construct under study.

Variable — The actual, observed value used to represent a construct within a quantitative or qualitative analysis.

Focal level — The level of the implementation context that is the center of attention for a particular research question. The focal level often refers to a location in the nested contextual hierarchy at which a key variable of interest resides and/or is expected to exert its effects.

Unit-level construct/property/characteristic — For the purposes of our paper, a unit-level construct/property/characteristic describes a feature, quality, or state of a unit. It may be observable or latent. A unit-level construct/property/characteristic can be further categorized as global, shared, or configural (see Characteristic 5 below). Examples for implementation research include clinic implementation climate, department safety climate, team demographic composition (e.g., in terms of workforce diversity), agency size, and site proximity to a university. See the definitions for level and unit in Characteristic 1.

Describe how constructs relate to each other within and across levels.

Bottom-up process — A sequence or series of events, or actions taken, in a specific order toward a specific outcome, which begins at a lower level and terminates at a higher level. For example, increased motivation among individual clinicians within a team to use a screening tool may lead to increased leader advocacy for funding for the tool (in response to the groundswell of support from clinicians), which may in turn increase the funding available for the tool and extend its reach to more patients within the organization.

Top-down process — A sequence or series of events, or actions taken, in a specific order toward a specific outcome, which begins at a higher level and terminates at a lower level. For example, a focused organizational implementation climate may increase individual clinicians' self-efficacy to deliver an intervention with fidelity, resulting in patients receiving the intervention with high fidelity during service interactions.

Align measurement choices and construction of analytic variables with the levels of theories selected (and hypotheses generated, if applicable).

Aggregation — The process of combining responses from individuals up to the unit level through some operation (often by taking the mean). For example, clinicians' individual perceptions of their clinic's climate could be aggregated by taking the mean of their responses to represent their shared experience (i.e., the unit climate).
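
A minimal sketch of that aggregation step, with made-up clinic names and rating values (assumptions for illustration only):

```python
# Aggregating individual clinicians' climate ratings to the unit
# (clinic) level by taking the mean, as described above.
from statistics import mean

ratings = {
    "Clinic A": [4, 5, 4, 4],  # individual clinicians' perceptions
    "Clinic B": [2, 3, 2, 3],
}

# One unit-level score per clinic
unit_climate = {clinic: mean(vals) for clinic, vals in ratings.items()}
print(unit_climate)  # {'Clinic A': 4.25, 'Clinic B': 2.5}
```

The resulting dictionary holds one value per unit, so any subsequent analysis that uses it operates at the clinic level, not the clinician level.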

Compilation variable — An operationalization or measurement of a unit-level construct that is derived from observations obtained from individuals within a unit and represents the configuration or pattern of the individual responses or observations.

Compositional variable — An operationalization or measurement of a unit-level construct that is derived from observations obtained from multiple individuals within the unit who are believed to be affected by the construct and whose perceptions or experiences are believed to converge and coalesce around a shared experience. For example, organizational implementation climate is often measured by collecting individual perception ratings from providers within a unit; the average of these scores is taken to represent the shared perception (i.e., the focused climate) of the unit. Implementation researchers should provide evidence to support the validity of compositional variables when they are used. For example, an assessment of inter-rater agreement should be employed to show that providers within each unit agreed with each other on their perceptions of focused climate. This confirms that the climate perceptions were shared, and that the compositional variable of climate, which enters models at the unit level, is indeed a shared characteristic of the unit.
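
One common way to generate such inter-rater agreement evidence is the single-item r_wg index of James, Demaree, and Wolf, which compares the observed variance of ratings within a unit to the variance expected under a uniform "no agreement" null. The sketch below is a simplified, hypothetical illustration (the 5-point scale and ratings are assumed), not the paper's own procedure:

```python
# Simplified single-item r_wg: 1 - (observed variance / variance of
# a uniform null distribution over an A-point scale), where the
# uniform-null variance is (A^2 - 1) / 12.
from statistics import pvariance

def rwg(ratings, scale_points=5):
    """Within-group agreement for one item; values near 1 mean high agreement."""
    null_variance = (scale_points**2 - 1) / 12  # uniform "no agreement" null
    return 1 - pvariance(ratings) / null_variance

print(rwg([4, 4, 5, 4]))  # raters converge: 0.90625
print(rwg([2, 4, 2, 4]))  # raters split: 0.5
```

In practice, multi-item r_wg(J) variants and ICC statistics are typically reported alongside this; the sketch only conveys the underlying logic of comparing observed to null variance.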

Configural unit property/characteristic — A characteristic of a unit that represents the pattern, variability, or configuration of individuals' characteristics or contributions within the unit. Examples include the level of diversity in a team's years of experience or the network density of relationships among organization members. These properties emerge at the individual level but are not assumed to coalesce or converge among unit members; instead, individuals make distinct contributions to the pattern or configuration in the unit, which combine in complex, nonlinear processes to generate the unit-level property. In defining configural properties, investigators should explain the processes by which unique individual contributions combine to form the unit-level characteristic. Operationalized measures of configural constructs are sometimes called compilation variables (see above).
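
The two examples above can be quantified in simple ways. The sketch below (with assumed team data) uses the standard deviation of tenure for experience variability and Blau's index for categorical role diversity; both are configural because they describe a pattern across members rather than a converging shared perception:

```python
# Two illustrative configural measures: variability in years of
# experience (population standard deviation) and role diversity
# (Blau's index: 1 minus the sum of squared category proportions).
from collections import Counter
from statistics import pstdev

years_experience = [2, 3, 15, 20]  # hypothetical team tenures
tenure_spread = pstdev(years_experience)

roles = ["nurse", "nurse", "physician", "social worker"]
n = len(roles)
blau = 1 - sum((count / n) ** 2 for count in Counter(roles).values())

print(round(tenure_spread, 2), blau)  # 7.71 0.625
```

Blau's index runs from 0 (everyone in one category) toward 1 (members spread evenly across many categories), so higher values indicate a more heterogeneous unit.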

Global unit property/characteristic — A characteristic of a unit that is material, descriptive, typically easily observable, and originates at the level of the unit. Group size and unit function are examples. Global characteristics do not have their basis in individuals' (or lower-level units') characteristics or interactions, and thus there is no possibility of within-unit variation.

Referent — The Oxford English Dictionary defines referent as “the thing that a word or phrase denotes or stands for.” In the context of multilevel implementation research, we use this term to refer to the person or unit to which a measure applies. For example, if someone is asked to rate “leadership” within their unit, the items should have a referent indicating which leader they are being asked to rate. If someone is asked to rate climate within their work environment, the item should have a referent specifying which unit's climate they are rating: their immediate team, their whole clinic, or their whole organization.

Shared unit property/characteristic — A characteristic of a unit that is common to unit members based on the convergence or coalescence of individuals' experiences, perceptions, attitudes, values, cognitions, affect, or behavior. Shared unit characteristics originate at the individual level and emerge as a unit characteristic as a function of attraction, selection, attrition, socialization, social interaction, shared sense-making, group adaptation, leadership, and other psychological processes. Organizational culture and EBP implementation climate are examples. When implementation researchers incorporate shared unit characteristics into their studies, it is especially important that they specify the processes believed to generate high levels of within-group agreement and consistency across individuals, as well as provide evidence that the characteristic in question is truly shared across individuals; demonstration of within-group agreement or convergence helps support the construct validity of shared unit-level constructs. Operationalized measures of shared constructs are sometimes called compositional variables (see above).

See definition in Characteristic 2 above.

Validity evidence — Very simply, validity evidence represents the data and analyses brought to bear to show that a variable represents what it is supposed to represent. Messick (1989) defined validity as “an integrated evaluative judgment of the degree to which empirical evidence and theoretical rationales support the adequacy and appropriateness of inferences and actions based on test scores and other modes of assessment” (p. 13). The Standards for Educational and Psychological Testing of the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education (1999) calls for investigators to generate evidence of the validity of inferences generated from measures. For a given set of items comprising a measure, traditional types of validity evidence relate to content coverage, response processes, internal structure, and relations to other variables. In multilevel studies, it is important to provide validity evidence for compositional variables such as implementation climate, to show that they represent a shared characteristic of a unit.

Use a sampling strategy consistent with the selected theories or research objectives and sufficiently large and variable to examine relationships at requisite levels.

Sampling plan — We use the definition of sampling plan offered by the US National Institute of Standards and Technology in its Engineering and Statistics Handbook: “a sampling plan is a detailed outline of which measurements will be taken at what times, on which material, in what manner, and by whom…Sampling plans should be designed in such a way that the resulting data will contain a representative sample of the parameters of interest and allow for all questions, as stated in the goals, to be answered.”
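
When judging whether a clustered sample is "sufficiently large," a standard rule of thumb (general survey-sampling practice, not specific to this paper) is the design effect, DEFF = 1 + (m − 1) × ICC, which deflates the nominal sample size when observations are nested within units. The clinic counts and ICC below are hypothetical:

```python
# Design effect for a clustered sample: with m observations per
# cluster and intraclass correlation ICC, the effective sample size
# is total_n / (1 + (m - 1) * ICC).
def design_effect(cluster_size, icc):
    return 1 + (cluster_size - 1) * icc

n_clinics, clinicians_per_clinic, icc = 20, 10, 0.10
total_n = n_clinics * clinicians_per_clinic       # 200 clinicians sampled
deff = design_effect(clinicians_per_clinic, icc)
effective_n = total_n / deff

print(round(deff, 2), round(effective_n, 1))  # 1.9 105.3
```

Even a modest ICC of 0.10 nearly halves the effective sample here, which is why sampling plans for multilevel studies must consider the number of units, not just the number of individuals.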

Ensure inferences are made at the appropriate level.

Atomistic fallacy — When investigators analyze the association between variables at the individual level and then inappropriately make inferences about a higher level of analysis, such as groups or organizations.

Ecological fallacy — When investigators conduct studies at a higher level of analysis (e.g., group, organization, or country) and inappropriately make inferences about lower-level units (e.g., individuals).

Level of analysis — The level within the implementation context of which research data are representative. For example, measures of team size represent the team level of analysis, and measures of individual provider self-efficacy represent the individual provider level of analysis.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and Permissions

About this article

Cite this article.

Lengnick-Hall, R., Williams, N.J., Ehrhart, M.G. et al. Eight characteristics of rigorous multilevel implementation research: a step-by-step guide. Implementation Sci 18, 52 (2023).

Received : 06 March 2023

Accepted : 09 September 2023

Published : 23 October 2023



  • Research methods
  • Research reporting
  • Research best practices

Implementation Science

ISSN: 1748-5908

