
Neag School of Education

Educational Research Basics by Del Siegle

Types of Research

How do we know something exists? There are a number of ways of knowing…

  • Sensory Experience
  • Agreement with others
  • Expert Opinion
  • Scientific Method (we’re using this one)

The Scientific Process (replicable)

  • Identify a problem
  • Clarify the problem
  • Determine what data would help solve the problem
  • Organize the data
  • Interpret the results

General Types of Educational Research

  • Descriptive — survey, historical, content analysis, qualitative (ethnographic, narrative, phenomenological, grounded theory, and case study)
  • Associational — correlational, causal-comparative
  • Intervention — experimental, quasi-experimental, action research (sort of)


Researchers Sometimes Have a Category Called Group Comparison

  • Ex Post Facto (Causal-Comparative): GROUPS ARE ALREADY FORMED
  • Experimental: RANDOM ASSIGNMENT OF INDIVIDUALS
  • Quasi-Experimental: RANDOM ASSIGNMENT OF GROUPS (oversimplified, but fine for now)

General Format of a Research Publication

  • Background of the Problem (ending with a problem statement) — Why is this important to study? What is the problem being investigated?
  • Review of Literature — What do we already know about this problem or situation?
  • Methodology (participants, instruments, procedures) — How was the study conducted? Who were the participants? What data were collected and how?
  • Analysis — What are the results? What did the data indicate?
  • Discussion — What are the implications of these results? How do they agree or disagree with previous research? What do we still need to learn? What are the limitations of this study?

Del Siegle, PhD [email protected]

Last modified 6/18/2019

What is Educational Research? [Types, Scope & Importance]

busayo.longe

Education is an integral aspect of every society and in a bid to expand the frontiers of knowledge, educational research must become a priority. Educational research plays a vital role in the overall development of pedagogy, learning programs, and policy formulation. 

Educational research is a spectrum that borders on multiple fields of knowledge, which means that it draws from different disciplines. As a result, its findings are multi-dimensional and can be limited by the characteristics of the research participants and the research environment.

What is Educational Research?

Educational research is a type of systematic investigation that applies empirical methods to solving challenges in education. It adopts rigorous and well-defined scientific processes in order to gather and analyze data for problem-solving and knowledge advancement. 

J. W. Best defines educational research as that activity that is directed towards the development of a science of behavior in educational situations. The ultimate aim of such a science is to provide knowledge that will permit the educator to achieve his goals through the most effective methods.

The primary purpose of educational research is to expand the existing body of knowledge by providing solutions to different problems in pedagogy while improving teaching and learning practices. Educational researchers also seek answers to questions about learner motivation, development, and classroom management.

Characteristics of Educational Research

While educational research can take numerous forms and approaches, several characteristics define its process and approach. Some of them are listed below:

  • It sets out to solve a specific problem.
  • Educational research adopts primary and secondary research methods in its data collection process. This means that the investigator relies on first-hand sources of information and on secondary data to arrive at a suitable conclusion.
  • Educational research relies on empirical evidence. This results from its largely scientific approach.
  • Educational research is objective and accurate because it measures verifiable information.
  • In educational research, the researcher adopts specific methodologies, detailed procedures, and analyses to arrive at the most objective responses.
  • Educational research findings are useful in the development of principles and theories that provide better insights into pressing issues.
  • This research approach combines structured, semi-structured, and unstructured questions to gather verifiable data from respondents.
  • Many educational research findings are documented for peer review before their presentation.
  • Educational research is interdisciplinary in nature because it draws from different fields and studies complex factual relations.

Types of Educational Research 

Educational research can be broadly categorized into three types: descriptive research, correlational research, and experimental research. Each of these has distinct and overlapping features.

Descriptive Educational Research

In this type of educational research, the researcher merely seeks to collect data with regard to the status quo or present situation of things. The core of descriptive research lies in defining the state and characteristics of the research subject under study.

Because of its emphasis on the “what” of the situation, descriptive research can be termed an observational research method. In descriptive educational research, the researcher makes use of quantitative research methods, including surveys and questionnaires, to gather the required data.

Typically, descriptive educational research is the first step in solving a specific problem. Here are a few examples of descriptive research: 

  • A study of a reading program to understand student literacy levels.
  • A study of students’ classroom performance.
  • Research to gather data on students’ interests and preferences. 

From these examples, you will notice that the researcher does not need to create a simulation of the natural environment of the research subjects; rather, he or she observes them as they engage in their routines. Also, the researcher is not concerned with establishing a causal relationship between the research variables.

Correlational Educational Research

This is a type of educational research that seeks insights into the statistical relationship between two research variables. In correlational research, the researcher studies two variables intending to establish a connection between them. 

The correlation can be positive, negative, or non-existent. A positive correlation occurs when an increase in variable A leads to an increase in variable B, while a negative correlation occurs when an increase in variable A results in a decrease in variable B.

When a change in one of the variables does not trigger a corresponding change in the other, the correlation is non-existent. Also, in correlational educational research, the researcher does not need to alter the natural environment of the variables; that is, there is no need for external conditioning.

Examples of educational correlational research include: 

  • Research to discover the relationship between students’ behaviors and classroom performance.
  • A study into the relationship between students’ social skills and their learning behaviors. 

Experimental Educational Research

Experimental educational research is a research approach that seeks to establish the causal relationship between two variables in the research environment. It adopts quantitative research methods in order to determine the cause and effect in terms of the research variables being studied. 

Experimental educational research typically involves two groups – the control group and the experimental group. The researcher introduces some changes to the experimental group such as a change in environment or a catalyst, while the control group is left in its natural state. 

The introduction of these catalysts allows the researcher to determine the causative factor(s) in the experiment. At the core of experimental educational research lies the formulation of a hypothesis, and so the overall research design relies on statistical analysis to confirm or disprove this hypothesis.

Examples of Experimental Educational Research

  • A study to determine the best teaching and learning methods in a school.
  • A study to understand how extracurricular activities affect the learning process. 
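A minimal sketch of the control-versus-experimental comparison described above might compare the two groups' mean scores with a t statistic. The scores below are invented, and Welch's t is hand-rolled here rather than taken from any particular statistics package:

```python
from statistics import mean, variance

# Hypothetical post-test scores (out of 100) for two groups of students.
control = [62, 65, 58, 70, 64, 61, 66, 63]        # taught with the usual method
experimental = [71, 75, 69, 78, 74, 72, 76, 70]   # taught with the new method

def welch_t(a, b):
    """Welch's t statistic for comparing two independent group means."""
    return (mean(a) - mean(b)) / (variance(a) / len(a) + variance(b) / len(b)) ** 0.5

t = welch_t(experimental, control)
print(round(t, 2))  # a t well above ~2 suggests a real difference between the groups
```

In practice, the researcher would compare t against a t-distribution to obtain a p-value before accepting or rejecting the hypothesis.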

Based on functionality, educational research can be classified into fundamental research, applied research, and action research. The primary purpose of fundamental research is to provide insights into the research variables; that is, to gain more knowledge. Fundamental research does not solve any specific problems.

Just as the name suggests, applied research is a research approach that seeks to solve specific problems. Findings from applied research are useful in solving practical challenges in the educational sector such as improving teaching methods, modifying learning curricula, and simplifying pedagogy. 

Action research is tailored to solve immediate problems that are specific to a context, such as educational challenges in a local primary school. The goal of action research is to proffer solutions that work in that context, not to solve general or universal challenges in the educational sector.

Importance of Educational Research

  • Educational research plays a crucial role in knowledge advancement across different fields of study. 
  • It provides answers to practical educational challenges using scientific methods.
  • Findings from educational research, especially applied research, are instrumental in policy reformulation.
  • For the researcher and other parties involved in this research approach, educational research improves learning, knowledge, skills, and understanding.
  • Educational research improves teaching and learning methods by empowering you with data to help you teach and lead more strategically and effectively.
  • Educational research helps students apply their knowledge to practical situations.

Educational Research Methods 

  • Surveys/Questionnaires

A survey is a research method that is used to collect data from a predetermined audience about a specific research context. It usually consists of a set of standardized questions that help you to gain insights into the experiences, thoughts, and behaviors of the audience. 

Surveys can be administered physically using paper forms, through face-to-face conversations, telephone conversations, or online forms. Online forms are easier to administer because they help you to collect accurate data and to reach a larger sample size. Creating your online survey on a data-gathering platform like Formplus also allows you to analyze survey respondents’ data easily.

In order to gather accurate data via your survey, you must first identify the research context and the research subjects that would make up your sample. Next, you need to choose an online survey tool like Formplus to help you create and administer your survey with little or no hassle.

  • Interviews

An interview is a qualitative data collection method that helps you to gather information from respondents by asking questions in a conversation. It is typically a face-to-face conversation with the research subjects in order to gather insights that will prove useful to the specific research context. 

Interviews can be structured, semi-structured, or unstructured. A structured interview is a type of interview that follows a premeditated sequence; that is, it makes use of a set of standardized questions to gather information from the research subjects.

An unstructured interview is a type of interview that is fluid; that is, it is non-directive. During an unstructured interview, the researcher does not make use of a set of predetermined questions; rather, he or she spontaneously asks questions to gather relevant data from the respondents.

A semi-structured interview is the mid-point between structured and unstructured interviews. Here, the researcher makes use of a set of standardized questions, yet he or she still makes inquiries outside these premeditated questions as dictated by the flow of the conversation in the research context.

Data from Interviews can be collected using audio recorders, digital cameras, surveys, and questionnaires. 

  • Observation

Observation is a method of data collection that entails systematically selecting, watching, listening, reading, touching, and recording behaviors and characteristics of living beings, objects, or phenomena. In the classroom, teachers can adopt this method to understand students’ behaviors in different contexts. 

Observation can be qualitative or quantitative in approach. In quantitative observation, the researcher aims to collect statistical information from respondents, while in qualitative observation, the researcher aims to collect qualitative data from respondents.

Qualitative observation can further be classified into participant or non-participant observation. In participant observation, the researcher becomes a part of the research environment and interacts with the research subjects to gather info about their behaviors. In non-participant observation, the researcher does not actively take part in the research environment; that is, he or she is a passive observer. 

How to Create Surveys and Questionnaires with Formplus

  • On your dashboard, choose the “create new form” button to access the form builder. You can also choose from the available survey templates and modify them to suit your needs.
  • Save your online survey to access the form customization section. Here, you can change the physical appearance of your form by adding preferred background images and inserting your organization’s logo.
  • Formplus has a form analytics dashboard that allows you to view insights from your data collection process such as the total number of form views and form submissions. You can also use the reports summary tool to generate custom graphs and charts from your survey data. 

Steps in Educational Research

Like other types of research, educational research involves several steps. Following these steps allows the researcher to gather objective information and arrive at valid findings that are useful to the research context. 

  • Define the research problem clearly. 
  • Formulate your hypothesis. A hypothesis is the researcher’s reasonable guess based on the available evidence, which he or she seeks to prove in the course of the research.
  • Determine the methodology to be adopted. Educational research methods include interviews, surveys, and questionnaires.
  • Collect data from the research subjects using one or more educational research methods. You can collect research data using Formplus forms.
  • Analyze and interpret your data to arrive at valid findings. In the Formplus analytics dashboard, you can view important data collection insights and you can also create custom visual reports with the reports summary tool. 
  • Create your research report. A research report details the entire process of the systematic investigation plus the research findings. 
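The analyze-and-interpret step above can be sketched with plain Python. The snippet below summarizes hypothetical Likert-scale responses (1 = strongly disagree … 5 = strongly agree) to a single survey item; the data and the item wording are invented for illustration:

```python
from collections import Counter
from statistics import mean

# Hypothetical responses to the survey item "I feel motivated in class."
responses = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4, 1, 4]

counts = Counter(responses)        # frequency of each rating on the scale
print(sorted(counts.items()))      # [(1, 1), (2, 1), (3, 2), (4, 5), (5, 3)]
print(round(mean(responses), 2))   # 3.67 — the average rating for the item
```

A distribution-plus-mean summary like this gives the researcher the raw material for interpretation and for the final research report.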

Conclusion 

Educational research is crucial to the overall advancement of different fields of study and learning, as a whole. Data in educational research can be gathered via surveys and questionnaires, observation methods, or interviews – structured, unstructured, and semi-structured. 

You can create a survey/questionnaire for educational research with Formplus. As a top-tier data tool, Formplus makes it easy for you to create your educational research survey in the drag-and-drop form builder and share it with survey respondents using one or more of the form-sharing options.





Education Scholarship in Healthcare, pp. 13–23

Introduction to Education Research

Sharon K. Park, Khanh-Van Le-Bucklin & Julie Youm

First Online: 29 November 2023

Educators rely on the discovery of new knowledge of teaching practices and frameworks to improve and evolve education for trainees. An important consideration when embarking on a career conducting education research is finding a scholarship niche. An education researcher can then develop the conceptual framework that describes the state of knowledge, identify gaps in understanding of the phenomenon or problem, and develop an outline for the methodological underpinnings of the research project. In response to Ernest Boyer’s seminal report, Scholarship Reconsidered: Priorities of the Professoriate, research was conducted on the criteria and decision processes for grants and publications. Six standards known as Glassick’s criteria provide a tangible measure by which educators can assess the quality and structure of their education research: clear goals, adequate preparation, appropriate methods, significant results, effective presentation, and reflective critique. Ultimately, the promise of education research is to realize advances and innovation for learners that are informed by evidence-based knowledge and practices.

  • Scholarship
  • Glassick’s criteria


Boyer EL. Scholarship reconsidered: priorities of the professoriate. Princeton: Carnegie Foundation for the Advancement of Teaching; 1990.


Munoz-Najar Galvez S, Heiberger R, McFarland D. Paradigm wars revisited: a cartography of graduate research in the field of education (1980–2010). Am Educ Res J. 2020;57(2):612–52.


Ringsted C, Hodges B, Scherpbier A. ‘The research compass’: an introduction to research in medical education: AMEE Guide no. 56. Med Teach. 2011;33(9):695–709.


Bordage G. Conceptual frameworks to illuminate and magnify. Med Educ. 2009;43(4):312–9.

Varpio L, Paradis E, Uijtdehaage S, Young M. The distinctions between theory, theoretical framework, and conceptual framework. Acad Med. 2020;95(7):989–94.

Ravitch SM, Riggins M. Reason & Rigor: how conceptual frameworks guide research. Thousand Oaks: Sage Publications; 2017.

Park YS, Zaidi Z, O'Brien BC. RIME foreword: what constitutes science in educational research? Applying rigor in our research approaches. Acad Med. 2020;95(11S):S1–5.

National Institute of Allergy and Infectious Diseases. Writing a winning application—find your niche. 2020a. https://www.niaid.nih.gov/grants-contracts/find-your-niche. Accessed 23 Jan 2022.

National Institute of Allergy and Infectious Diseases. Writing a winning application—conduct a self-assessment. 2020b. https://www.niaid.nih.gov/grants-contracts/winning-app-self-assessment . Accessed 23 Jan 2022.

Glassick CE, Huber MT, Maeroff GI. Scholarship assessed: evaluation of the professoriate. San Francisco: Jossey Bass; 1997.

Simpson D, Meurer L, Braza D. Meeting the scholarly project requirement-application of scholarship criteria beyond research. J Grad Med Educ. 2012;4(1):111–2. https://doi.org/10.4300/JGME-D-11-00310.1 .


Fincher RME, Simpson DE, Mennin SP, Rosenfeld GC, Rothman A, McGrew MC et al. The council of academic societies task force on scholarship. Scholarship in teaching: an imperative for the 21st century. Academic Medicine. 2000;75(9):887–94.

Hutchings P, Shulman LS. The scholarship of teaching new elaborations and developments. Change. 1999;11–5.


Author information

Authors and Affiliations

School of Pharmacy, Notre Dame of Maryland University, Baltimore, MD, USA

Sharon K. Park

University of California, Irvine School of Medicine, Irvine, CA, USA

Khanh-Van Le-Bucklin & Julie Youm


Corresponding author

Correspondence to Sharon K. Park.

Editor information

Editors and Affiliations

Johns Hopkins University School of Medicine, Baltimore, MD, USA

April S. Fitzgerald

Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA

Gundula Bosch


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter.

Park, S.K., Le-Bucklin, KV., Youm, J. (2023). Introduction to Education Research. In: Fitzgerald, A.S., Bosch, G. (eds) Education Scholarship in Healthcare. Springer, Cham. https://doi.org/10.1007/978-3-031-38534-6_2


DOI : https://doi.org/10.1007/978-3-031-38534-6_2

Published : 29 November 2023

Publisher Name : Springer, Cham

Print ISBN : 978-3-031-38533-9

Online ISBN : 978-3-031-38534-6

eBook Packages : Medicine Medicine (R0)



In This Article: Methodologies for Conducting Education Research

  • Introduction and General Overviews

  • Experimental Research
  • Quasi-Experimental Research
  • Hierarchical Linear Modeling
  • Survey Research
  • Assessment and Measurement
  • Qualitative Research Methodologies
  • Program Evaluation
  • Research Syntheses
  • Implementation


Methodologies for Conducting Education Research, by Marisa Cannata. Last reviewed: 19 August 2020. Last modified: 15 December 2011. DOI: 10.1093/obo/9780199756810-0061

Education is a diverse field, and the methodologies used in education research are necessarily diverse. The reasons for this methodological diversity are many, including the multitude of disciplines that compose the field of education and the tensions between basic and applied research. For example, accepted methods of systematic inquiry in history, sociology, economics, and psychology vary, yet all of these disciplines help answer important questions posed in education. This methodological diversity has led to debates about the quality of education research and the perception of shifting standards of quality research. The citations selected for inclusion in this article provide a broad overview of methodologies and discussions of quality research standards across the different types of questions posed in educational research. The citations represent summaries of ongoing debates, articles or books that have had a significant influence on education research, and guides for those who wish to implement particular methodologies. Most of the sections focus on specific methodologies and provide advice or examples for studies employing them.

The interdisciplinary nature of education has implications for education research: there is no single best research design for all the questions that guide it. Throughout many often-heated debates about methodologies, the common strand is that research designs should follow the research questions. The following works offer an introduction to the debates, divides, and difficulties of education research. Schoenfeld 1999, Mitchell and Haro 1999, and Shulman 1988 provide perspectives on diversity within the field of education and the implications of this diversity for debates about education research and for the difficulties of conducting such research. National Research Council 2002 outlines the principles of scientific inquiry and how they apply to education. Published around the time No Child Left Behind required education policies to be based on scientific research, this book laid the foundation for much of the current emphasis on experimental and quasi-experimental research in education. For another perspective on defining good education research, readers may turn to Hostetler 2005. Readers who want a general overview of various methodologies in education research and direction on how to choose between them should read Creswell 2009 and Green, et al. 2006. The American Educational Research Association (AERA), the main professional association focused on education research, has developed standards for how to report methods and findings in empirical studies. Those wishing to follow those standards should consult American Educational Research Association 2006.

American Educational Research Association. 2006. Standards for reporting on empirical social science research in AERA publications. Educational Researcher 35.6: 33–40.

DOI: 10.3102/0013189X035006033

The American Educational Research Association is the professional association for researchers in education. Publications by AERA are a well-regarded source of research. This article outlines the requirements for reporting original research in AERA publications.

Creswell, J. W. 2009. Research design: Qualitative, quantitative, and mixed methods approaches . 3d ed. Los Angeles: SAGE.

Presents an overview of qualitative, quantitative and mixed-methods research designs, including how to choose the design based on the research question. This book is particularly helpful for those who want to design mixed-methods studies.

Green, J. L., G. Camilli, and P. B. Elmore. 2006. Handbook of complementary methods for research in education . Mahwah, NJ: Lawrence Erlbaum.

Provides a broad overview of several methods of educational research. The first part provides an overview of issues that cut across specific methodologies, and subsequent chapters delve into particular research approaches.

Hostetler, K. 2005. What is “good” education research? Educational Researcher 34.6: 16–21.

DOI: 10.3102/0013189X034006016

Goes beyond methodological concerns to argue that “good” educational research should also consider the conception of human well-being. By using a philosophical lens on debates about quality education research, this article is useful for moving beyond qualitative-quantitative divides.

Mitchell, T. R., and A. Haro. 1999. Poles apart: Reconciling the dichotomies in education research. In Issues in education research . Edited by E. C. Lagemann and L. S. Shulman, 42–62. San Francisco: Jossey-Bass.

Outlines several dichotomies in education research, including the tensions between applied research and basic research and between understanding the purposes of education and the processes of education.

National Research Council. 2002. Scientific research in education . Edited by R. J. Shavelson and L. Towne. Committee on Scientific Principles for Education Research. Center for Education. Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press.

This book was released around the time the No Child Left Behind law directed that policy decisions should be guided by scientific research. It is credited with starting the current debate about methods in educational research and the preference for experimental studies.

Schoenfeld, A. H. 1999. The core, the canon, and the development of research skills: Issues in the preparation of education researchers. In Issues in education research. Edited by E. C. Lagemann and L. S. Shulman, 166–202. San Francisco: Jossey-Bass.

Describes difficulties in preparing educational researchers due to the lack of a core and a canon in education. While the focus is on preparing researchers, it provides valuable insight into why debates over education research persist.

Shulman, L. S. 1988. Disciplines of inquiry in education: An overview. In Complementary methods for research in education. Edited by R. M. Jaeger, 3–17. Washington, DC: American Educational Research Association.

Outlines what distinguishes research from other modes of disciplined inquiry and the relationship between academic disciplines, guiding questions, and methods of inquiry.

National Academies Press: OpenBook

Scientific Research in Education (2002)

Chapter 5: Designs for the Conduct of Scientific Research in Education

The salient features of education delineated in Chapter 4 and the guiding principles of scientific research laid out in Chapter 3 set boundaries for the design and conduct of scientific education research. Thus, the design of a study (e.g., randomized experiment, ethnography, multiwave survey) does not itself make it scientific. However, if the design directly addresses a question that can be addressed empirically, is linked to prior research and relevant theory, is competently implemented in context, logically links the findings to interpretation ruling out counterinterpretations, and is made accessible to scientific scrutiny, it could then be considered scientific. That is: Is there a clear set of questions underlying the design? Are the methods appropriate to answer the questions and rule out competing answers? Does the study take previous research into account? Is there a conceptual basis? Are data collected in light of local conditions and analyzed systematically? Is the study clearly described and made available for criticism? The more closely aligned it is with these principles, the higher the quality of the scientific study. And the particular features of education require that the research process be explicitly designed to anticipate the implications of these features and to model and plan accordingly.

RESEARCH DESIGN

Our scientific principles include research design—the subject of this chapter—as but one aspect of a larger process of rigorous inquiry. However, research design (and corresponding scientific methods) is a crucial aspect of science. It is also the subject of much debate in many fields, including education. In this chapter, we describe some of the most frequently used and trusted designs for scientifically addressing broad classes of research questions in education.

In doing so, we develop three related themes. First, as we posit earlier, a variety of legitimate scientific approaches exist in education research. Therefore, the description of methods discussed in this chapter is illustrative of a range of trusted approaches; it should not be taken as an authoritative list of tools to the exclusion of any others. As we stress in earlier chapters, the history of science has shown that research designs evolve, as do the questions they address, the theories they inform, and the overall state of knowledge.

Second, we extend the argument we make in Chapter 3 that designs and methods must be carefully selected and implemented to best address the question at hand. Some methods are better than others for particular purposes, and scientific inferences are constrained by the type of design employed. Methods that may be appropriate for estimating the effect of an educational intervention, for example, would rarely be appropriate for estimating dropout rates. While researchers—in education or any other field—may overstate the conclusions from an inquiry, the strength of scientific inference must be judged in terms of the design used to address the question under investigation. A comprehensive explication of a hierarchy of appropriate designs and analytic approaches under various conditions would require the depth of treatment found in research methods textbooks. That is not our objective. Rather, our goal is to illustrate that, among available techniques, certain designs are better suited than others to particular kinds of questions under particular conditions.

Third, in order to generate a rich source of scientific knowledge in education that is refined and revised over time, different types of inquiries and methods are required. At any time, the types of questions and methods depend in large part on an accurate assessment of the overall state of knowledge and professional judgment about how a particular line of inquiry could advance understanding. In areas with little prior knowledge, for example, research will generally need to involve careful description to formulate initial ideas. In such situations, descriptive studies might be undertaken to help bring education problems or trends into sharper relief or to generate plausible theories about the underlying structure of behavior or learning. If the effects of education programs that have been implemented on a large scale are to be understood, however, investigations must be designed to test a set of causal hypotheses. Thus, while we treat the topic of design in this chapter as applying to individual studies, research design has a broader quality as it relates to lines of inquiry that develop over time.

While a full development of these notions goes considerably beyond our charge, we offer this brief overview to place the discussion of methods that follows into perspective. Also, in the concluding section of this chapter, we make a few targeted suggestions for the kinds of work we believe are most needed in education research to make further progress toward robust knowledge.

TYPES OF RESEARCH QUESTIONS

In discussing design, we have to be true to our admonition that the research question drives the design, not vice versa. To simplify matters, the committee recognized that a great number of education research questions fall into three (interrelated) types: description (What is happening?), cause (Is there a systematic effect?), and process or mechanism (Why or how is it happening?).

The first question—What is happening?—invites description of various kinds, so as to properly characterize a population of students, understand the scope and severity of a problem, develop a theory or conjecture, or identify changes over time among different educational indicators—for example, achievement, spending, or teacher qualifications. Description also can include associations among variables, such as the characteristics of schools (e.g., size, location, economic base) that are related to (say) the provision of music and art instruction. The second question is focused on establishing causal effects: Does x cause y? The search for cause, for example, can include seeking to understand the effect of teaching strategies on student learning or of state policy changes on district resource decisions. The third question confronts the need to understand the mechanism or process by which x causes y. Studies that seek to model how various parts of a complex system—like U.S. education—fit together help explain the conditions that facilitate or impede change in teaching, learning, and schooling. Within each type of question, we separate the discussion into subsections that show the use of different methods given more fine-grained goals and conditions of an inquiry.

Although for ease of discussion we treat these types of questions separately, in practice they are closely related. As our examples show, within particular studies, several kinds of queries can be addressed. Furthermore, various genres of scientific education research often address more than one of these types of questions. Evaluation research—the rigorous and systematic evaluation of an education program or policy—exemplifies the use of multiple questions and corresponding designs. As applied in education, this type of scientific research is distinguished from other scientific research by its purpose: to contribute to program improvement (Weiss, 1998a). Evaluation often entails an assessment of whether the program caused improvements in the outcome or outcomes of interest (Is there a systematic effect?). It also can involve detailed descriptions of the way the program is implemented in practice and in what contexts (What is happening?) and the ways that program services influence outcomes (How is it happening?).

Throughout the discussion, we provide several examples of scientific education research, connecting them to scientific principles (Chapter 3) and the features of education (Chapter 4). We have chosen these studies because they align closely with several of the scientific principles. These examples include studies that generate hypotheses or conjectures as well as those that test them. Both tasks are essential to science, but as a general rule they cannot be accomplished simultaneously.

Moreover, just as we argue that the design of a study does not itself make it scientific, an investigation that seeks to address one of these questions is not necessarily scientific either. For example, many descriptive studies—however useful they may be—bear little resemblance to careful scientific study. They might record observations without any clear conceptual viewpoint, without reproducible protocols for recording data, and so forth. Again, studies may be considered scientific by assessing the rigor with which they meet scientific principles and are designed to account for the context of the study.

Finally, we have tended to speak of research in terms of a simple dichotomy—scientific or not scientific—but the reality is more complicated. Individual research projects may adhere to each of the principles in varying degrees, and the extent to which they meet these goals goes a long way toward defining the scientific quality of a study. For example, while all scientific studies must pose clear questions that can be investigated empirically and be grounded in existing knowledge, more rigorous studies will begin with more precise statements of the underlying theory driving the inquiry and will generally have a well-specified hypothesis before the data collection and testing phase begins. Studies that do not start with clear conceptual frameworks and hypotheses may still be scientific, although they are obviously at a more rudimentary level and will generally require follow-on study to contribute significantly to scientific knowledge.

Similarly, lines of research encompassing collections of studies may be more or less productive and useful in advancing knowledge. An area of research that, for example, does not advance beyond the descriptive phase toward more precise scientific investigation of causal effects and mechanisms for a long period of time is clearly not contributing as much to knowledge as one that builds on prior work and moves toward more complete understanding of the causal structure. This is not to say that descriptive work cannot generate important breakthroughs. However, the rate of progress should—as we discuss at the end of this chapter—enter into consideration of the support for advanced lines of inquiry. The three classes of questions we discuss in the remainder of this chapter are ordered in a way that reflects the sequence that research studies tend to follow as well as their interconnected nature.

WHAT IS HAPPENING?

Answers to “What is happening?” questions can be found by following Yogi Berra’s counsel in a systematic way: if you want to know what’s going on, you have to go out and look at what is going on. Such inquiries are descriptive. They are intended to provide a range of information, from documenting trends and issues in a range of geopolitical jurisdictions, populations, and institutions, to rich descriptions of the complexities of educational practice in a particular locality, to relationships among such elements as socioeconomic status, teacher qualifications, and achievement.

Estimates of Population Characteristics

Descriptive scientific research in education can make generalizable statements about the national scope of a problem, student achievement levels across the states, or the demographics of children, teachers, or schools. Methods that enable the collection of data from a randomly selected sample of the population provide the best way of addressing such questions. Questionnaires and telephone interviews are common survey instruments developed to gather information from a representative sample of some population of interest. Policy makers at the national, state, and sometimes district levels depend on this method to paint a picture of the educational landscape. Aggregate estimates of the academic achievement level of children at the national level (e.g., National Center for Education Statistics [NCES], National Assessment of Educational Progress [NAEP]), the supply, demand, and turnover of teachers (e.g., NCES Schools and Staffing Survey), the nation’s dropout rates (e.g., NCES Common Core of Data), how U.S. children fare on tests of mathematics and science achievement relative to children in other nations (e.g., Third International Mathematics and Science Study) and the distribution of doctorate degrees across the nation (e.g., National Science Foundation’s Science and Engineering Indicators) are all based on surveys from populations of school children, teachers, and schools.

To yield credible results, such data collection usually depends on a random sample (alternatively called a probability sample) of the target population. If every observation (e.g., person, school) has a known chance of being selected into the study, researchers can make estimates of the larger population of interest based on statistical technology and theory. The validity of inferences about population characteristics based on sample data depends heavily on response rates, that is, the percentage of those randomly selected for whom data are collected. The measures used must have known reliability—that is, the extent to which they reproduce results. Finally, the value of a data collection instrument hinges not only on the sampling method, participation rate, and reliability, but also on validity: that the questionnaire or survey items measure what they are supposed to measure.
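The sampling logic described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (none of these numbers come from NAEP or any real survey): it draws a simple random sample from a synthetic population and computes a point estimate with a rough 95 percent confidence interval.

```python
import random
import statistics

# Illustrative sketch only: all numbers are invented. "population" stands
# in for a complete sampling frame, e.g., all eighth graders' test scores.
random.seed(42)
population = [random.gauss(275, 35) for _ in range(100_000)]

# A simple random sample: every unit has a known, equal chance of selection,
# which is what licenses inference from sample to population.
sample = random.sample(population, 1_000)

est_mean = statistics.mean(sample)
se = statistics.stdev(sample) / len(sample) ** 0.5  # standard error of the mean
ci_low, ci_high = est_mean - 1.96 * se, est_mean + 1.96 * se
print(f"estimate: {est_mean:.1f}  95% CI: ({ci_low:.1f}, {ci_high:.1f})")
```

Note that this sketch ignores the practical threats discussed above, such as nonresponse and measurement reliability, which real surveys must address before such an interval can be trusted.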

The NAEP survey tracks national trends in student achievement across several subject domains and collects a range of data on school, student, and teacher characteristics (see Box 5-1). This rich source of information enables several kinds of descriptive work. For example, researchers can estimate the average score of eighth graders on the mathematics assessment (i.e., measures of central tendency) and compare that performance to prior years. Part of the study we feature below about college women’s career choices included a similar estimation of population characteristics. In that study, the researchers developed a survey to collect data from a representative sample of women at the two universities to aid them in assessing the generalizability of their findings from the in-depth studies of the 23 women.

Simple Relationships

The NAEP survey also illustrates how researchers can describe patterns of relationships between variables. For example, NCES reports that in 2000, eighth graders whose teachers majored in mathematics or mathematics education scored higher, on average, than did students whose teachers did not major in these fields (U.S. Department of Education, 2000). This finding is the result of descriptive work that explores the correlation between variables: in this case, the relationship between students’ mathematics performance and their teachers’ undergraduate major.

Such associations cannot be used to infer cause. However, there is a common tendency to make unsubstantiated jumps from establishing a relationship to concluding cause. As committee member Paul Holland quipped during the committee’s deliberations, “Casual comparisons inevitably invite careless causal conclusions.” To illustrate the problem with drawing causal inferences from simple correlations, we use an example from work that compares Catholic schools to public schools. We feature this study later in the chapter as one that competently examines causal mechanisms. Before addressing questions of mechanism, foundational work involved simple correlational results that compared the performance of Catholic high school students on standardized mathematics tests with their counterparts in public schools. These simple correlations revealed that average mathematics achievement was considerably higher for Catholic school students than for public school students (Bryk, Lee, and Holland, 1993). However, the researchers were careful not to conclude from this analysis that attending a Catholic school causes better student outcomes, because there are a host of potential explanations (other than attending a Catholic school) for this relationship between school type and achievement. For example, since Catholic schools can screen children for aptitude, they may have a more able student population than public schools at the outset. (This is an example of the classic selectivity bias that commonly threatens the validity of causal claims in nonrandomized studies; we return to this issue in the next section.) In short, there are other hypotheses that could explain the observed differences in achievement between students in different sectors that must be considered systematically in assessing the potential causal relationship between Catholic schooling and student outcomes.
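The selectivity problem described above can be made concrete with a small simulation. This is a hypothetical sketch, not the Bryk, Lee, and Holland analysis; every parameter is invented. In the simulation, school sector has no causal effect on scores, yet a naive comparison of sector means still shows a large gap, because a confounder ("aptitude") drives both who enters the selective sector and how students score.

```python
import random

# Hypothetical simulation of selectivity bias. Scores depend only on
# aptitude and noise; the sector a student attends contributes nothing.
random.seed(0)

selective_scores, other_scores = [], []
for _ in range(50_000):
    aptitude = random.gauss(0, 1)
    # Higher-aptitude students are more likely to be admitted/selected.
    in_selective = random.random() < (0.8 if aptitude > 0.5 else 0.2)
    score = 250 + 30 * aptitude + random.gauss(0, 10)  # note: no sector term
    (selective_scores if in_selective else other_scores).append(score)

gap = (sum(selective_scores) / len(selective_scores)
       - sum(other_scores) / len(other_scores))
print(f"raw sector gap: {gap:.1f} points")  # sizable, despite zero causal effect
```

The raw gap is entirely an artifact of who selects into each sector, which is why the researchers above declined to read their correlational comparison causally.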

Descriptions of Localized Educational Settings

In some cases, scientists are interested in the fine details (rather than the distribution or central tendency) of what is happening in a particular organization, group of people, or setting. This type of work is especially important when good information about the group or setting is nonexistent or scant. In this type of research, then, it is important to obtain firsthand, in-depth information from the particular focal group or site. For such purposes, selecting a random sample from the population of interest may not be the proper method of choice; rather, samples may be purposively selected to illuminate phenomena in depth. For example, to better understand a high-achieving school in an urban setting with children of predominantly low socioeconomic status, a researcher might conduct a detailed case study or an ethnographic study (a case study with a focus on culture) of such a school (Yin and White, 1986; Miles and Huberman, 1994). This type of scientific description can provide rich depictions of the policies, procedures, and contexts in which the school operates and generate plausible hypotheses about what might account for its success. Researchers often spend long periods of time in the setting or group in order to understand what decisions are made, what beliefs and attitudes are formed, what relationships are developed, and what forms of success are celebrated. These descriptions, when used in conjunction with causal methods, are often critical to understanding such educational outcomes as student achievement because they illuminate key contextual factors.

Box 5-2 provides an example of a study that described in detail (and also modeled several possible mechanisms; see later discussion) a small group of women, half of whom began their college careers in science and half in what were considered more traditional majors for women. This descriptive part of the inquiry involved an ethnographic study of the lives of 23 first-year women enrolled in two large universities.

Scientific description of this type can generate systematic observations about the focal group or site, and patterns in results may be generalizable to other similar groups or sites, or to future occasions. As with any other method, a scientifically rigorous case study has to be designed to fit the research question it addresses. That is, the investigator has to choose sites, occasions, respondents, and times with a clear research purpose in mind and be sensitive to his or her own expectations and biases (Maxwell, 1996; Silverman, 1993). Data should typically be collected from varied sources, by varied methods, and corroborated by other investigators. Furthermore, the account of the case needs to draw on original evidence and provide enough detail so that the reader can make judgments about the validity of the conclusions (Yin, 2000).

Results may also be used as the basis for new theoretical developments, new experiments, or improved measures on surveys that indicate the extent of generalizability. In the work done by Holland and Eisenhart (1990), for example (see Box 5-2 ), a number of theoretical models were developed and tested to explain how women decide to pursue or abandon nontraditional careers in the fields they had studied in college. Their finding that commitment to college life—not fear of competing with men or other hypotheses that had previously been set forth—best explained these decisions was new knowledge. It has been shown in subsequent studies to
generalize somewhat to similar schools, though additional models seem to exist at some schools (Seymour and Hewitt, 1997).

Although such purposively selected samples may not be scientifically generalizable to other locations or people, these vivid descriptions often appeal to practitioners. Scientifically rigorous case studies have strengths and weaknesses for such use. They can, for example, help local decision makers by providing them with ideas and strategies that have promise in their educational setting. They cannot (unless combined with other methods) provide estimates of the likelihood that an educational approach might work under other conditions or that they have identified the right underlying causes. As we argue throughout this volume, research designs can often be strengthened considerably by using multiple methods— integrating the use of both quantitative estimates of population characteristics and qualitative studies of localized context.

Other descriptive designs may involve interviews with respondents or document reviews in a fairly large number of cases, such as 30 school districts or 60 colleges. Cases are often selected to represent a variety of conditions (e.g., urban/rural; east/west; affluent/poor). Such descriptive studies can be longitudinal, returning to the same cases over several years to see how conditions change.

These examples of descriptive work meet the principles of science, and have clearly contributed important insights to the base of scientific knowledge. If research is to be used to answer questions about “what works,” however, it must advance to other levels of scientific investigation such as those considered next.

IS THERE A SYSTEMATIC EFFECT?

Research designs that attempt to identify systematic effects have at their root an intent to establish a cause-and-effect relationship. Causal work is built on both theory and descriptive studies. In other words, the search for causal effects cannot be conducted in a vacuum: ideally, a strong theoretical base as well as extensive descriptive information are in place to provide the intellectual foundation for understanding causal relationships.

The simple question of “does x cause y ?” typically involves several different kinds of studies undertaken sequentially (Holland, 1993). In basic
terms, several conditions must be met to establish cause. Usually, a relationship or correlation between the variables is first identified. 3 Researchers also confirm that x preceded y in time (temporal sequence) and, crucially, that all presently conceivable rival explanations for the observed relationship have been “ruled out.” As alternative explanations are eliminated, confidence increases that it was indeed x that caused y . “Ruling out” competing explanations is a central metaphor in medical research, diagnosis, and other fields, including education, and it is the key element of causal queries (Campbell and Stanley, 1963; Cook and Campbell, 1979, 1986).

The use of multiple qualitative methods, especially in conjunction with a comparative study of the kind we describe in this section, can be particularly helpful in ruling out alternative explanations for the results observed (Yin, 2000; Weiss, in press). Such investigative tools can enable stronger causal inferences by enhancing the analysis of whether competing explanations can account for patterns in the data (e.g., unreliable measures or contamination of the comparison group). Similarly, qualitative methods can examine possible explanations for observed effects that arise outside of the purview of the study. For example, while an intervention was in progress, another program or policy may have offered participants opportunities similar to, and reinforcing of, those that the intervention provided. Thus, the “effects” that the study observed may have been due to the other program (“history” as the counterinterpretation; see Chapter 3 ). When all plausible rival explanations are identified and various forms of data can be used as evidence to rule them out, the causal claim that the intervention caused the observed effects is strengthened. In education, research that explores students’ and teachers’ in-depth experiences, observes their actions, and documents the constraints that affect their day-to-day activities provides a key source of generating plausible causal hypotheses.

We have organized the remainder of this section into two parts. The first treats randomized field trials, an ideal method when entities being examined can be randomly assigned to groups. Experiments are especially well-suited to situations in which the causal hypothesis is relatively simple. The second describes situations in which randomized field trials are not
feasible or desirable, and showcases a study that employed causal modeling techniques to address a complex causal question. We have distinguished randomized studies from others primarily to signal the difference in the strength with which causal claims can typically be made from them. The key difference between randomized field trials and other methods with respect to making causal claims is the extent to which the assumptions that underlie them are testable. By this simple criterion, nonrandomized studies are weaker in their ability to establish causation than randomized field trials, in large part because the role of other factors in influencing the outcome of interest is more difficult to gauge in nonrandomized studies. Other conditions that affect the choice of method are discussed in the course of the section.

Causal Relationships When Randomization Is Feasible

A fundamental scientific concept in making causal claims—that is, inferring that x caused y —is comparison. Comparing outcomes (e.g., student achievement) between two groups that are similar except for the causal variable (e.g., the educational intervention) helps to isolate the effect of that causal agent on the outcome of interest. 4 As we discuss in Chapter 4 , it is sometimes difficult to retain the sharpness of a comparison in education due to proximity (e.g., a design that features students in one classroom assigned to different interventions is subject to “spillover” effects) or human volition (e.g., teacher, parent, or student decisions to switch to another condition threaten the integrity of the randomly formed groups). Yet, from a scientific perspective, randomized trials (we also use the term “experiment” to refer to causal studies that feature random assignment) are the ideal for establishing whether one or more factors caused change in an outcome because of their strong ability to enable fair comparisons (Campbell and Stanley, 1963; Boruch, 1997; Cook and Payne, in press). Random allocation of students, classrooms, schools—whatever the unit of comparison may be—to different treatment groups assures that these comparison groups are, roughly speaking, equivalent at the time an intervention is introduced (that is, they do not differ systematically on account of hidden
influences) and chance differences between the groups can be taken into account statistically. As a result, the independent effect of the intervention on the outcome of interest can be isolated. In addition, these studies enable legitimate statistical statements of confidence in the results.
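The equivalence that random assignment buys can be illustrated with a small simulation; this is a hypothetical sketch, and the numbers, sample size, and variable names are invented rather than drawn from any study cited here. Because assignment never consults students' backgrounds, the groups it produces are balanced on those backgrounds up to chance variation:

```python
import random
import statistics

random.seed(42)

# Hypothetical illustration: 1,000 students, each with a prior
# achievement score that the assignment procedure never looks at.
prior = [random.gauss(50, 10) for _ in range(1000)]

# Random assignment: shuffle the roster, then split it in half.
random.shuffle(prior)
small_classes, regular_classes = prior[:500], prior[500:]

# Because assignment ignored prior achievement, the two groups are
# roughly equivalent on it at baseline; any leftover gap is chance
# variation that standard statistical tests can account for.
gap = statistics.mean(small_classes) - statistics.mean(regular_classes)
print(f"baseline gap in prior achievement: {gap:.2f} points")
```

The same logic holds for any background characteristic, measured or not, which is why randomization protects against hidden influences in a way that statistical adjustment alone cannot.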

The Tennessee STAR experiment (see Chapter 3 ) on class-size reduction is a good example of the use of randomization to assess cause in an education study; in particular, this tool was used to gauge the effectiveness of an intervention. Some policy makers and scientists were unwilling to accept earlier, largely nonexperimental studies on class-size reduction as a basis for major policy decisions in the state. Those studies could not guarantee a fair comparison of children in small versus large classes because the comparisons relied on statistical adjustment rather than on actual construction of statistically equivalent groups. In Tennessee, statistical equivalence was achieved by randomly assigning eligible children and teachers to classrooms of different size. If the trial was properly carried out, 5 this randomization would lead to an unbiased estimate of the relative effect of class-size reduction and a statistical statement of confidence in the results.

Randomized trials are used frequently in the medical sciences and certain areas of the behavioral and social sciences, including prevention studies of mental health disorders (e.g., Beardslee, Wright, Salt, and Drezner, 1997), behavioral approaches to smoking cessation (e.g., Pieterse, Seydel, DeVries, Mudde, and Kok, 2001), and drug abuse prevention (e.g., Cook, Lawrence, Morse, and Roehl, 1984). It would not be ethical to assign individuals randomly to smoke and drink, and thus much of the evidence regarding the harmful effects of nicotine and alcohol comes from descriptive and correlational studies. However, randomized trials that show reductions in health detriments and improved social and behavioral functioning strengthen the causal links that have been established between drug use and adverse health and behavioral outcomes (Moses, 1995; Mosteller, Gilbert, and McPeek, 1980). In medical research, the relative effectiveness of the Salk vaccine (see Lambert and Markel, 2000) and streptomycin (Medical Research Council, 1948) was demonstrated through such trials. We have also learned about which drugs and surgical treatments are useless by depending on randomized controlled experiments (e.g., Schulte et al.,
2001; Gorman et al., 2001; Paradise et al., 1999). Randomized controlled trials are also used in industrial, market, and agricultural research.

Such trials are not frequently conducted in education research (Boruch, De Moya, and Snyder, in press). Nonetheless, it is not difficult to identify good examples in a variety of education areas that demonstrate their feasibility (see Boruch, 1997; Orr, 1999; and Cook and Payne, in press). For example, among the education programs whose effectiveness has been evaluated in randomized trials are the Sesame Street television series (Bogatz and Ball, 1972), peer-assisted learning and tutoring for young children with reading problems (Fuchs, Fuchs, and Kazdan, 1999), and Upward Bound (Myers and Schirm, 1999). And many of these trials have been successfully implemented on a large scale, randomizing entire classrooms or schools to intervention conditions. For numerous examples of trials in which schools, workplaces, and other entities are the units of random allocation and analysis, see Murray (1998), Donner and Klar (2000), Boruch and Foley (2000), and the Campbell Collaboration register of trials at http://campbell.gse.upenn.edu .

Causal Relationships When Randomization Is Not Feasible

In this section we discuss the conditions under which randomization is neither feasible nor desirable, highlight alternative methods for addressing causal questions, and provide an illustrative example. Many nonexperimental methods and analytic approaches are commonly classified under the blanket rubric “quasi-experiment” because they attempt to approximate the underlying logic of the experiment without random assignment (Campbell and Stanley, 1963; Caporaso and Roos, 1973). These designs were developed because social science researchers recognized that in some social contexts (e.g., schools), researchers do not have the control afforded in laboratory settings and thus cannot always randomly assign units (e.g., classrooms).

Quasi-experiments (alternatively called observational studies), 6 for example, sometimes compare groups of interest that exist naturally (e.g.,
existing classes varying in size) rather than assigning them randomly to different conditions (e.g., assigning students to small, medium, or large classes). These studies must attempt to ensure fair comparisons through means other than randomization, such as by using statistical techniques to adjust for background variables that may account for differences in the outcome of interest. For example, researchers might come across schools that vary in the size of their classes and compare the achievement of students in large and small classes, adjusting for other differences among schools and children. If the class-size conjecture holds after this adjustment is made, the researchers would expect students in smaller classes to have higher achievement scores than students in larger classes. If indeed this difference is observed, the causal effect is more plausible.

The plausibility of the researchers’ causal interpretation, however, depends on some strong assumptions. They must assume that their attempts to equate schools and children were, indeed, successful. Yet, there is always the possibility that some unmeasured, prior existing difference among schools and children caused the effect, not the reduced class size. Or, there is the possibility that teachers with reduced classes were actively involved in school reform and that their increased effort and motivation (which might wane over time) caused the effect, not the smaller classes themselves. In short, these designs are less effective at eliminating competing plausible hypotheses with the same authority as a true experiment.

The major weakness of nonrandomized designs is selectivity bias—the counter-interpretation that the treatment did not cause the difference in outcomes but, rather, unmeasured prior existing differences (differential selectivity) between the groups did. 7 For example, a comparison of early literacy skills among low-income children who participated in a local preschool program and those who did not may be confounded by selectivity bias. That is, the parents of the children who were enrolled in preschool may be more motivated than other parents to provide reading experiences to their children at home, thus making it difficult to disentangle the several potential causes (e.g., preschool program or home reading experiences) for early reading success.

It is critical in such studies, then, to be aware of potential sources of bias and to measure them so their influence can be accounted for in relation to the outcome of interest. 8 It is when these biases are not known that quasi-experiments may yield misleading results. Thus, the scientific principle of making assumptions explicit and carefully attending to ruling out competing hypotheses about what caused a difference takes on heightened importance.
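The preschool example above can be made concrete with a simulation; it is purely illustrative, and the probabilities, effect sizes, and variable names are invented. Here parent motivation drives both enrollment and home reading, so a naive comparison of enrolled versus non-enrolled children overstates the preschool effect, while measuring the confounder and comparing children within motivation strata recovers an estimate close to the assumed true effect:

```python
import random

random.seed(0)
TRUE_EFFECT = 5.0  # assumed causal effect of preschool, in score points

enrolled_scores, other_scores = [], []
strata = {True: {"pre": [], "no": []}, False: {"pre": [], "no": []}}

for _ in range(20000):
    motivated = random.random() < 0.5
    # Motivated parents are far more likely to enroll their child ...
    enrolled = random.random() < (0.8 if motivated else 0.2)
    # ... and also boost literacy directly through home reading.
    score = (40 + (10 if motivated else 0)
             + (TRUE_EFFECT if enrolled else 0) + random.gauss(0, 5))
    (enrolled_scores if enrolled else other_scores).append(score)
    strata[motivated]["pre" if enrolled else "no"].append(score)

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison: confounded by parent motivation.
naive_gap = mean(enrolled_scores) - mean(other_scores)
# Adjustment: compare like with like, then average across strata.
adjusted_gap = 0.5 * sum(mean(strata[m]["pre"]) - mean(strata[m]["no"])
                         for m in (True, False))
print(f"naive gap: {naive_gap:.1f}  adjusted gap: {adjusted_gap:.1f}")
```

The adjustment works here only because the confounder was measured; if motivation had gone unrecorded, the naive and adjusted estimates would coincide and both would mislead, which is precisely why unknown biases make quasi-experiments hazardous.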

In some settings, well-controlled quasi-experiments may have greater “external validity”—generalizability to other people, times, and settings— than experiments with completely random assignment (Cronbach et al., 1980; Weiss, 1998a). It may be useful to take advantage of the experience and investment of a school with a particular program and try to design a quasi-experiment that compares the school that has a good implementation of the program to a similar school without the program (or with a different program). In such cases, there is less risk of poor implementation, more investment of the implementers in the program, and potentially greater impact. The findings may be more generalizable than in a randomized experiment because the latter may be externally mandated (i.e., by the researcher) and thus may not be feasible to implement in the “real-life” practice of education settings. The results may also have stronger external validity because if a school or district uses a single program, the possible contamination of different programs because teachers or administrators talk and interact will be reduced. Random assignment within a school at the level of the classroom or child often carries the risk of dilution or blending the programs. If assignment is truly random, such threats to internal validity will not bias the comparison of programs—just the estimation of the strength of the effects.

In the section above ( What Is Happening? ), we note that some kinds of correlational work make important contributions to understanding broad patterns of relationships among educational phenomena; here, we highlight a correlational design that allows causal inferences about the relationship between two or more variables. When correlational methods use what are called “model-fitting” techniques based on a theoretically generated system of variables, they permit stronger, albeit still tentative, causal inferences.

In Chapter 3 , we offer an example that illustrates the use of model-fitting techniques from the geophysical sciences that tested alternative hypotheses about the causes of glaciation. In Box 5-3 , we provide an example of causal modeling that shows the value of such techniques in education. This work examined the potential causal connection between teacher compensation and student dropout rates. Exploring this relationship is quite relevant to education policy, but it cannot be studied through a randomized field trial: teacher salaries, of course, cannot be randomly assigned, nor can students be randomly assigned to those teachers. Because important questions like these often cannot be examined experimentally, statisticians have developed sophisticated model-fitting techniques to statistically rule out potential alternative explanations and deal with the problem of selection bias.

The key difference between simple correlational work and model-fitting is that the latter enhances causal attribution. In the study examining teacher compensation and dropout rates, for example, researchers introduced a conceptual model for the relationship between student outcomes and teacher salary, set forth an explicit hypothesis to test about the nature of that relationship, and assessed competing models of interpretation. By empirically rejecting competing theoretical models, confidence is increased in the explanatory power of the remaining model(s) (although other alternative models may also exist that provide a comparable fit to the data).

The study highlighted in Box 5-3 tested different models in this way. Loeb and Page (2000) took a fresh look at a question that had a good bit of history, addressing what appeared to be converging evidence that there was no causal relationship between teacher salaries and student outcomes. They reasoned that one possible explanation for these results was that the usual “production-function” model for the effects of salary on student outcomes was inadequately specified. Specifically, they hypothesized that nonpecuniary job characteristics and alternative wage opportunities that previous models had not accounted for may be relevant in understanding the relationship between teacher compensation and student outcomes. After incorporating these opportunity costs in their model and finding a sophisticated way to control the fact that wealthier parents are likely to send their
children to schools that pay teachers more, Loeb and Page found that raising teacher wages by 10 percent reduced high school dropout rates by 3 to 4 percent.
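A stripped-down sketch can convey the model-fitting logic at work in studies of this kind. This is not Loeb and Page's actual specification or data; the variables, coefficients, and sample below are fabricated for illustration. When salaries track local wage opportunities that independently affect dropout, a model that omits those opportunities misestimates the salary effect, while the better-specified model recovers it and fits the data better:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Fabricated district-level data: teacher salaries partly track local
# alternative wages, and dropout depends on both.
alt_wage = rng.normal(50, 8, n)                  # outside wage opportunities
salary = 0.6 * alt_wage + rng.normal(20, 5, n)   # salaries follow local wages
dropout = 30 - 0.3 * salary + 0.25 * alt_wage + rng.normal(0, 2, n)

def fit(X, y):
    """Ordinary least squares with an intercept; returns (coefficients, R^2)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return beta, 1 - resid.var() / y.var()

# Model A: salary only (misspecified -- omits alternative wage opportunities).
beta_a, r2_a = fit(salary.reshape(-1, 1), dropout)
# Model B: salary plus alternative wage opportunities.
beta_b, r2_b = fit(np.column_stack([salary, alt_wage]), dropout)

print(f"Model A salary coefficient: {beta_a[1]:.2f} (R^2 = {r2_a:.2f})")
print(f"Model B salary coefficient: {beta_b[1]:.2f} (R^2 = {r2_b:.2f})")
```

In the simulation, omitting the alternative-wage variable pulls the salary coefficient toward zero, understating how much salaries matter; comparing the competing specifications against the data is what licenses the stronger, though still tentative, causal reading.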

WHY OR HOW IS IT HAPPENING?

In many situations, finding that a causal agent ( x ) leads to the outcome ( y ) is not sufficient. Important questions remain about how x causes y . Questions about how things work demand attention to the processes and mechanisms by which the causes produce their effects. However, scientific research can also legitimately proceed in the opposite direction: that is, the search for mechanism can come before an effect has been established. For example, if the process by which an intervention influences student outcomes is established, researchers can often predict its effectiveness with known probability. In either case, the processes and mechanisms should be linked to theories so as to form an explanation for the phenomena of interest.

The search for causal mechanisms, especially once a causal effect has garnered strong empirical support, can use all of the designs we have discussed. In Chapter 2 , we trace a sequence of investigations in molecular biology that investigated how genes are turned on and off. Very different techniques, but ones that share the same basic intellectual approach to causal analysis reflected in these genetic studies, have yielded understandings in education. Consider, for example, the Tennessee class-size experiment (see discussion in Chapter 3 ). In addition to examining whether reduced class size produced achievement benefits, especially for minority students, a research team and others in the field asked (see, e.g., Grissmer, 1999) what might explain the Tennessee and other class-size effects. That is, what was the causal mechanism through which reduced class size affected achievement? To this end, researchers (Bohrnstedt and Stecher, 1999) used classroom observations and interviews to compare teaching in different class sizes. They conducted ethnographic studies in search of mechanism. They correlated measures of teaching behavior with student achievement scores. These questions are important because they enhance understanding of the foundational processes at work when class size is reduced and thus
improve the capacity to implement these reforms effectively in different times, places, and contexts.

Exploring Mechanism When Theory Is Fairly Well Established

A well-known study of Catholic schools provides another example of a rigorous attempt to understand mechanism (see Box 5-4 ). Previous and highly controversial work on Catholic schools (e.g., Coleman, Hoffer, and
Kilgore, 1982) had examined the relative benefits to students of Catholic and public schools. Drawing on these studies, as well as a fairly substantial literature related to effective schools, Bryk and his colleagues (Bryk, Lee, and Holland, 1993) focused on the mechanism by which Catholic schools seemed to achieve success relative to public schools. A series of models were developed (sector effects only, compositional effects, and school effects) and tested to explain the mechanism by which Catholic schools successfully achieve an equitable social distribution of academic achievement. The
researchers’ analyses suggested that aspects of school life that enhance a sense of community within Catholic schools most effectively explained the differences in student outcomes between Catholic and public schools.

Exploring Mechanism When Theory Is Weak

When the theoretical basis for addressing questions related to mechanism is weak, contested, or poorly understood, other types of methods may be more appropriate. These queries often have strong descriptive components and derive their strength from in-depth study that can illuminate unforeseen relationships and generate new insights. We provide two examples in this section of such approaches: the first is the ethnographic study of college women (see Box 5-2 ) and the second is a “design study” that resulted in a theoretical model for how young children learn the mathematical concepts of ratio and proportion.

After generating a rich description of women’s lives in their universities based on extensive analysis of ethnographic and survey data, the researchers turned to the question of why women who completed nontraditional majors typically did not pursue those fields as careers (see Box 5-2 ). Was it because women were not well prepared before college? Were they discriminated against? Did they not want to compete with men? To address these questions, the researchers developed several theoretical models depicting commitment to schoolwork to describe how the women participated in college life. Extrapolating from the models, the researchers predicted what each woman would do after completing college, and in all cases, the models’ predictions were confirmed.

A second example highlights another analytic approach for examining mechanism that begins with theoretical ideas that are tested through the design, implementation, and systematic study of educational tools (curriculum, teaching methods, computer applets) that embody the initial conjectured mechanism. The studies go by different names; perhaps the two most popular names are “design studies” (Brown, 1992) and “teaching experiments” (Lesh and Kelly, 2000; Schoenfeld, in press).

Box 5-5 illustrates a design study whose aim was to develop and elaborate the theoretical mechanism by which ratio reasoning develops in young children and to build and modify appropriate tasks and assessments that
incorporate the models of learning developed through observation and interaction in the classroom. The work was linked to substantial existing literature in the field about the theoretical nature of ratio and proportion as mathematical ideas and teaching approaches to convey them (e.g., Behr, Lesh, Post, and Silver, 1983; Harel and Confrey, 1994; Mack, 1990, 1995). The initial model was tested and refined as careful distinctions and extensions were noted, explained, and weighed as alternative explanations while the work progressed over a 3-year period of intensive study in a single classroom. The design experiment methodology was selected because, unlike laboratory or other highly controlled approaches, it involved research within the complex interactions of teachers and students and allowed the everyday demands and opportunities of schooling to affect the investigation.

Like many such design studies, there were two main products of this work. First, through a theory-driven process of designing—and a data-driven process of refining—instructional strategies for teaching ratio and proportion, researchers produced an elaborated explanatory model of how young children come to understand these core mathematical concepts. Second, the instructional strategies developed in the course of the work itself hold promise because they were crafted based on a number of relevant research literatures. Through comparisons of achievement outcomes between children who received the new instruction and students in other classrooms and schools, the researchers provided preliminary evidence that the intervention designed to embody this theoretical mechanism is effective. The intervention would require further development, testing, and comparisons of the kind we describe in the previous section before it could be reasonably scaled up for widespread curriculum use.

Steffe and Thompson (2000) are careful to point out that design studies and teaching experiments must be conducted scientifically. In their words:

We use experiment in “teaching experiment” in a scientific sense…. What is important is that the teaching experiments are done to test hypotheses as well as to generate them. One does not embark on the intensive work of a teaching experiment without having major research hypotheses to test (p. 277).

This genre of method and approach is a relative newcomer to the field of education research and is not nearly as accepted as many of the other
methods described in this chapter. We highlight it here as an illustrative example of the creative development of new methods to embed the complex instructional settings that typify U.S. education in the research process. We echo Steffe and Thompson’s (2000) call to ensure a careful application of the scientific principles we describe in this report in the conduct of such research. 9

CONCLUDING COMMENTS

This chapter, building on the scientific principles outlined in Chapter 3 and the features of education, presented in Chapter 4 , that influence their application, illustrates that a wide range of methods can legitimately be employed in scientific education research and that some methods are better than others for particular purposes. As John Dewey put it:

We know that some methods of inquiry are better than others in just the same way in which we know that some methods of surgery, farming, road-making, navigating, or what-not are better than others. It does not follow in any of these cases that the “better” methods are ideally perfect… We ascertain how and why certain means and agencies have provided warrantably assertible conclusions, while others have not and cannot do so (Dewey, 1938, p. 104, italics in original).

The chapter also makes clear that knowledge is generated through a sequence of interrelated descriptive and causal studies, through a constant process of refining theory and knowledge. These lines of inquiry typically require a range of methods and approaches to subject theories and conjectures to scrutiny from several perspectives.

We conclude this chapter with several observations and suggestions about the current state of education research that we believe warrant attention if scientific understanding is to advance beyond its current state. We do not provide a comprehensive agenda for the nation. Rather, we
wish to offer constructive guidance by pointing to issues we have identified throughout our deliberations as key to future improvements.

First, there are a number of areas in education practice and policy in which basic theoretical understanding is weak. For example, very little is known about how young children learn ratio and proportion—mathematical concepts that play a key role in developing mathematical proficiency. The study we highlight in this chapter generated an initial theoretical model that must undergo sustained development and testing. In such areas, we believe priority should be given to descriptive and theory-building studies of the sort we highlight in this chapter. Scientific description is an essential part of any scientific endeavor, and education is no different. These studies are often extremely valuable in themselves, and they also provide the critical theoretical grounding needed to conduct causal studies. We believe that attention to the development and systematic testing of theories and conjectures across multiple studies and using multiple methods—a key scientific principle that threads throughout all of the questions and designs we have discussed—is currently undervalued in education relative to other scientific fields. The physical sciences have made progress by continuously developing and testing theories; something of that nature has not been done systematically in education. And while it is not clear that grand, unifying theories exist in the social world, conceptual understanding forms the foundation for scientific understanding and progresses—as we showed in Chapter 2 —through the systematic assessment and refinement of theory.

Second, while large-scale education policies and programs are constantly undertaken, we reiterate our belief that they are typically launched without an adequate evidentiary base to inform their development, implementation, or refinement over time (Campbell, 1969; President’s Committee of Advisors on Science and Technology, 1997). The “demand” for education research in general, and education program evaluation in particular, is very difficult to quantify, but we believe it tends to be low among educators, policy makers, and the public. There are encouraging signs that public attitudes toward the use of objective evidence to guide decisions are improving (e.g., statutory requirements to set aside a percentage of annual appropriations to conduct evaluations of federal programs, the Government Performance and Results Act, and common rhetoric about “evidence-based” and “research-based” policy and practice). However, we believe stronger scientific knowledge is needed about educational interventions to promote its use in decision making.

In order to generate a rich store of scientific evidence that could enhance effective decision making about education programs, it will be necessary to strengthen a few related strands of work. First, systematic study is needed about the ways that programs are implemented in diverse educational settings. We view implementation research—the genre of research that examines the ways that the structural elements of school settings interact with efforts to improve instruction—as a critical, underfunded, and underappreciated form of education research. We also believe that understanding how to “scale up” (Elmore, 1996) educational interventions that have promise in a small number of cases will depend critically on a deep understanding of how policies and practices are adopted and sustained (Rogers, 1995) in the complex U.S. education system.

In all of this work, more knowledge is needed about causal relationships. In estimating the effects of programs, we urge the expanded use of random assignment. Randomized experiments are not perfect. Indeed, the merits of their use in education have been seriously questioned (Cronbach et al., 1980; Cronbach, 1982; Guba and Lincoln, 1981). For instance, they typically cannot test complex causal hypotheses, they may lack generalizability to other settings, and they can be expensive. However, we believe that these and other issues do not generate a compelling rationale against their use in education research and that issues related to ethical concerns, political obstacles, and other potential barriers often can be resolved. We believe that the credible objections to their use that have been raised have clarified the purposes, strengths, limitations, and uses of randomized experiments as well as other research methods in education. Establishing cause is often exceedingly important—for example, in the large-scale deployment of interventions—and the ambiguity of correlational studies or quasi-experiments can be undesirable for practical purposes.

In keeping with our arguments throughout this report, we also urge that randomized field trials be supplemented with other methods, including in-depth qualitative approaches that can illuminate important nuances, identify potential counterhypotheses, and provide additional sources of evidence for supporting causal claims in complex educational settings.

In sum, theory building and rigorous studies of implementations and interventions are two broad-based areas that we believe deserve attention. Within the framework of a comprehensive research agenda, targeting these aspects of research will build on the successes of the enterprise we highlight throughout this report.

Researchers, historians, and philosophers of science have debated the nature of scientific research in education for more than 100 years. Recent enthusiasm for "evidence-based" policy and practice in education—now codified in the federal law that authorizes the bulk of elementary and secondary education programs—has brought a new sense of urgency to understanding the ways in which the basic tenets of science manifest in the study of teaching, learning, and schooling.

Scientific Research in Education describes the similarities and differences between scientific inquiry in education and scientific inquiry in other fields and disciplines and provides a number of examples to illustrate these ideas. Its main argument is that all scientific endeavors share a common set of principles, and that each field—including education research—develops a specialization that accounts for the particulars of what is being studied. The book also provides suggestions for how the federal government can best support high-quality scientific research in education.


Educational Research Design

How do I choose a research design?

Research methodology should be determined before conducting any research. There are three different types of research methodology you can select based on the nature of the study: qualitative research, quantitative research, and mixed methods research.

  • Qualitative research focuses on exploring and understanding how individuals or groups experience and perceive a social or human problem. The process involves collecting and analyzing non-numerical data that offer rich meaning for interpretation.
  • Quantitative research is an investigation of phenomena by collecting and analyzing numerical data to test strategies, theories, techniques, or assumptions.
  • Mixed methods research collects and analyzes data using both qualitative and quantitative approaches, integrating the two within a single study. This allows the researcher to maximize the strengths of each approach and to explore diverse perspectives for a comprehensive understanding of phenomena.

Educational Research Designs


  • Descriptive Research: Naturalistic Observation
  • Intro to Systematic Reviews and Meta-Analyses
  • Genie Wiley--An Overview
  • Observer Bias: Clever Horses and Dull Rats
  • Naturalistic Observation: Definition, guide, and examples
  • Correlation Versus Causation
  • Linear Regression and Correlation
  • The Danger of Mixing Up Causality and Correlation
  • The Correlation Coefficient-Explained in 3 steps
  • Survey and Correlational Research Designs
  • Qualitative Research Design
  • Introduction to the Process of Qualitative Research
  • Participant Observation as a Research Method
  • What is qualitative research?
  • Introduction to qualitative nursing research
  • Qualitative research ethics
  • Selective recording
  • Research Methodology Overview of Qualitative Research
  • Narrative Inquiry
  • Critical Theory
  • Critical Race Theory
  • Narrative Inquiry: Educational examples
  • Research Methods for Studying Narrative Identity: A Primer
  • Research findings of the “1958 British Birth Cohort”
  • Quasi-Experimental Designs
  • Quasi-Experimental Research
  • Pre-test Post-test design
  • Quasi-Experimental and Single-Case Experimental Designs
  • One-Way Between Subjects ANOVA
  • Independent Samples t-test in SPSS
  • Two-Way Factorial ANOVA
  • One-Way Repeated Measures ANOVA
  • A Closer Look at Designs
  • Developing Mixed Methods Research
  • Mixed Methods Research: The Basics
  • Mixed Methods Research: Detailed
  • Realist Review: Mixing Method
  • Mixed Methods in Education Research
  • Choosing a Mixed Methods Design
  • Mixed Methods Research: Definition, guide, and examples

Research Methods – Types, Examples and Guide

Research Methods

Definition:

Research Methods refer to the techniques, procedures, and processes used by researchers to collect, analyze, and interpret data in order to answer research questions or test hypotheses. The methods used in research can vary depending on the research questions, the type of data being collected, and the research design.

Types of Research Methods

Types of Research Methods are as follows:

Qualitative Research Method

Qualitative research methods are used to collect and analyze non-numerical data. This type of research is useful when the objective is to explore the meaning of phenomena, understand the experiences of individuals, or gain insights into complex social processes. Qualitative research methods include interviews, focus groups, ethnography, and content analysis.
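As a concrete (and deliberately minimal) sketch of one technique named above, content analysis, the snippet below tallies content words across interview transcripts as a first pass before systematic coding. The transcripts and stop-word list are invented for illustration; real content analysis applies a coding scheme, not raw counts.

```python
# First-pass content analysis: tally content words across transcripts.
# Transcripts and stop words are invented for illustration only.
import re
from collections import Counter

transcripts = [
    "I felt supported by my family during treatment",
    "The treatment was hard but my nurses were supportive",
]

STOP_WORDS = {"i", "the", "was", "but", "my", "by", "were", "during"}

def word_frequencies(texts):
    """Count lowercase content words, skipping stop words."""
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in STOP_WORDS:
                counts[word] += 1
    return counts

print(word_frequencies(transcripts).most_common(3))
```

Frequency counts like these only suggest candidate themes ("treatment" recurs); the qualitative work is in interpreting them in context.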

Quantitative Research Method

Quantitative research methods are used to collect and analyze numerical data. This type of research is useful when the objective is to test a hypothesis, determine cause-and-effect relationships, and measure the prevalence of certain phenomena. Quantitative research methods include surveys, experiments, and secondary data analysis.
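To make the hypothesis-testing workflow concrete, here is a hedged sketch (not any particular study's method) that compares two invented groups of scores with Welch's t statistic, computed from first principles with the standard library:

```python
# Two-group comparison via Welch's t statistic; scores are invented.
import math
import statistics

control = [72, 68, 75, 70, 69, 74]    # hypothetical comparison-group scores
treatment = [78, 81, 75, 79, 77, 80]  # hypothetical intervention-group scores

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances allowed)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
    standard_error = math.sqrt(var_a / len(a) + var_b / len(b))
    return (mean_b - mean_a) / standard_error

t = welch_t(control, treatment)
print(f"t = {t:.2f}")  # compare against a t distribution to obtain a p-value
```

A large positive t here indicates the treatment mean exceeds the control mean by many standard errors; the final inferential step (degrees of freedom, p-value) is omitted for brevity.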

Mixed Method Research

Mixed Method Research refers to the combination of both qualitative and quantitative research methods in a single study. This approach aims to overcome the limitations of each individual method and to provide a more comprehensive understanding of the research topic. This approach allows researchers to gather both quantitative data, which is often used to test hypotheses and make generalizations about a population, and qualitative data, which provides a more in-depth understanding of the experiences and perspectives of individuals.

Key Differences Between Research Methods

The following table shows the key differences between quantitative, qualitative, and mixed research methods.

Examples of Research Methods

Examples of Research Methods are as follows:

Qualitative Research Example:

A researcher wants to study the experience of cancer patients during their treatment. They conduct in-depth interviews with patients to gather data on their emotional state, coping mechanisms, and support systems.

Quantitative Research Example:

A company wants to determine the effectiveness of a new advertisement campaign. They survey a large group of people, asking them to rate their awareness of the product and their likelihood of purchasing it.

Mixed Research Example:

A university wants to evaluate the effectiveness of a new teaching method in improving student performance. They collect both quantitative data (such as test scores) and qualitative data (such as feedback from students and teachers) to get a complete picture of the impact of the new method.
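A study like the one above typically begins by assigning participants to conditions at random. Here is a minimal sketch of that randomization step; the participant IDs, arm names, and seed are hypothetical:

```python
# Balanced random assignment of participants to study arms.
# IDs, arm names, and seed are hypothetical illustrations.
import random

def randomize(participants, arms=("new_method", "traditional"), seed=None):
    """Shuffle the roster, then alternate through the arms so group sizes stay balanced."""
    rng = random.Random(seed)  # a fixed seed makes the assignment reproducible
    roster = list(participants)
    rng.shuffle(roster)
    return {person: arms[i % len(arms)] for i, person in enumerate(roster)}

groups = randomize([f"S{n:02d}" for n in range(1, 9)], seed=42)
print(groups)
```

Alternating through the arms after shuffling guarantees equal group sizes while keeping which individual lands in which arm random.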

Applications of Research Methods

Research methods are used in various fields to investigate, analyze, and answer research questions. Here are some examples of how research methods are applied in different fields:

  • Psychology : Research methods are widely used in psychology to study human behavior, emotions, and mental processes. For example, researchers may use experiments, surveys, and observational studies to understand how people behave in different situations, how they respond to different stimuli, and how their brains process information.
  • Sociology : Sociologists use research methods to study social phenomena, such as social inequality, social change, and social relationships. Researchers may use surveys, interviews, and observational studies to collect data on social attitudes, beliefs, and behaviors.
  • Medicine : Research methods are essential in medical research to study diseases, test new treatments, and evaluate their effectiveness. Researchers may use clinical trials, case studies, and laboratory experiments to collect data on the efficacy and safety of different medical treatments.
  • Education : Research methods are used in education to understand how students learn, how teachers teach, and how educational policies affect student outcomes. Researchers may use surveys, experiments, and observational studies to collect data on student performance, teacher effectiveness, and educational programs.
  • Business : Research methods are used in business to understand consumer behavior, market trends, and business strategies. Researchers may use surveys, focus groups, and observational studies to collect data on consumer preferences, market trends, and industry competition.
  • Environmental science : Research methods are used in environmental science to study the natural world and its ecosystems. Researchers may use field studies, laboratory experiments, and observational studies to collect data on environmental factors, such as air and water quality, and the impact of human activities on the environment.
  • Political science : Research methods are used in political science to study political systems, institutions, and behavior. Researchers may use surveys, experiments, and observational studies to collect data on political attitudes, voting behavior, and the impact of policies on society.

Purpose of Research Methods

Research methods serve several purposes, including:

  • Identify research problems: Research methods are used to identify research problems or questions that need to be addressed through empirical investigation.
  • Develop hypotheses: Research methods help researchers develop hypotheses, which are tentative explanations for the observed phenomenon or relationship.
  • Collect data: Research methods enable researchers to collect data in a systematic and objective way, which is necessary to test hypotheses and draw meaningful conclusions.
  • Analyze data: Research methods provide tools and techniques for analyzing data, such as statistical analysis, content analysis, and discourse analysis.
  • Test hypotheses: Research methods allow researchers to test hypotheses by examining the relationships between variables in a systematic and controlled manner.
  • Draw conclusions : Research methods facilitate the drawing of conclusions based on empirical evidence and help researchers make generalizations about a population based on their sample data.
  • Enhance understanding: Research methods contribute to the development of knowledge and enhance our understanding of various phenomena and relationships, which can inform policy, practice, and theory.

When to Use Research Methods

Research methods are used when you need to gather information or data to answer a question or to gain insights into a particular phenomenon.

Here are some situations when research methods may be appropriate:

  • To investigate a problem : Research methods can be used to investigate a problem or a research question in a particular field. This can help in identifying the root cause of the problem and developing solutions.
  • To gather data: Research methods can be used to collect data on a particular subject. This can be done through surveys, interviews, observations, experiments, and more.
  • To evaluate programs : Research methods can be used to evaluate the effectiveness of a program, intervention, or policy. This can help in determining whether the program is meeting its goals and objectives.
  • To explore new areas : Research methods can be used to explore new areas of inquiry or to test new hypotheses. This can help in advancing knowledge in a particular field.
  • To make informed decisions : Research methods can be used to gather information and data to support informed decision-making. This can be useful in various fields such as healthcare, business, and education.

Advantages of Research Methods

Research methods provide several advantages, including:

  • Objectivity : Research methods enable researchers to gather data in a systematic and objective manner, minimizing personal biases and subjectivity. This leads to more reliable and valid results.
  • Replicability : A key advantage of research methods is that they allow for replication of studies by other researchers. This helps to confirm the validity of the findings and ensures that the results are not specific to the particular research team.
  • Generalizability : Research methods enable researchers to gather data from a representative sample of the population, allowing for generalizability of the findings to a larger population. This increases the external validity of the research.
  • Precision : Research methods enable researchers to gather data using standardized procedures, ensuring that the data is accurate and precise. This allows researchers to make accurate predictions and draw meaningful conclusions.
  • Efficiency : Research methods enable researchers to gather data efficiently, saving time and resources. This is especially important when studying large populations or complex phenomena.
  • Innovation : Research methods enable researchers to develop new techniques and tools for data collection and analysis, leading to innovation and advancement in the field.

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer


Prep With Harshita

Types of Educational Research

The three main types of educational research according to purpose are fundamental, applied, and action research.

Fundamental research:

Fundamental research, also known as basic research, is focused on generating new knowledge and understanding of fundamental principles and concepts in the field of education. This type of research is primarily concerned with advancing theoretical knowledge and developing new concepts, theories, and models that can be used to inform educational practices. It is often conducted in universities and research institutions, and it involves the use of various research methods such as surveys, experiments, and case studies.

Fundamental research is important for laying the groundwork for applied research and for advancing the knowledge and understanding of key educational concepts and principles. It helps researchers and practitioners to better understand the underlying factors that contribute to successful educational outcomes and to develop new approaches and strategies for addressing educational challenges.

Applied research:

Applied research, also known as practical research, is focused on solving real-world problems and addressing specific issues in the field of education. This type of research is designed to produce practical and useful knowledge that can be applied in educational settings. It is often conducted in educational institutions, government agencies, and non-profit organizations, and it involves the use of various research methods such as surveys, experiments, and case studies.

Applied research is important for developing evidence-based practices and policies that can improve educational outcomes. It helps to identify effective strategies and interventions for addressing educational challenges and improving student learning. Examples of applied research include studies on the effectiveness of teaching methods, interventions for improving student motivation, and assessments of educational programs and policies.

Action research:

Action research is a type of research that is conducted by educators in their own classrooms or educational settings. The aim of action research is to improve teaching and learning outcomes by identifying and implementing effective strategies and practices. This type of research involves a cyclical process of planning, action, observation, and reflection, with the goal of improving educational practices and outcomes.

Action research is important for empowering educators to take an active role in improving educational outcomes in their own settings. It helps to build capacity among educators for identifying and addressing educational challenges and for implementing evidence-based practices. Examples of action research include studies on the effectiveness of different teaching strategies, the impact of technology on student learning, and the effectiveness of different assessment methods.


Types of Research according to purpose



Georgia Gwinnett College Kaufman Library
  • Cohort study: a nonexperimental design that can be prospective or retrospective. In a prospective cohort study, participants are enrolled before the potential causal event has occurred. In a retrospective cohort study, the study begins after the dependent event occurs. See also "longitudinal study."
  • Cross-sectional design study: an experimental design in which multiple measures are collected over a period of time from two or more groups of different ages (birth cohorts), ethnicities, or other factors. These designs combine aspects of longitudinal design and cohort-sequential design.
  • Literature review: a narrative summary and evaluation of the findings or theories within a literature base. Also known as "narrative literature review."
  • Longitudinal study: a study that involves the observation of a variable or group of variables in the same cases or individuals using the same set of measurements (or attributes) over a period of time (i.e., at multiple times or occasions). A longitudinal study that evaluates a group of randomly chosen individuals is referred to as a panel study, whereas a longitudinal study that evaluates a group of individuals possessing some common characteristic (usually age) is referred to as a cohort study. This multiple observational structure may be combined with almost any other research design, with or without experimental manipulations, including randomized clinical trials or any other study type. Also known as "longitudinal research," "longitudinal design."
  • Prospective sampling (cohort): a sampling method in which cases are selected for inclusion in experiments or other research based on their exposure to a risk factor. Participants are then followed to see if a condition of interest develops.
  • Qualitative research study: approaches to research used to generate knowledge about human experience and/or action, including social processes. These research methods typically produce descriptive (non-numerical) data, such as observations of behavior or personal accounts of experiences. The goal of gathering qualitative data is to examine how individuals perceive the world from different vantage points. Also known as "qualitative design," "qualitative inquiry," "qualitative method," "qualitative study." Qualitative methods share four central characteristics: they involve the analysis of natural language and other forms of human expression rather than the translation of meaning into numbers; centralize an iterative process in which data are analyzed and meanings are generated in a circular and self-correcting process of checking and refining findings; seek to present findings in a manner that emphasizes the study's context and situation in time; and recursively combine inquiry with methods that require researchers' reflexivity (i.e., self-examination) about their influence upon the research process.
  • Qualitative meta-analysis study: a form of inquiry in which qualitative research findings about a process or experience are aggregated or integrated across research studies. Aims can involve synthesizing qualitative findings across primary studies, generating new theoretical or conceptual models, identifying gaps in research, or generating new questions.
  • Quantitative research study: approaches to research in which observed outcomes are numerically represented. These research methods rely on measuring variables using a numerical system, analyzing measurements using statistical models, and reporting relationships and associations among the studied variables. The goal of gathering quantitative data is to understand, describe, and predict the nature of a phenomenon, particularly through the development of models and theories. Also known as "quantitative design," "quantitative inquiry," "quantitative method," "quantitative study."
  • Quantitative meta-analysis: a technique for synthesizing the results of multiple studies of a phenomenon by combining the effect size estimates from each study into a single estimate of the combined effect size or into a distribution of effect sizes. Effect size estimates from individual studies are the inputs to the analyses. Although meta-analyses are ideally suited for summarizing a body of literature in terms of impact, limitations, and implications, they are limited by having no required minimum number of studies or participants. Information of potential interest may also be missing from the original research reports upon which the procedure must rely.
  • Randomized controlled (clinical) trial: an experimental design in which patients are randomly assigned to a group that will receive an experimental treatment, such as a new drug, or to one that will receive a comparison treatment, a standard-of-care treatment, or a placebo. The random assignment occurs after recruitment and assessment of eligibility but before the intervention. There may be multiple experimental and comparison groups, but each patient is assigned to one group only.
  • Retrospective cohort study (sampling): the study begins after the dependent event occurs; a technique in which participants or cases from the general population are selected for inclusion in experiments or other research based on their previous exposure to a risk factor or the completion of some particular process. Participants are then examined in the present to see if a particular condition or state exists, often in comparison to others who were not exposed to the risk or who did not complete the particular process.
  • Please consult the following sources for more information on these types of studies and terminology related to the studies.

    • APA Style JARS Supplemental Glossary This webpage provides supplemental information on the terms used in APA Style JARS. This glossary is meant to supplement Chapter 3 of the Publication Manual of the American Psychological Association, Seventh Edition. It is not an exhaustive list of all terms employed in quantitative, qualitative, or mixed methods research, nor does it include all possible definitions for each term; definitions in addition to or different from those reported in this glossary may be found in other sources.
    • APA Dictionary of Psychology More than 25,000 authoritative entries across 90 subfields of psychology.
    • Last Updated: Mar 8, 2024 3:12 PM
    • URL: https://libguides.ggc.edu/exercisescience

    • Indian J Anaesth
    • v.60(9); 2016 Sep

    Types of studies and research design

    Mukul Chandra Kapoor

    Department of Anesthesiology, Max Smart Super Specialty Hospital, New Delhi, India

    Medical research has evolved from individual experts' described opinions and techniques to scientifically designed, methodology-based studies. Evidence-based medicine (EBM) was established to re-evaluate medical facts and remove various myths in clinical practice. Research methodology is now protocol based, with predefined steps. Studies are classified based on the method of collection and evaluation of data. Clinical study methodology now needs to comply with strict ethical, moral and transparency standards, ensuring that no conflict of interest is involved. A medical research pyramid has been designed to grade the quality of evidence and help physicians determine the value of the research. Randomised controlled trials (RCTs) have become the gold standard for quality research. EBM now ranks systematic reviews and meta-analyses at a level higher than RCTs, to overcome deficiencies in randomised trials due to errors in methodology and analyses.

    INTRODUCTION

    Expert opinion, experience, and authoritarian judgement were the norm in clinical medical practice. At scientific meetings, one often heard senior professionals emphatically expressing ‘In my experience,…… what I have said is correct!’ In 1981, articles published by Sackett et al . introduced ‘critical appraisal’ as they felt a need to teach methods of understanding scientific literature and its application at the bedside.[ 1 ] To improve clinical outcomes, clinical expertise must be complemented by the best external evidence.[ 2 ] Conversely, without clinical expertise, good external evidence may be used inappropriately [ Figure 1 ]. Practice gets outdated, if not updated with current evidence, depriving the clientele of the best available therapy.

    Figure 1. Triad of evidence-based medicine

    EVIDENCE-BASED MEDICINE

    In 1971, in his book ‘Effectiveness and Efficiency’, Archibald Cochrane highlighted the lack of reliable evidence behind many accepted health-care interventions.[ 3 ] This triggered re-evaluation of many established ‘supposed’ scientific facts and awakened physicians to the need for evidence in medicine. Evidence-based medicine (EBM) thus evolved, which was defined as ‘the conscientious, explicit and judicious use of the current best evidence in making decisions about the care of individual patients.’[ 2 ]

    The goal of EBM was scientific endowment to achieve consistency, efficiency, effectiveness, quality, safety, reduction in dilemma and limitation of idiosyncrasies in clinical practice.[ 4 ] EBM required the physician to diligently assess the therapy, make clinical adjustments using the best available external evidence, ensure awareness of current research and discover clinical pathways to ensure best patient outcomes.[ 5 ]

    With widespread internet use, a phenomenally large number of publications, training and media resources is available, but determining the quality of this literature is difficult for a busy physician. Abstracts are available freely on the internet, but full-text articles require a subscription. To complicate issues, contradictory studies are published, making decision-making difficult.[ 6 ] Publication bias, especially against negative studies, makes matters worse.

    In 1993, the Cochrane Collaboration was founded by Ian Chalmers and others to create and disseminate up-to-date review of randomised controlled trials (RCTs) to help health-care professionals make informed decisions.[ 7 ] In 1995, the American College of Physicians and the British Medical Journal Publishing Group collaborated to publish the journal ‘Evidence-based medicine’, leading to the evolution of EBM in all spheres of medicine.

    MEDICAL RESEARCH

    Medical research needs to be conducted to increase knowledge about the human species, its social/natural environment and to combat disease/infirmity in humans. Research should be conducted in a manner conducive to and consistent with dignity and well-being of the participant; in a professional and transparent manner; and ensuring minimal risk.[ 8 ] Research thus must be subjected to careful evaluation at all stages, i.e., research design/experimentation; results and their implications; the objective of the research sought; anticipated benefits/dangers; potential uses/abuses of the experiment and its results; and on ensuring the safety of human life. Table 1 lists the principles any research should follow.[ 8 ]

    Table 1. General principles of medical research

    Types of study design

    Medical research is classified into primary and secondary research. Clinical/experimental studies are performed in primary research, whereas secondary research consolidates available studies as reviews, systematic reviews and meta-analyses. Three main areas in primary research are basic medical research, clinical research and epidemiological research [ Figure 2 ]. Basic research includes fundamental research in fields shown in Figure 2 . In almost all studies, at least one independent variable is varied, whereas the effects on the dependent variables are investigated. Clinical studies include observational studies and interventional studies and are subclassified as in Figure 2 .

    Figure 2. Classification of types of medical research

    An interventional clinical study is performed with the purpose of studying or demonstrating the clinical or pharmacological properties of drugs/devices and their side effects, and of establishing their efficacy or safety. Such studies also include those in which surgical, physical or psychotherapeutic procedures are examined.[ 9 ] Studies on drugs/devices are subject to legal and ethical requirements, including the Drugs Controller General of India (DCGI) directives. They require the approval of a DCGI-recognized Ethics Committee and must be performed in accordance with the rules of ‘Good Clinical Practice’.[ 10 ] Further details are available under the ‘Methodology for research II’ section in this issue of IJA. In 2004, the World Health Organization advised registration of all clinical trials in a public registry. In India, the Clinical Trials Registry of India was launched in 2007 ( www.ctri.nic.in ). The International Committee of Medical Journal Editors (ICMJE) mandates its member journals to publish only registered trials.[ 11 ]

    An observational clinical study is one in which knowledge gained from the treatment of patients with drugs is analysed using epidemiological methods. In these studies, diagnosis, treatment and monitoring are performed exclusively according to medical practice and not according to a specified study protocol.[ 9 ] They are subclassified as per Figure 2 .

    Epidemiological studies have two basic approaches, the interventional and observational. Clinicians are more familiar with interventional research, whereas epidemiologists usually perform observational research.

    Interventional studies are experimental in character and are subdivided into field and group studies, for example, iodine supplementation of cooking salt to prevent hypothyroidism. Many interventions are unsuitable for RCTs, as the exposure may be harmful to the subjects.

    Observational studies can be subdivided into cohort, case–control, cross-sectional and ecological studies.

    • Cohort studies are suited to detect connections between exposure and the development of disease. They are normally prospective studies of two healthy groups of subjects observed over time, in which one group is exposed to a specific substance, whereas the other is not. The occurrence of the disease can then be compared between the two groups. Cohort studies can also be retrospective.
    • Case–control studies are retrospective analyses in which previous exposure to a factor is compared between a group with the disease (cases) and a group without it (controls). The incidence rate cannot be calculated, and there is also a risk of selection bias and faulty recall.

    Secondary research

    Narrative review

    An expert senior author writes about a particular field, condition or treatment, including an overview, and this information is fortified by his experience. The article is in a narrative format. Its limitation is that one cannot tell whether recommendations are based on author's clinical experience, available literature and why some studies were given more emphasis. It can be biased, with selective citation of reports that reinforce the authors' views of a topic.[ 12 ]

    Systematic review

    Systematic reviews methodically and comprehensively identify studies focused on a specified topic, appraise their methodology, summate the results, identify key findings and reasons for differences across studies, and cite limitations of current knowledge.[ 13 ] They adhere to reproducible methods and recommended guidelines.[ 14 ] The methods used to compile data are explicit and transparent, allowing the reader to gauge the quality of the review and the potential for bias.[ 15 ]

    A systematic review can be presented in text or graphic form. In graphic form, data of different trials can be plotted with the point estimate and 95% confidence interval for each study, presented on an individual line. A properly conducted systematic review presents the best available research evidence for a focused clinical question. The review team may obtain information, not available in the original reports, from the primary authors. This ensures that findings are consistent and generalisable across populations, environment, therapies and groups.[ 12 ] A systematic review attempts to reduce bias in the identification and selection of studies for review, using a comprehensive search strategy and specifying inclusion criteria. The strength of a systematic review lies in the transparency of each phase and in highlighting the merits of each decision made while compiling information.

    Meta-analysis

    A review team compiles aggregate-level data from each primary study and, in some cases, solicits individual patient data from the primary studies.[ 16 , 17 ] Although difficult to perform, individual patient meta-analyses offer advantages over aggregate-level analyses.[ 18 ] These mathematically pooled results are referred to as a meta-analysis. Combining data from well-conducted primary studies provides a precise estimate of the “true effect.”[ 19 ] Pooling the samples of individual studies increases the overall sample size, enhances statistical power, narrows the confidence interval and thereby improves statistical value.
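    The pooling step can be sketched with the standard fixed-effect, inverse-variance calculation. The three (effect size, standard error) pairs below are hypothetical, chosen only to show how pooling yields a standard error, and hence a confidence interval, smaller than that of any single study:

```python
import math

# Hypothetical (effect size, standard error) pairs from three primary studies
studies = [(0.30, 0.15), (0.45, 0.20), (0.25, 0.10)]

weights = [1 / se ** 2 for _, se in studies]  # inverse-variance weights
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))  # smaller than any single study's SE
ci95 = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

print(round(pooled, 3), round(pooled_se, 3))  # pooled effect ~0.293, SE ~0.077
```

    A random-effects model (e.g., DerSimonian–Laird) would additionally account for between-study heterogeneity; the fixed-effect version above is the simplest illustration of why pooling tightens the estimate.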

    The structured process of Cochrane Collaboration systematic reviews has contributed to the improvement of their quality. For the meta-analysis to be definitive, the primary RCTs should have been conducted methodically. When the existing studies have important scientific and methodological limitations, such as smaller sized samples, the systematic review may identify where gaps exist in the available literature.[ 20 ] RCTs and systematic review of several randomised trials are less likely to mislead us, and thereby help judge whether an intervention is better.[ 2 ] Practice guidelines supported by large RCTs and meta-analyses are considered as ‘gold standard’ in EBM. This issue of IJA is accompanied by an editorial on Importance of EBM on research and practice (Guyat and Sriganesh 471_16).[ 21 ] The EBM pyramid grading the value of different types of research studies is shown in Figure 3 .

    Figure 3. The evidence-based medicine pyramid

    In the last decade, a number of studies and guidelines brought about path-breaking changes in anaesthesiology and critical care. Some guidelines such as the ‘Surviving Sepsis Guidelines-2004’[ 22 ] were later found to be flawed and biased. A number of large RCTs were rejected as their findings were erroneous. Another classic example is that of ENIGMA-I (Evaluation of Nitrous oxide In the Gas Mixture for Anaesthesia)[ 23 ] which implicated nitrous oxide for poor outcomes, but ENIGMA-II[ 24 , 25 ] conducted later, by the same investigators, declared it as safe. The rise and fall of the ‘tight glucose control’ regimen was similar.[ 26 ]

    Although RCTs are considered ‘gold standard’ in research, their status is at crossroads today. RCTs have conflicting interests and thus must be evaluated with careful scrutiny. EBM can promote evidence reflected in RCTs and meta-analyses. However, it cannot promulgate evidence not reflected in RCTs. Flawed RCTs and meta-analyses may bring forth erroneous recommendations. EBM thus should not be restricted to RCTs and meta-analyses but must involve tracking down the best external evidence to answer our clinical questions.

    Financial support and sponsorship

    Conflicts of interest

    There are no conflicts of interest.



    Original research article

    Learning scientific observation with worked examples in a digital learning environment


    • 1 Department Educational Sciences, Chair for Formal and Informal Learning, Technical University Munich School of Social Sciences and Technology, Munich, Germany
    • 2 Aquatic Systems Biology Unit, TUM School of Life Sciences, Technical University of Munich, Freising, Germany

    Science education often aims to increase learners’ acquisition of fundamental principles, such as learning the basic steps of scientific methods. Worked examples (WE) have proven particularly useful for supporting the development of such cognitive schemas and successive actions in order to avoid using up more cognitive resources than necessary. Therefore, we investigated the extent to which heuristic WE are beneficial for supporting the acquisition of a basic scientific methodological skill—conducting scientific observation. The current study has a one-factorial, quasi-experimental, comparative research design and was conducted as a field experiment. Sixty-two students at a German university learned about the steps of scientific observation during a course on applying a fluvial audit, in which several sections of a river were classified based on specific morphological characteristics. In the two experimental groups, scientific observation was supported either via faded WE or via non-faded WE, both presented as short videos. The control group did not receive support via WE. We assessed factual and applied knowledge acquisition regarding scientific observation, motivational aspects and cognitive load. The results suggest that WE promoted knowledge application: learners from both experimental groups were able to perform the individual steps of scientific observation more accurately. Fading of WE did not show any additional advantage compared to the non-faded version in this regard. Furthermore, the descriptive results reveal higher motivation and reduced extraneous cognitive load within the experimental groups, but none of these differences were statistically significant. Our findings add to existing evidence that WE may be useful for establishing scientific competences.

    1 Introduction

    Learning in science education frequently involves the acquisition of basic principles or generalities, whether of domain-specific topics (e.g., applying a mathematical multiplication rule) or of rather universal scientific methodologies (e.g., performing the steps of scientific observation) ( Lunetta et al., 2007 ). Previous research has shown that worked examples (WE) can be considered particularly useful for developing such cognitive schemata during learning to avoid using more cognitive resources than necessary for learning successive actions ( Renkl et al., 2004 ; Renkl, 2017 ). WE consist of the presentation of a problem, consecutive solution steps and the solution itself. This is especially advantageous in initial cognitive skill acquisition, i.e., for novice learners with low prior knowledge ( Kalyuga et al., 2001 ). With growing knowledge, fading WE can lead from example-based learning to independent problem-solving ( Renkl et al., 2002 ). Preliminary work has shown the advantage of WE in specific STEM domains like mathematics ( Booth et al., 2015 ; Barbieri et al., 2021 ), but less studies have investigated their impact on the acquisition of basic scientific competencies that involve heuristic problem-solving processes (scientific argumentation, Schworm and Renkl, 2007 ; Hefter et al., 2014 ; Koenen et al., 2017 ). In the realm of natural sciences, various basic scientific methodologies are employed to acquire knowledge, such as experimentation or scientific observation ( Wellnitz and Mayer, 2013 ). During the pursuit of knowledge through scientific inquiry activities, learners may encounter several challenges and difficulties. 
Similar to the hurdles faced in experimentation, where understanding the criteria for appropriate experimental design, including the development, measurement, and evaluation of results, is crucial ( Sirum and Humburg, 2011 ; Brownell et al., 2014 ; Dasgupta et al., 2014 ; Deane et al., 2014 ), scientific observation additionally presents its own set of issues. In scientific observation, e.g., the acquisition of new insights may be somewhat incidental due to spontaneous and uncoordinated observations ( Jensen, 2014 ). To address these challenges, it is crucial to provide instructional support, including the use of WE, particularly when observations are carried out in a more self-directed manner.

    For this reason, the aim of the present study was to determine the usefulness of digitally presented WE to support the acquisition of a basic scientific methodological skill—conducting scientific observations—using a digital learning environment. In this regard, this study examined the effects of different forms of digitally presented WE (non-faded vs. faded) on students’ cognitive and motivational outcomes and compared them to a control group without WE. Furthermore, the combined perspective of factual and applied knowledge, as well as motivational and cognitive aspects, represent further value added to the study.

    2 Theoretical background

    2.1 Worked examples

    WE have been commonly used in the fields of STEM education (science, technology, engineering, and mathematics) ( Booth et al., 2015 ). They consist of a problem statement, the steps to solve the problem, and the solution itself ( Atkinson et al., 2000 ; Renkl et al., 2002 ; Renkl, 2014 ). The success of WE can be explained by their impact on cognitive load (CL) during learning, based on assumptions from Cognitive Load Theory ( Sweller, 2006 ).

    Learning with WE is considered time-efficient, effective, and superior to problem-based learning (presentation of the problem without demonstration of solution steps) when it comes to knowledge acquisition and transfer (WE-effect, Atkinson et al., 2000 ; Van Gog et al., 2011 ). In particular, WE can help by reducing extraneous load (caused by the presentation and design of the learning material) and, in turn, can lead to an increase in germane load (the learner’s effort to understand the learning material) ( Paas et al., 2003 ; Renkl, 2014 ). With regard to intrinsic load (difficulty and complexity of the learning material), it is still controversially discussed whether it can be altered by instructional design, e.g., WE ( Gerjets et al., 2004 ). WE have a positive effect on learning and knowledge transfer, especially for novices, as the step-by-step presentation of the solution requires less extraneous mental effort compared to problem-based learning ( Sweller et al., 1998 ; Atkinson et al., 2000 ; Bokosmaty et al., 2015 ). With growing knowledge, WE can lose their advantages (due to the expertise-reversal effect), and scaffolding learning via faded WE might be more successful for knowledge gain and transfer ( Renkl, 2014 ). Faded WE are similar to complete WE but fade out solution steps as knowledge and competencies grow. Faded WE enhance near-knowledge transfer and reduce errors compared to non-faded WE ( Renkl et al., 2000 ).

    In addition, the reduction of intrinsic and extraneous CL by WE also has an impact on learner motivation, such as interest ( Van Gog and Paas, 2006 ). Um et al. (2012) showed that there is a strong positive correlation between germane CL and the motivational aspects of learning, like satisfaction and emotion. Gupta (2019) mentions a positive correlation between CL and interest. Van Harsel et al. (2019) found that WE positively affect learning motivation, while no such effect was found for problem-solving. Furthermore, learning with WE increases the learners’ belief in their competence in completing a task. In addition, fading WE can lead to higher motivation for more experienced learners, while non-faded WE can be particularly motivating for learners without prior knowledge ( Paas et al., 2005 ). In general, fundamental motivational aspects during the learning process, such as situational interest ( Lewalter and Knogler, 2014 ) or motivation-relevant experiences, like basic needs, are influenced by learning environments. At the same time, their use also depends on motivational characteristics of the learning process, such as self-determined motivation ( Deci and Ryan, 2012 ). Therefore, we assume that learning with WE as a relevant component of a learning environment might also influence situational interest and basic needs.

    2.1.1 Presentation of worked examples

    WE are frequently used in digital learning scenarios ( Renkl, 2014 ). When designing WE, the application via digital learning media can be helpful, as their content can be presented in different ways (video, audio, text, and images), tailored to the needs of the learners, so that individual use is possible according to their own prior knowledge or learning pace ( Mayer, 2001 ). Also, digital media can present relevant information in a timely, motivating, appealing and individualized way and support learning in an effective and needs-oriented way ( Mayer, 2001 ). The advantages of using digital media in designing WE have already been shown in previous studies. Dart et al. (2020) presented WE as short videos (WEV). They report that the use of WEV leads to increased student satisfaction and more positive attitudes. Approximately 90% of the students indicated an active learning approach when learning with the WEV. Furthermore, the results show that students improved their content knowledge through WEV and that they found WEV useful for other courses as well.

    Another study ( Kay and Edwards, 2012 ) presented WE as video podcasts. Here, the advantages of WE regarding self-determined learning in terms of learning location, learning time, and learning speed were shown. Learning performance improved significantly after use. The step-by-step, easy-to-understand explanations, the diagrams, and the ability to determine the learning pace by oneself were seen as beneficial.

    Multimedia WE can also be enhanced with self-explanation prompts ( Berthold et al., 2009 ). Learning from WE with self-explanation prompts was shown to be superior to other learning methods, such as hypertext learning and observational learning.

    In addition to presenting WE in different medial ways, WE can also comprise different content domains.

    2.1.2 Content and context of worked examples

    Regarding the content of WE, algorithmic and heuristic WE, as well as single-content and double-content WE, can be distinguished ( Reiss et al., 2008 ; Koenen et al., 2017 ; Renkl, 2017 ). Algorithmic WE are traditionally used in the very structured mathematical–physical field. Here, an algorithm with very specific solution steps is to be learned, for example, in probability calculation ( Koenen et al., 2017 ). In this study, however, we focus on heuristic double-content WE. Heuristic WE in science education comprise fundamental scientific working methods, e.g., conducting experiments ( Koenen et al., 2017 ). Furthermore, double-content WE contain two learning domains that are relevant for the learning process: (1) the learning domain comprises the abstract process or concept that is primarily to be learned, e.g., scientific methodologies like observation (see section 2.2), while (2) the exemplifying domain consists of the content that is necessary to teach this process or concept, e.g., mapping of river structure ( Renkl et al., 2009 ).

    Depending on the WE content to be learned, it may be necessary for learning to take place in different settings. This can be in a formal or informal learning setting or a non-formal field setting. In this study, the focus is on learning scientific observation (learning domain) through river structure mapping (exemplary domain), which takes place with the support of digital media in a formal (university) setting, but in an informal context (nature).

    2.2 Scientific observation

    Scientific observation is fundamental to all scientific activities and disciplines ( Kohlhauf et al., 2011 ). Scientific observation must be clearly distinguished from everyday observation, where observation is purely a matter of noticing and describing specific characteristics ( Chinn and Malhotra, 2001 ). In contrast to this everyday observation, scientific observation as a method of knowledge acquisition can be described as a rather complex activity, defined as the theory-based, systematic and selective perception of concrete systems and processes without any fundamental manipulation ( Wellnitz and Mayer, 2013 ). Wellnitz and Mayer (2013) described the scientific observation process via six steps: (1) formulation of the research question(s), (2) deduction of the null hypothesis and the alternative hypothesis, (3) planning of the research design, (4) conducting the observation, (5) analyzing the data, and (6) answering the research question(s) on this basis. Only through reliable and qualified observation can valid data be obtained that provide solid scientific evidence ( Wellnitz and Mayer, 2013 ).

    Since observation activities are not trivial and learners often observe without generating new knowledge or connecting their observations to scientific explanations and thoughts, it is important to provide support at the related cognitive level, so that observation activities can be conducted in a structured way according to pre-defined criteria ( Ford, 2005 ; Eberbach and Crowley, 2009 ). Especially during field-learning experiences, scientific observation is often spontaneous and uncoordinated, whereby random discoveries result in knowledge gain ( Jensen, 2014 ).

    To promote successful observing in rather unstructured settings like field trips, instructional support for the observation process seems useful. To guide observation activities, digitally presented WE seem to be an appropriate way to introduce learners to the individual steps of scientific observation using concrete examples.

    2.3 Research questions and hypothesis

    The present study investigates the effect of digitally presented double-content WE that support the mapping of a small Bavarian river by demonstrating the steps of scientific observation. In this analysis, we focus on the learning domain of the WE and do not investigate the exemplifying domain in detail. Distinct ways of integrating WE in the digital learning environment (faded WE vs. non-faded WE) are compared with each other and with a control group (no WE). The aim is to examine to what extent differences between those conditions exist with regard to (RQ1) learners’ competence acquisition [acquisition of factual knowledge about the scientific observation method (quantitative data) and practical application of the scientific observation method (quantified qualitative data)], (RQ2) learners’ motivation (situational interest and basic needs), and (RQ3) CL. It is assumed (Hypothesis 1) that the integration of WE (faded and non-faded) leads to significantly higher competence acquisition (factual and applied knowledge), significantly higher motivation and significantly lower extraneous CL as well as higher germane CL during the learning process compared to a learning environment without WE. No differences between the conditions are expected regarding intrinsic CL. Furthermore, it is assumed (Hypothesis 2) that the integration of faded WE leads to significantly higher competence acquisition, significantly higher motivation, and lower extraneous CL as well as higher germane CL during the learning process compared to non-faded WE. No differences between the conditions are expected with regard to intrinsic CL.

    The study took place during the field trips of a university course on the application of a fluvial audit (FA) using the German working aid for mapping the morphology of rivers and their floodplains ( Bayerisches Landesamt für Umwelt, 2019 ). FA is the leading fluvial geomorphological tool for application to data collection contiguously along all watercourses of interest ( Walker et al., 2007 ). It is widely used because it is a key example of environmental conservation and monitoring that needs to be taught to students of selected study programs; thus, knowing about the most effective ways of learning is of high practical relevance.

    3.1 Sample and design

3.1.1 Sample

The study was conducted with 62 science students and doctoral students of a German university (age M  = 24.03 years; SD  = 4.20; 36 females; 26 males). A total of 37 participants had already conducted a scientific observation and rated their knowledge in this regard at a medium level ( M  = 3.32 out of 5; SD  = 0.88). Seven participants had already conducted an FA and rated their knowledge in this regard at a medium level ( M  = 3.14 out of 5; SD  = 0.90). A total of 25 participants had no experience at all. Two participants had to be excluded from the sample afterward because no posttest results were available.

    3.1.2 Design

The study used a one-factorial, quasi-experimental comparative research design and was conducted as a field experiment with a pre/posttest design. Participants were randomly assigned to one of three conditions: no WE ( n  = 20), faded WE ( n  = 20), and non-faded WE ( n  = 20).

    3.2 Implementation and material

3.2.1 Implementation

The study started with an online kick-off meeting in which two lecturers informed all students within an hour about the basics of assessing the structural integrity of the study river and about the course of the field trip days on which the FA would be conducted. Afterward, within 2 weeks, students independently studied the FA via Moodle, following the German standard method according to the scoresheets of Bayerisches Landesamt für Umwelt (2019) . This independent preparation using the online documents was a necessary prerequisite for participation in the field days and was checked in the pre-testing. The preparatory online documents included six short videos and four PDF files covering the content, guidance on the German protocol of the FA, general information on river landscapes, information about anthropogenic changes in stream morphology, and the scoresheets for applying the FA. In these sheets, the river and its floodplain are subdivided into sections of 100 m in length. Each of these sections is evaluated by assessing 21 habitat factors related to flow characteristics and structural variability. The findings are then transferred into a scoring system describing structural integrity from 1 (natural) to 7 (highly modified). Habitat factors have a decisive influence on the living conditions of animals and plants in and around rivers. They include, e.g., variability in water depth, stream width, substratum diversity, and diversity of flow velocities.

    3.2.2 Materials

On the field trip days, participants were handed a tablet and a paper-based FA worksheet (last accessed 21st September 2022). 1 This four-page assessment sheet was accompanied by a digital learning environment presented on Moodle that instructed the participants on mapping the water body structure and guided them through the scientific observation method. All three Moodle courses were identical in structure and design; the only difference was the implementation of the WE. Below, the course without WE is described first. The other two courses have an identical structure but contain additional WE in the form of learning videos.

    3.2.3 No worked example

After a short welcome and introduction to the course navigation, the FA started with the description of a short hypothetical scenario: Participants take the role of an employee of an urban planning office that assesses the ecomorphological status of a small river near a Bavarian city. The river was divided into five sections that had to be mapped separately, and the course was structured accordingly. At the beginning of each section, participants had to formulate and write down a research question and corresponding hypotheses regarding the ecomorphological status of the river section; they then had to collect data via the mapping sheet, evaluate their data, and draw a conclusion. Since this course serves as a control group, no WE videos supporting the scientific observation method were integrated. The course is laid out like a book in which it is not possible to scroll back. This matters because participants should not be able to revisit information, keeping the conditions comparable as well as distinguishable.

    3.2.4 Non-faded worked example

In the course with non-faded WE, three instructional videos are shown for each of the five sections. In each of the three videos, two steps of the scientific observation method are presented so that, finally, all six steps of scientific observation are demonstrated. The mapping of the first section starts after the general introduction (as described above) with the instruction to work on the first two steps of scientific observation: the formulation of a research question and hypotheses. To support this, a video of about 4 min explains the features of scientifically sound research questions and hypotheses. To this aim, a practical example, including explanations and tips, is given regarding the formulation of research questions and hypotheses for this section (e.g., “To what extent do the building development and the closeness of the path to the water body have an influence on the structure of the water body?” Alternative hypothesis: It is assumed that the housing development and the closeness of the path to the water body have a negative influence on the water body structure. Null hypothesis: It is assumed that the housing development and the closeness of the path to the watercourse have no negative influence on the watercourse structure.). Participants should now formulate their own research questions and hypotheses, write them down in a text field at the end of the page, and then skip to the next page. The next two steps of scientific observation, planning and conducting, are explained in a short 4-min video. To this aim, a practical example including explanations and tips is given regarding planning and conducting scientific observation for this section (e.g., “It’s best to go through each evaluation category carefully one by one; that way you are sure not to forget anything!”). Now, participants were asked to collect data for the first section using their paper-based FA worksheet.
Participants individually surveyed the river and reported their results in the mapping sheet by ticking the respective boxes in it. After collecting these data, they returned to the digital learning environment to learn how to use them by studying the last two steps of scientific observation, evaluation and conclusion. The third 4-min video explained how to evaluate and interpret the collected data. For this purpose, a practical example with explanations and tips is given regarding evaluating and interpreting data for this section (e.g., “What were the individual points that led to the assessment? Have there been points that were weighted more than others? Remember the introduction video!”). At the end of the page, participants could answer their previously stated research questions and hypotheses by evaluating their collected data and drawing a conclusion. This brings participants to the end of the first mapping section. Afterward, the cycle begins again with the second section of the river that has to be mapped. Again, participants had to conduct the steps of scientific observation, guided by WE videos that explain the steps in slightly different wording or with different examples. A total of five sections are mapped, with the structure of the learning environment and the videos following the same procedure throughout.

    3.2.5 Faded worked example

The digital learning environment with the faded WE follows the same structure as the version with the non-faded WE. However, in this version, the information in the WE videos is successively reduced. In the first section, all three videos are identical to the version with the non-faded WE. In the second section, content was faded as follows: the tip at the end was omitted in all three videos. In the third section, the tip and the practical example were omitted. In the fourth and fifth sections, no more videos were presented, only the work instructions.
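The fading schedule described above can be summarized as a simple lookup table. The sketch below is our own illustration; the names and representation are assumptions, not part of the study materials.

```python
# Illustrative sketch (not from the study materials): which WE components
# remain in the videos of each river section under the faded condition.
FADING_SCHEDULE = {
    1: ["explanation", "example", "tip"],  # identical to non-faded WE
    2: ["explanation", "example"],         # tip omitted
    3: ["explanation"],                    # tip and example omitted
    4: [],                                 # work instructions only, no videos
    5: [],                                 # work instructions only, no videos
}

def components_for(section):
    """Return the WE components shown in a given river section (1-5)."""
    return FADING_SCHEDULE[section]
```

Encoding the schedule as data rather than prose makes the monotonic reduction of support across sections easy to verify at a glance.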

    3.3 Procedure

The data collection took place on four consecutive days on the university campus, with a maximum group size of 15 participants per day. The students were randomly assigned to one of the three conditions (no WE vs. faded WE vs. non-faded WE). After a short introduction to the procedure, the participants were handed the paper-based FA worksheet and one tablet per person. Students scanned the QR code on the first page of the worksheet, which opened the pretest questionnaire; it took about 20 min to complete. After completing the questionnaire, the group walked for about 15 min to the nearby small river that was to be mapped. Upon arrival, there was first a short introduction to the digital learning environment and a check that the login (via university account on Moodle) worked. During the next 4 h, the participants individually mapped five segments of the river using the mapping worksheet. They were guided through the steps of scientific observation by the digital learning environment on the tablet. The results of their scientific observation were logged within the digital learning environment. At the end of the digital learning environment, participants were directed to the posttest via a link. After completing the test, the tablets and mapping sheets were returned. Overall, the study took about 5 h per group each day.

    3.4 Instruments

In the pretest, sociodemographic data (age and gender), the study domain, and the number of study semesters were collected. Additionally, previous scientific observation experience and the estimation of one’s own ability in this regard were assessed. For example, participants were asked whether they had already conducted a scientific observation and, if so, how they rated their abilities on a 5-point scale from very low to very high. Preparation for the FA on the basis of the learning material was also assessed: Participants were asked whether they had studied all six videos and all four PDF documents, with the response options not at all, partially, and completely. Furthermore, a factual knowledge test about scientific observation and questions about self-determination theory were administered. The posttest comprised the same knowledge test plus additional questions on basic needs, situational interest, measures of CL, and questions about the usefulness of the WE. All scales were presented online, and participants reached the questionnaire via QR code.

    3.4.1 Scientific observation competence acquisition

    For the factual knowledge (quantitative assessment of the scientific observation competence), a single-choice knowledge test with 12 questions was developed and used as pre- and posttest with a maximum score of 12 points. It assesses the learners’ knowledge of the scientific observation method regarding the steps of scientific observation, e.g., formulating research questions and hypotheses or developing a research design. The questions are based on Wahser (2008 , adapted by Koenen, 2014 ) and adapted to scientific observation: “Although you are sure that you have conducted the scientific observation correctly, an unexpected result turns up. What conclusion can you draw?” Each question has four answer options (one of which is correct) and, in addition, one “I do not know” option.

    For the applied knowledge (quantified qualitative assessment of the scientific observation competence), students’ scientific observations written in the digital learning environment were analyzed. A coding scheme was used with the following codes: 0 = insufficient (text field is empty or includes only insufficient key points), 1 = sufficient (a research question and no hypotheses or research question and inappropriate hypotheses are stated), 2 = comprehensive (research question and appropriate hypothesis or research question and hypotheses are stated, but, e.g., incorrect null hypothesis), 3 = very comprehensive (correct research question, hypothesis and null hypothesis are stated). One example of a very comprehensive answer regarding the research question and hypothesis is: To what extent does the lack of riparian vegetation have an impact on water body structure? Hypothesis: The lack of shore vegetation has a negative influence on the water body structure. Null hypothesis: The lack of shore vegetation has no influence on the water body structure. Afterward, a sum score was calculated for each participant. Five times, a research question and hypotheses (steps 1 and 2 in the observation process) had to be formulated (5 × max. 3 points = 15 points), and five times, the research questions and hypotheses had to be answered (steps 5 and 6 in the observation process: evaluation and conclusion) (5 × max. 3 points = 15 points). Overall, participants could reach up to 30 points. Since the observation and evaluation criteria in data collection and analysis were strongly predetermined by the scoresheet, steps 3 and 4 of the observation process (planning and conducting) were not included in the analysis.
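The scoring procedure above can be sketched in a few lines. The function below is a hypothetical illustration of the sum-score calculation (ten codable responses per participant, each coded 0-3), not the authors' actual analysis code.

```python
# Coding scheme from the study: 0-3 per response.
CODE_LABELS = {0: "insufficient", 1: "sufficient",
               2: "comprehensive", 3: "very comprehensive"}

def applied_knowledge_score(codes):
    """Sum ten 0-3 codes (5 question/hypothesis formulations plus
    5 evaluations/conclusions); the maximum is 5*3 + 5*3 = 30 points."""
    if len(codes) != 10 or any(c not in CODE_LABELS for c in codes):
        raise ValueError("expected ten codes in the range 0..3")
    return sum(codes)

# Example participant: strong formulations, weaker conclusions.
print(applied_knowledge_score([3, 3, 2, 3, 2, 2, 1, 2, 1, 2]))  # -> 21
```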

    All 600 cases (60 participants, each 10 responses to code) were coded by the first author. For verification, 240 cases (24 randomly selected participants, eight from each course) were cross-coded by an external coder. In 206 of the coded cases, the raters agreed. The cases in which the raters did not agree were discussed together, and a solution was found. This results in Cohen’s κ = 0.858, indicating a high to very high level of agreement. This indicates that the category system is clearly formulated and that the individual units of analysis could be correctly assigned.
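For two raters, Cohen's κ compares the observed agreement with the agreement expected by chance given each rater's marginal code frequencies. A minimal self-contained sketch of that computation (our own illustration, not the authors' code):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement from the raters' marginal category frequencies.
    expected = sum(freq_a[c] * freq_b[c]
                   for c in set(freq_a) | set(freq_b)) / n ** 2
    return (observed - expected) / (1 - expected)

# Perfect agreement yields kappa = 1.0.
print(cohens_kappa([0, 1, 2, 3], [0, 1, 2, 3]))  # -> 1.0
```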

    3.4.2 Self-determination index

For the calculation of the self-determination index (SDI), the Thomas and Müller (2011) scale for self-determination was used in the pretest. The scale consists of four subscales: intrinsic motivation (five items; e.g., I engage with the workshop content because I enjoy it; reliability of alpha = 0.87), identified motivation (four items; e.g., I engage with the workshop content because it gives me more options when choosing a career; alpha = 0.84), introjected motivation (five items; e.g., I engage with the workshop content because otherwise I would have a guilty feeling; alpha = 0.79), and external motivation (three items, e.g., I engage with the workshop content because I simply have to learn it; alpha = 0.74). Participants could indicate their answers on a 5-point Likert scale ranging from 1 = completely disagree to 5 = completely agree. To calculate the SDI, the sum of the external regulation styles (introjected and external) is subtracted from the sum of the self-determined regulation styles (intrinsic and identified), with intrinsic and external regulation weighted twice ( Thomas and Müller, 2011 ).
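Under this weighting, the index can be written as SDI = (2·intrinsic + identified) − (introjected + 2·external). A minimal sketch with illustrative subscale means (not study data):

```python
def sdi(intrinsic, identified, introjected, external):
    """Self-determination index: self-determined minus external
    regulation styles, with intrinsic and external weighted twice."""
    return (2 * intrinsic + identified) - (introjected + 2 * external)

# A predominantly self-determined motivational profile gives a positive SDI.
print(round(sdi(4.2, 3.8, 2.1, 1.5), 2))  # -> 7.1
```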

    3.4.3 Motivation

    Basic needs were measured in the posttest with the scale by Willems and Lewalter (2011) . The scale consists of three subscales: perceived competence (four items; e.g., during the workshop, I felt that I could meet the requirements; alpha = 0.90), perceived autonomy (five items; e.g., during the workshop, I felt that I had a lot of freedom; alpha = 0.75), and perceived autonomy regarding personal wishes and goals (APWG) (four items; e.g., during the workshop, I felt that the workshop was how I wish it would be; alpha = 0.93). We added all three subscales to one overall basic needs scale (alpha = 0.90). Participants could indicate their answers on a 5-point Likert scale ranging from 1 = completely disagree to 5 = completely agree.

    Situational interest was measured in the posttest with the 12-item scale by Lewalter and Knogler (2014 ; Knogler et al., 2015 ; Lewalter, 2020 ; alpha = 0.84). The scale consists of two subscales: catch (six items; e.g., I found the workshop exciting; alpha = 0.81) and hold (six items; e.g., I would like to learn more about parts of the workshop; alpha = 0.80). Participants could indicate their answers on a 5-point Likert scale ranging from 1 = completely disagree to 5 = completely agree.

    3.4.4 Cognitive load

In the posttest, CL was assessed to examine the mental load during the learning process. The intrinsic CL (three items; e.g., this task was very complex; alpha = 0.70) and extraneous CL (three items; e.g., in this task, it is difficult to identify the most important information; alpha = 0.61) are measured with the scales from Klepsch et al. (2017) . The germane CL (two items; e.g., the learning session contained elements that supported me to better understand the learning material; alpha = 0.72) is measured with the scale from Leppink et al. (2013) . Participants could indicate their answers on a 5-point Likert scale ranging from 1 = completely disagree to 5 = completely agree.

    3.4.5 Attitudes toward worked examples

    To measure how effective participants rated the WE, we used two scales related to the WE videos as instructional support. The first scale from Renkl (2001) relates to the usefulness of WE. The scale consists of four items (e.g., the explanations were helpful; alpha = 0.71). Two items were recoded because they were formulated negatively. The second scale is from Wachsmuth (2020) and relates to the participant’s evaluation of the WE. The scale consists of nine items (e.g., I always did what was explained in the learning videos; alpha = 0.76). Four items were recoded because they were formulated negatively. Participants could indicate their answers on a 5-point Likert scale ranging from 1 = completely disagree to 5 = completely agree.

    3.5 Data analysis

An ANOVA was used to test whether prior knowledge and the SDI differed between the three groups. As no significant differences between the conditions were found [prior factual knowledge: F (2, 59) = 0.15, p  = 0.865, η 2  = 0.00; self-determination index: F (2, 59) = 0.19, p  = 0.829, η 2  = 0.00], they were not included as covariates in subsequent analyses.
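The logic of such a between-group check can be illustrated with a plain one-way ANOVA F statistic, computed from between- and within-group sums of squares. This is a self-contained sketch, not the authors' analysis script:

```python
from statistics import mean

def one_way_anova_f(*groups):
    """F statistic of a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    values = [x for g in groups for x in g]
    grand, k, n = mean(values), len(groups), len(values)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Identical groups produce F = 0: no between-group variance at all.
print(one_way_anova_f([1, 2, 3], [1, 2, 3], [1, 2, 3]))  # -> 0.0
```

In practice, one would compare the resulting F against the F distribution with (k − 1, n − k) degrees of freedom to obtain a p value.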

Furthermore, a repeated-measures one-way analysis of variance (ANOVA) was conducted to compare the three treatment groups (no WE vs. faded WE vs. non-faded WE) regarding the increase in factual knowledge about the scientific observation method from pretest to posttest.

A MANOVA (multivariate analysis of variance) was calculated with the three groups (no WE vs. non-faded WE vs. faded WE) as a fixed factor and, as dependent variables, the practical application of the scientific observation method (first research question), situational interest, basic needs (second research question), and CL (third research question).

Additionally, Bonferroni-adjusted post hoc analyses were conducted to determine which of the three groups differed in applied knowledge.

4 Results

The descriptive statistics for the three groups in terms of prior factual knowledge about the scientific observation method and the self-determination index are shown in Table 1 . The descriptive statistics revealed only small, non-significant differences between the three groups in terms of factual knowledge.


    Table 1 . Means (standard deviations) of factual knowledge tests (pre- and posttest) and self-determination index for the three different groups.

The results of the ANOVA revealed that the overall increase in factual knowledge from pre- to posttest just missed significance [ F (1, 57) = 3.68, p  = 0.060, η 2  = 0.06]. Furthermore, no significant differences between the groups were found regarding the acquisition of factual knowledge from pre- to posttest [ F (2, 57) = 2.93, p  = 0.062, η 2  = 0.09].

    An analysis of the descriptive statistics showed that the largest differences between the groups were found in applied knowledge (qualitative evaluation) and extraneous load (see Table 2 ).


    Table 2 . Means (standard deviations) of dependent variables with the three different groups.

Results of the MANOVA revealed significant overall differences between the three groups [ F (12, 106) = 2.59, p  = 0.005, η 2  = 0.23]. Significant effects were found for the application of knowledge [ F (2, 57) = 13.26, p  < 0.001, η 2  = 0.32]. Extraneous CL just missed significance [ F (2, 57) = 2.68, p  = 0.065, η 2  = 0.09]. There were no significant effects for situational interest [ F (2, 57) = 0.44, p  = 0.644, η 2  = 0.02], basic needs [ F (2, 57) = 1.22, p  = 0.302, η 2  = 0.04], germane CL [ F (2, 57) = 2.68, p  = 0.077, η 2  = 0.09], and intrinsic CL [ F (2, 57) = 0.28, p  = 0.757, η 2  = 0.01].

Bonferroni-adjusted post hoc analysis revealed that the group without WE had significantly lower scores in the evaluation of the applied knowledge than the group with non-faded WE ( p  < 0.001, M diff  = −8.90, 95% CI [−13.47, −4.33]) and than the group with faded WE ( p  < 0.001, M diff  = −7.40, 95% CI [−11.97, −2.83]). No difference was found between the groups with faded and non-faded WE ( p  = 1.00, M diff  = −1.50, 95% CI [−6.07, 3.07]).
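The Bonferroni adjustment behind such post hoc comparisons simply multiplies each raw p value by the number of comparisons, capping the result at 1. A minimal sketch with illustrative p values (not the study's data):

```python
def bonferroni_adjust(p_values):
    """Bonferroni correction: multiply each p value by the number of
    comparisons, capping each adjusted value at 1.0."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Three pairwise comparisons, as in a three-group post hoc analysis.
print(bonferroni_adjust([0.01, 0.02, 0.5]))
```

Each adjusted value is then compared against the usual alpha (e.g., 0.05), which keeps the family-wise error rate at or below alpha.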

The descriptive statistics regarding the perceived usefulness of the WE and participants’ evaluation of the WE revealed that the group with the faded WE rated usefulness slightly higher than the participants with non-faded WE and also reported a more positive evaluation. However, the results of a MANOVA revealed no significant overall differences [ F (2, 37) = 0.32, p  = 0.732, η 2  = 0.02] (see Table 3 ).


Table 3 . Means (standard deviations) of the perceived usefulness and evaluation of the WE for the two WE groups.

    5 Discussion

This study investigated the use of WE to support students’ acquisition of scientific observation skills. Below, the research questions are answered, and the implications and limitations of the study are discussed.

    5.1 Results on factual and applied knowledge

In terms of knowledge gain (RQ1), our findings revealed no significant differences in participants’ results on the factual knowledge test, both across all three groups and specifically between the two experimental groups. These results contradict related literature, in which WE had a positive impact on knowledge acquisition ( Renkl, 2014 ) and faded WE are considered more effective for knowledge acquisition and transfer than non-faded WE ( Renkl et al., 2000 ; Renkl, 2014 ). A limitation of the study is that participants already scored very high on the pretest, so the intervention was unlikely to yield significant knowledge gains due to ceiling effects ( Staus et al., 2021 ). Yet, nearly half of the students reported being novices in the field prior to the study, suggesting that the difficulty of some test items might have been too low. It would therefore be important to revise the factual knowledge test, e.g., the difficulty of the distractors, in a future study.

Nevertheless, with regard to applied knowledge, the results revealed large significant differences: Participants in the two experimental groups performed better in conducting the scientific observation steps than participants in the control group. Within the experimental groups, the non-faded WE group performed better than the faded WE group. However, the absence of significant differences between the two experimental groups suggests that both faded and non-faded WE, used as double-content WE, are suitable for teaching applied knowledge about scientific observation in the learning domain ( Koenen, 2014 ). Furthermore, our results differ from the findings of Renkl et al. (2000) , in which the faded version led to the highest knowledge transfer. Although the non-faded WE performed best in our study, the faded version of the WE was also appropriate for improving learning, confirming the findings of Renkl (2014) and Hesser and Gregory (2015) .

    5.2 Results on learners’ motivation

    Regarding participants’ motivation (RQ2; situational interest and basic needs), no significant differences were found across all three groups or between the two experimental groups. However, descriptive results reveal slightly higher motivation in the two experimental groups than in the control group. In this regard, our results confirm existing literature on a descriptive level showing that WE lead to higher learning-relevant motivation ( Paas et al., 2005 ; Van Harsel et al., 2019 ). Additionally, both experimental groups rated the usefulness of the WE as high and reported a positive evaluation of the WE. Therefore, we assume that even non-faded WE do not lead to over-instruction. Regarding the descriptive tendency, a larger sample might yield significant results and detect even small effects in future investigations. However, because this study also focused on comprehensive qualitative data analysis, it was not possible to evaluate a larger sample in this study.

    5.3 Results on cognitive load

Finally, CL did not vary significantly across the three groups (RQ3). However, differences in extraneous CL just missed significance. Descriptively, the control group reported the highest extraneous and lowest germane CL. The faded WE group showed the lowest extraneous CL and a similar germane CL to the non-faded WE group. These results are consistent with Paas et al. (2003) and Renkl (2014) , who report that WE can help to reduce extraneous CL and, in return, lead to an increase in germane CL. Again, these differences were just above the significance level, and it would be advantageous to retest with a larger sample to detect even small effects.

    Taken together, our results only partially confirm H1: the integration of WE (both faded and non-faded WE) led to a higher acquisition of application knowledge than the control group without WE, but higher factual knowledge was not found. Furthermore, higher motivation or different CL was found on a descriptive level only. The control group provided the basis for comparison with the treatment in order to investigate if there is an effect at all and, if so, how large the effect is. This is an important point to assess whether the effort of implementing WE is justified. Additionally, regarding H2, our results reveal no significant differences between the two WE conditions. We assume that the high complexity of the FA could play a role in this regard, which might be hard to handle, especially for beginners, so learners could benefit from support throughout (i.e., non-faded WE).

    In addition to the limitations already mentioned, it must be noted that only one exemplary topic was investigated, and the sample only consisted of students. Since only the learning domain of the double-content WE was investigated, the exemplifying domain could also be analyzed, or further variables like motivation could be included in further studies. Furthermore, the influence of learners’ prior knowledge on learning with WE could be investigated, as studies have found that WE are particularly beneficial in the initial acquisition of cognitive skills ( Kalyuga et al., 2001 ).

    6 Conclusion

    Overall, the results of the current study suggest a beneficial role for WE in supporting the application of scientific observation steps. A major implication of these findings is that both faded and non-faded WE should be considered, as no general advantage of faded WE over non-faded WE was found. This information can be used to develop targeted interventions aimed at the support of scientific observation skills.

    Data availability statement

    The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

    Ethics statement

    Ethical approval was not required for the study involving human participants in accordance with the local legislation and institutional requirements. Written informed consent to participate in this study was not required from the participants in accordance with the national legislation and the institutional requirements.

    Author contributions

    ML: Writing – original draft. SM: Writing – review & editing. JP: Writing – review & editing. JG: Writing – review & editing. DL: Writing – review & editing.

Funding

The author(s) declare that no financial support was received for the research, authorship, and/or publication of this article.

    Conflict of interest

    The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

    Publisher’s note

    All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

    Supplementary material

    The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/feduc.2024.1293516/full#supplementary-material

    1. ^ https://www.lfu.bayern.de/wasser/gewaesserstrukturkartierung/index.htm

    Atkinson, R. K., Derry, S. J., Renkl, A., and Wortham, D. (2000). Learning from examples: instructional principles from the worked examples research. Rev. Educ. Res. 70, 181–214. doi: 10.3102/00346543070002181


    Barbieri, C. A., Booth, J. L., Begolli, K. N., and McCann, N. (2021). The effect of worked examples on student learning and error anticipation in algebra. Instr. Sci. 49, 419–439. doi: 10.1007/s11251-021-09545-6

    Bayerisches Landesamt für Umwelt. (2019). Gewässerstrukturkartierung von Fließgewässern in Bayern – Erläuterungen zur Erfassung und Bewertung. (Water structure mapping of flowing waters in Bavaria - Explanations for recording and assessment) . Available at: https://www.bestellen.bayern.de/application/eshop_app000005?SID=1020555825&ACTIONxSESSxSHOWPIC(BILDxKEY:%27lfu_was_00152%27,BILDxCLASS:%27Artikel%27,BILDxTYPE:%27PDF%27)


    Berthold, K., Eysink, T. H., and Renkl, A. (2009). Assisting self-explanation prompts are more effective than open prompts when learning with multiple representations. Instr. Sci. 37, 345–363. doi: 10.1007/s11251-008-9051-z

    Bokosmaty, S., Sweller, J., and Kalyuga, S. (2015). Learning geometry problem solving by studying worked examples: effects of learner guidance and expertise. Am. Educ. Res. J. 52, 307–333. doi: 10.3102/0002831214549450

    Booth, J. L., McGinn, K., Young, L. K., and Barbieri, C. A. (2015). Simple practice doesn’t always make perfect. Policy Insights Behav. Brain Sci. 2, 24–32. doi: 10.1177/2372732215601691

    Brownell, S. E., Wenderoth, M. P., Theobald, R., Okoroafor, N., Koval, M., Freeman, S., et al. (2014). How students think about experimental design: novel conceptions revealed by in-class activities. Bioscience 64, 125–137. doi: 10.1093/biosci/bit016

    Chinn, C. A., and Malhotra, B. A. (2001). “Epistemologically authentic scientific reasoning” in Designing for science: implications from everyday, classroom, and professional settings . eds. K. Crowley, C. D. Schunn, and T. Okada (Mahwah, NJ: Lawrence Erlbaum), 351–392.

    Dart, S., Pickering, E., and Dawes, L. (2020). Worked example videos for blended learning in undergraduate engineering. AEE J. 8, 1–22. doi: 10.18260/3-1-1153-36021

    Dasgupta, A., Anderson, T. R., and Pelaez, N. J. (2014). Development and validation of a rubric for diagnosing students’ experimental design knowledge and difficulties. CBE Life Sci. Educ. 13, 265–284. doi: 10.1187/cbe.13-09-0192


    Deane, T., Nomme, K. M., Jeffery, E., Pollock, C. A., and Birol, G. (2014). Development of the biological experimental design concept inventory (BEDCI). CBE Life Sci. Educ. 13, 540–551. doi: 10.1187/cbe.13-11-0218

    Deci, E. L., and Ryan, R. M. (2012). Self-determination theory. In P. A. M. LangeVan, A. W. Kruglanski, and E. T. Higgins (Eds.), Handbook of theories of social psychology , 416–436.

    Eberbach, C., and Crowley, K. (2009). From everyday to scientific observation: how children learn to observe the Biologist’s world. Rev. Educ. Res. 79, 39–68. doi: 10.3102/0034654308325899

    Ford, D. (2005). The challenges of observing geologically: third graders’ descriptions of rock and mineral properties. Sci. Educ. 89, 276–295. doi: 10.1002/sce.20049

    Gerjets, P., Scheiter, K., and Catrambone, R. (2004). Designing instructional examples to reduce intrinsic cognitive load: molar versus modular presentation of solution procedures. Instr. Sci. 32, 33–58. doi: 10.1023/B:TRUC.0000021809.10236.71

    Gupta, U. (2019). Interplay of germane load and motivation during math problem solving using worked examples. Educ. Res. Theory Pract. 30, 67–71.

    Hefter, M. H., Berthold, K., Renkl, A., Riess, W., Schmid, S., and Fries, S. (2014). Effects of a training intervention to foster argumentation skills while processing conflicting scientific positions. Instr. Sci. 42, 929–947. doi: 10.1007/s11251-014-9320-y

    Hesser, T. L., and Gregory, J. L. (2015). Exploring the Use of Faded Worked Examples as a Problem Solving Approach for Underprepared Students. High. Educ. Stud. 5, 36–46.

    Jensen, E. (2014). Evaluating children’s conservation biology learning at the zoo. Conserv. Biol. 28, 1004–1011. doi: 10.1111/cobi.12263

    Kalyuga, S., Chandler, P., Tuovinen, J., and Sweller, J. (2001). When problem solving is superior to studying worked examples. J. Educ. Psychol. 93, 579–588. doi: 10.1037/0022-0663.93.3.579

    Kay, R. H., and Edwards, J. (2012). Examining the use of worked example video podcasts in middle school mathematics classrooms: a formative analysis. Can. J. Learn. Technol. 38, 1–20. doi: 10.21432/T2PK5Z

    Klepsch, M., Schmitz, F., and Seufert, T. (2017). Development and validation of two instruments measuring intrinsic, extraneous, and germane cognitive load. Front. Psychol. 8:1997. doi: 10.3389/fpsyg.2017.01997

    Knogler, M., Harackiewicz, J. M., Gegenfurtner, A., and Lewalter, D. (2015). How situational is situational interest? Investigating the longitudinal structure of situational interest. Contemp. Educ. Psychol. 43, 39–50. doi: 10.1016/j.cedpsych.2015.08.004

    Koenen, J. (2014). Entwicklung und Evaluation von experimentunterstützten Lösungsbeispielen zur Förderung naturwissenschaftlich experimenteller Arbeitsweisen . Dissertation.

    Koenen, J., Emden, M., and Sumfleth, E. (2017). Naturwissenschaftlich-experimentelles Arbeiten. Potenziale des Lernens mit Lösungsbeispielen und Experimentierboxen. (scientific-experimental work. Potentials of learning with solution examples and experimentation boxes). Zeitschrift für Didaktik der Naturwissenschaften 23, 81–98. doi: 10.1007/s40573-017-0056-5

    Kohlhauf, L., Rutke, U., and Neuhaus, B. J. (2011). Influence of previous knowledge, language skills and domain-specific interest on observation competency. J. Sci. Educ. Technol. 20, 667–678. doi: 10.1007/s10956-011-9322-3

    Leppink, J., Paas, F., Van der Vleuten, C. P., Van Gog, T., and Van Merriënboer, J. J. (2013). Development of an instrument for measuring different types of cognitive load. Behav. Res. Methods 45, 1058–1072. doi: 10.3758/s13428-013-0334-1

    Lewalter, D. (2020). “Schülerlaborbesuche aus motivationaler Sicht unter besonderer Berücksichtigung des Interesses. (Student laboratory visits from a motivational perspective with special attention to interest)” in Handbuch Forschen im Schülerlabor – theoretische Grundlagen, empirische Forschungsmethoden und aktuelle Anwendungsgebiete . eds. K. Sommer, J. Wirth, and M. Vanderbeke (Münster: Waxmann-Verlag), 62–70.

    Lewalter, D., and Knogler, M. (2014). “A questionnaire to assess situational interest – theoretical considerations and findings” in Poster Presented at the 50th Annual Meeting of the American Educational Research Association (AERA) (Philadelphia, PA)

    Lunetta, V., Hofstein, A., and Clough, M. P. (2007). Learning and teaching in the school science laboratory: an analysis of research, theory, and practice. In N. Lederman and S. Abel (Eds.). Handbook of research on science education , Mahwah, NJ: Lawrence Erlbaum, 393–441.

    Mayer, R. E. (2001). Multimedia learning. Cambridge University Press.

    Paas, F., Renkl, A., and Sweller, J. (2003). Cognitive load theory and instructional design: recent developments. Educ. Psychol. 38, 1–4. doi: 10.1207/S15326985EP3801_1

    Paas, F., Tuovinen, J., van Merriënboer, J. J. G., and Darabi, A. (2005). A motivational perspective on the relation between mental effort and performance: optimizing learner involvement in instruction. Educ. Technol. Res. Dev. 53, 25–34. doi: 10.1007/BF02504795

    Reiss, K., Heinze, A., Renkl, A., and Groß, C. (2008). Reasoning and proof in geometry: effects of a learning environment based on heuristic worked-out examples. ZDM Int. J. Math. Educ. 40, 455–467. doi: 10.1007/s11858-008-0105-0

    Renkl, A. (2001). Explorative Analysen zur effektiven Nutzung von instruktionalen Erklärungen beim Lernen aus Lösungsbeispielen. (Exploratory analyses of the effective use of instructional explanations in learning from worked examples). Unterrichtswissenschaft 29, 41–63. doi: 10.25656/01:7677

    Renkl, A. (2014). “The worked examples principle in multimedia learning” in Cambridge handbook of multimedia learning . ed. R. E. Mayer (Cambridge University Press), 391–412.

    Renkl, A. (2017). Learning from worked-examples in mathematics: students relate procedures to principles. ZDM 49, 571–584. doi: 10.1007/s11858-017-0859-3

    Renkl, A., Atkinson, R. K., and Große, C. S. (2004). How fading worked solution steps works. A cognitive load perspective. Instr. Sci. 32, 59–82. doi: 10.1023/B:TRUC.0000021815.74806.f6

    Renkl, A., Atkinson, R. K., and Maier, U. H. (2000). “From studying examples to solving problems: fading worked-out solution steps helps learning” in Proceeding of the 22nd Annual Conference of the Cognitive Science Society . eds. L. Gleitman and A. K. Joshi (Mahwah, NJ: Erlbaum), 393–398.

    Renkl, A., Atkinson, R. K., Maier, U. H., and Staley, R. (2002). From example study to problem solving: smooth transitions help learning. J. Exp. Educ. 70, 293–315. doi: 10.1080/00220970209599510

    Renkl, A., Hilbert, T., and Schworm, S. (2009). Example-based learning in heuristic domains: a cognitive load theory account. Educ. Psychol. Rev. 21, 67–78. doi: 10.1007/s10648-008-9093-4

    Schworm, S., and Renkl, A. (2007). Learning argumentation skills through the use of prompts for self-explaining examples. J. Educ. Psychol. 99, 285–296. doi: 10.1037/0022-0663.99.2.285

    Sirum, K., and Humburg, J. (2011). The experimental design ability test (EDAT). Bioscene 37, 8–16.

    Staus, N. L., O’Connell, K., and Storksdieck, M. (2021). Addressing the ceiling effect when assessing STEM out-of-school time experiences. Front. Educ. 6:690431. doi: 10.3389/feduc.2021.690431

    Sweller, J. (2006). The worked example effect and human cognition. Learn. Instr. 16, 165–169. doi: 10.1016/j.learninstruc.2006.02.005

    Sweller, J., Van Merriënboer, J. J. G., and Paas, F. (1998). Cognitive architecture and instructional design. Educ. Psychol. Rev. 10, 251–295. doi: 10.1023/A:1022193728205

    Thomas, A. E., and Müller, F. H. (2011). “Skalen zur motivationalen Regulation beim Lernen von Schülerinnen und Schülern. Skalen zur akademischen Selbstregulation von Schüler/innen SRQ-A [G] (überarbeitete Fassung)” in Scales of motivational regulation in student learning. Student academic self-regulation scales SRQ-A [G] (revised version). Wissenschaftliche Beiträge aus dem Institut für Unterrichts- und Schulentwicklung Nr. 5 (Klagenfurt: Alpen-Adria-Universität)

    Um, E., Plass, J. L., Hayward, E. O., and Homer, B. D. (2012). Emotional design in multimedia learning. J. Educ. Psychol. 104, 485–498. doi: 10.1037/a0026609

    Van Gog, T., Kester, L., and Paas, F. (2011). Effects of worked examples, example-problem, and problem- example pairs on novices’ learning. Contemp. Educ. Psychol. 36, 212–218. doi: 10.1016/j.cedpsych.2010.10.004

    Van Gog, T., and Paas, G. W. C. (2006). Optimising worked example instruction: different ways to increase germane cognitive load. Learn. Instr. 16, 87–91. doi: 10.1016/j.learninstruc.2006.02.004

    Van Harsel, M., Hoogerheide, V., Verkoeijen, P., and van Gog, T. (2019). Effects of different sequences of examples and problems on motivation and learning. Contemp. Educ. Psychol. 58, 260–275. doi: 10.1002/acp.3649

    Wachsmuth, C. (2020). Computerbasiertes Lernen mit Aufmerksamkeitsdefizit: Unterstützung des selbstregulierten Lernens durch metakognitive prompts. (Computer-based learning with attention deficit: supporting self-regulated learning through metacognitive prompts) . Chemnitz: Dissertation Technische Universität Chemnitz.

    Wahser, I. (2008). Training von naturwissenschaftlichen Arbeitsweisen zur Unterstützung experimenteller Kleingruppenarbeit im Fach Chemie (Training of scientific working methods to support experimental small group work in chemistry) . Dissertation

    Walker, J., Gibson, J., and Brown, D. (2007). Selecting fluvial geomorphological methods for river management including catchment scale restoration within the environment agency of England and Wales. Int. J. River Basin Manag. 5, 131–141. doi: 10.1080/15715124.2007.9635313

    Wellnitz, N., and Mayer, J. (2013). Erkenntnismethoden in der Biologie – Entwicklung und evaluation eines Kompetenzmodells. (Methods of knowledge in biology - development and evaluation of a competence model). Z. Didaktik Naturwissensch. 19, 315–345.

    Willems, A. S., and Lewalter, D. (2011). “Welche Rolle spielt das motivationsrelevante Erleben von Schülern für ihr situationales Interesse im Mathematikunterricht? (What role does students’ motivational experience play in their situational interest in mathematics classrooms?). Befunde aus der SIGMA-Studie” in Erziehungswissenschaftliche Forschung – nachhaltige Bildung. Beiträge zur 5. DGfE-Sektionstagung “Empirische Bildungsforschung”/AEPF-KBBB im Frühjahr 2009 . eds. B. Schwarz, P. Nenninger, and R. S. Jäger (Landau: Verlag Empirische Pädagogik), 288–294.

    Keywords: digital media, worked examples, scientific observation, motivation, cognitive load

    Citation: Lechner M, Moser S, Pander J, Geist J and Lewalter D (2024) Learning scientific observation with worked examples in a digital learning environment. Front. Educ . 9:1293516. doi: 10.3389/feduc.2024.1293516

    Received: 13 September 2023; Accepted: 29 February 2024; Published: 18 March 2024.



    Medicine LibreTexts

    1.9: Types of Research Studies and How To Interpret Them


    • Alice Callahan, Heather Leonard, & Tamberly Powell
    • Lane Community College via OpenOregon

    The field of nutrition is dynamic, and our understanding and practices are always evolving. Nutrition scientists are continuously conducting new research and publishing their findings in peer-reviewed journals. This adds to scientific knowledge, but it’s also of great interest to the public, so nutrition research often shows up in the news and other media sources. You might be interested in nutrition research to inform your own eating habits, or if you work in a health profession, so that you can give evidence-based advice to others. Making sense of science requires that you understand the types of research studies used and their limitations.

    The Hierarchy of Nutrition Evidence

Researchers use many different types of study designs depending on the question they are trying to answer, as well as factors such as time, funding, and ethical considerations. The study design affects how we interpret the results and the strength of the evidence as it relates to real-life nutrition decisions. It can be helpful to think about the types of studies within a pyramid representing a hierarchy of evidence, where the studies at the bottom of the pyramid usually give us the weakest evidence with the least relevance to real-life nutrition decisions, and the studies at the top offer the strongest evidence, with the most relevance to real-life nutrition decisions.

Figure 2.3. The hierarchy of evidence pyramid.

The pyramid also represents a few other general ideas. There tend to be more studies published using the methods at the bottom of the pyramid, because they require less time, money, and other resources. When researchers want to test a new hypothesis, they often start with the study designs at the bottom of the pyramid, such as in vitro, animal, or observational studies. Intervention studies are more expensive and resource-intensive, so there are fewer of these types of studies conducted. But they also give us higher quality evidence, so they’re an important next step if observational and non-human studies have shown promising results. Meta-analyses and systematic reviews combine the results of many studies already conducted, so they help researchers summarize scientific knowledge on a topic.

    Non-Human Studies: In Vitro & Animal Studies

The simplest form of nutrition research is an in vitro study. In vitro means “within glass” (although plastic is more commonly used today), and these experiments are conducted within flasks, dishes, plates, and test tubes. These studies are performed on isolated cells or tissue samples, so they’re less expensive and time-intensive than animal or human studies. In vitro studies are vital for zooming in on biological mechanisms, to see how things work at the cellular or molecular level. However, these studies shouldn’t be used to draw conclusions about how things work in humans (or even animals), because we can’t assume that the results will apply to a whole, living organism.

Photos: a researcher handling small sample tubes in a bucket of ice (left); a white lab mouse peering out of a cage (right).

Animal studies are one form of in vivo research, which translates to “within the living.” Rats and mice are the most common animals used in nutrition research. Animals are often used in research that would be unethical to conduct in humans. Another advantage of animal dietary studies is that researchers can control exactly what the animals eat. In human studies, researchers can tell subjects what to eat and even provide them with the food, but subjects may not stick to the planned diet. People are also not very good at estimating, recording, or reporting what they eat and in what quantities. In addition, animal studies typically do not cost as much as human studies.

There are some important limitations of animal research. First, an animal’s metabolism and physiology differ from a human’s. In addition, animal models of disease (cancer, cardiovascular disease, etc.), although similar to the human diseases they mimic, are not identical to them. Animal research is considered preliminary; while it can be very important for building scientific understanding and informing the types of studies that should be conducted in humans, animal studies shouldn’t be considered relevant to real-life decisions about how people eat.

    Observational Studies

Observational studies in human nutrition collect information on people’s dietary patterns or nutrient intake and look for associations with health outcomes. Observational studies do not give participants a treatment or intervention; instead, researchers look at what participants are already doing and see how it relates to their health. These types of study designs can only identify correlations (relationships) between nutrition and health; they can’t show that one factor causes another. (For that, we need intervention studies, which we’ll discuss in a moment.) Observational studies that describe factors correlated with human health are also called epidemiological studies. 1

    One example of a nutrition hypothesis that has been investigated using observational studies is that eating a Mediterranean diet reduces the risk of developing cardiovascular disease. (A Mediterranean diet focuses on whole grains, fruits and vegetables, beans and other legumes, nuts, olive oil, herbs, and spices. It includes small amounts of animal protein (mostly fish), dairy, and red wine. 2 ) There are three main types of observational studies, all of which could be used to test hypotheses about the Mediterranean diet:

    • Cohort studies follow a group of people (a cohort) over time, measuring factors such as diet and health outcomes. A cohort study of the Mediterranean diet would ask a group of people to describe their diet, and then researchers would track them over time to see if those eating a Mediterranean diet had a lower incidence of cardiovascular disease.
• Case-control studies compare a group of cases and controls, looking for differences between the two groups that might explain their different health outcomes. For example, researchers might compare a group of people with cardiovascular disease (cases) with a group of healthy controls to see whether cases or controls were more likely to have followed a Mediterranean diet.
    • Cross-sectional studies collect information about a population of people at one point in time. For example, a cross-sectional study might compare the dietary patterns of people from different countries to see if diet correlates with the prevalence of cardiovascular disease in the different countries.
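These designs report different measures of association: a cohort study follows people forward, so it can compute disease risks directly and compare them as a relative risk, while a case-control study starts from outcomes and can only compare the odds of exposure, giving an odds ratio. A minimal sketch in Python, using invented counts that are not from any real study:

```python
# Hypothetical counts for illustration only -- not data from any real study.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Cohort-study measure: risk in the exposed group divided by
    risk in the unexposed group."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    return risk_exposed / risk_unexposed

def odds_ratio(cases_exposed, cases_unexposed, controls_exposed, controls_unexposed):
    """Case-control measure: odds of exposure among cases divided by
    odds of exposure among controls."""
    return (cases_exposed / cases_unexposed) / (controls_exposed / controls_unexposed)

# Cohort example: 40 of 1,000 diet followers vs. 60 of 1,000 others develop disease.
rr = relative_risk(40, 1000, 60, 1000)
print(f"relative risk = {rr:.2f}")   # prints 0.67

# Case-control example: exposure counted separately in cases and controls.
print(f"odds ratio = {odds_ratio(30, 70, 50, 50):.2f}")   # prints 0.43
```

With these made-up numbers, a relative risk of 0.67 would mean the exposed group developed disease at about two-thirds the rate of the comparison group; as the text notes, an association like this still says nothing about causation.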

    Prospective cohort studies, which enroll a cohort and follow them into the future, are usually considered the strongest type of observational study design. Retrospective studies look at what happened in the past, and they’re considered weaker because they rely on people’s memory of what they ate or how they felt in the past. There are several well-known examples of prospective cohort studies that have described important correlations between diet and disease:

    • Framingham Heart Study : Beginning in 1948, this study has followed the residents of Framingham, Massachusetts to identify risk factors for heart disease.
    • Health Professionals Follow-Up Study : This study started in 1986 and enrolled 51,529 male health professionals (dentists, pharmacists, optometrists, osteopathic physicians, podiatrists, and veterinarians), who complete diet questionnaires every 2 years.
• Nurses’ Health Studies : Beginning in 1976, these studies have enrolled three large cohorts of nurses with a total of 280,000 participants. Participants have completed detailed questionnaires about diet, other lifestyle factors (smoking and exercise, for example), and health outcomes.

Observational studies have the advantage of allowing researchers to study large groups of people in the real world, looking at the frequency and pattern of health outcomes and identifying factors that correlate with them. But even very large observational studies may not apply to the population as a whole. For example, the Health Professionals Follow-Up Study and the Nurses’ Health Studies include people with above-average knowledge of health. In many ways, this makes them ideal study subjects, because they may be more motivated to be part of the study and to fill out detailed questionnaires for years. However, the findings of these studies may not apply to people with less baseline knowledge of health.

    We’ve already mentioned another important limitation of observational studies—that they can only determine correlation, not causation. A prospective cohort study that finds that people eating a Mediterranean diet have a lower incidence of heart disease can only show that the Mediterranean diet is correlated with lowered risk of heart disease. It can’t show that the Mediterranean diet directly prevents heart disease. Why? There are a huge number of factors that determine health outcomes such as heart disease, and other factors might explain a correlation found in an observational study. For example, people who eat a Mediterranean diet might also be the same kind of people who exercise more, sleep more, have higher income (fish and nuts can be expensive!), or be less stressed. These are called confounding factors ; they’re factors that can affect the outcome in question (i.e., heart disease) and also vary with the factor being studied (i.e., Mediterranean diet).
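The confounding problem described above can be made concrete with a toy simulation. In this invented model, exercise is the confounder: it makes a Mediterranean diet more likely and it lowers disease risk, while the diet itself has no effect at all. An observational comparison still shows the diet group looking healthier:

```python
import random

random.seed(0)

# Toy model with invented probabilities. The diet has NO effect on disease here;
# only exercise does. Yet diet and disease still end up correlated.

def simulate_person():
    exercises = random.random() < 0.5
    # Exercisers are more likely to eat a Mediterranean diet...
    med_diet = random.random() < (0.7 if exercises else 0.3)
    # ...and exercise (not diet) lowers disease risk.
    disease = random.random() < (0.05 if exercises else 0.15)
    return med_diet, disease

people = [simulate_person() for _ in range(100_000)]

def disease_rate(on_diet):
    group = [d for m, d in people if m == on_diet]
    return sum(group) / len(group)

# The diet group shows less disease even though diet does nothing in this model.
print(f"disease rate, Mediterranean diet: {disease_rate(True):.3f}")
print(f"disease rate, other diets:        {disease_rate(False):.3f}")
```

An observational study of this simulated population would find a diet–disease correlation (roughly 8% vs. 12% here) that is driven entirely by the confounder.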

    Intervention Studies

Intervention studies, also sometimes called experimental studies or clinical trials, include some type of treatment or change imposed by the researcher. Examples of interventions in nutrition research include asking participants to change their diet, take a supplement, or change the time of day that they eat. Unlike observational studies, intervention studies can provide evidence of cause and effect, so they are higher in the hierarchy of evidence pyramid.

The gold standard for intervention studies is the randomized controlled trial (RCT). In an RCT, study subjects are recruited to participate in the study. They are then randomly assigned into one of at least two groups, one of which is a control group (this is what makes the study controlled). In an RCT to study the effects of the Mediterranean diet on cardiovascular disease development, researchers might ask the control group to follow a low-fat diet (typically recommended for heart disease prevention) and the intervention group to eat a Mediterranean diet. The study would continue for a defined period of time (usually years to study an outcome like heart disease), at which point the researchers would analyze their data to see whether more people in the control group or the Mediterranean diet group had heart attacks or strokes. Because the treatment and control groups were randomly assigned, they should be alike in every other way except for diet, so differences in heart disease could be attributed to the diet. This eliminates the problem of confounding factors found in observational research, and it’s why RCTs can provide evidence of causation, not just correlation.

    Imagine for a moment what would happen if the two groups weren’t randomly assigned. What if the researchers let study participants choose which diet they’d like to adopt for the study? They might, for whatever reason, end up with more overweight people who smoke and have high blood pressure in the low-fat diet group, and more people who exercised regularly and had already been eating lots of olive oil and nuts for years in the Mediterranean diet group. If they found that the Mediterranean diet group had fewer heart attacks by the end of the study, they would have no way of knowing if this was because of the diet or because of the underlying differences in the groups. In other words, without randomization, their results would be compromised by confounding factors, with many of the same limitations as observational studies.
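A short sketch of why randomization avoids this problem: shuffling 1,000 invented participants into two arms splits any pre-existing trait (here, smoking, as a stand-in confounder) roughly evenly between the groups, which self-selection would not guarantee:

```python
import random

random.seed(42)

# Invented cohort: 1,000 participants, about a quarter of whom smoke.
participants = [{"id": i, "smoker": random.random() < 0.25} for i in range(1000)]

# Random assignment: shuffle, then split into two equal arms.
random.shuffle(participants)
control, intervention = participants[:500], participants[500:]

def smoker_share(group):
    return sum(p["smoker"] for p in group) / len(group)

# Both arms end up with close to 25% smokers, so the confounder is balanced
# (on average) and any outcome difference can be attributed to the treatment.
print(f"smokers in control arm:      {smoker_share(control):.1%}")
print(f"smokers in intervention arm: {smoker_share(intervention):.1%}")
```

The same balancing happens for every trait at once, measured or not, which is what lets an RCT attribute outcome differences to the intervention.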

In an RCT of a supplement, the control group would receive a placebo—a “fake” treatment that contains no active ingredients, such as a sugar pill. The use of a placebo is necessary in medical research because of a phenomenon known as the placebo effect: a beneficial effect that arises from a subject’s belief in the treatment, even though no active treatment is actually being administered.

Figure 2.4. Placebo effect example: a cartoon sprinter runs 10.50 seconds with a “super duper sports drink” (a regular sports drink plus food coloring) and 11.00 seconds with the regular sports drink; the improvement is the placebo effect.

    Blinding is a technique to prevent bias in intervention studies. In a study without blinding, the subject and the researchers both know what treatment the subject is receiving. This can lead to bias if the subject or researcher have expectations about the treatment working, so these types of trials are used less frequently. It’s best if a study is double-blind , meaning that neither the researcher nor the subject know what treatment the subject is receiving. It’s relatively simple to double-blind a study where subjects are receiving a placebo or treatment pill, because they could be formulated to look and taste the same. In a single-blind study , either the researcher or the subject knows what treatment they’re receiving, but not both. Studies of diets—such as the Mediterranean diet example—often can’t be double-blinded because the study subjects know whether or not they’re eating a lot of olive oil and nuts. However, the researchers who are checking participants’ blood pressure or evaluating their medical records could be blinded to their treatment group, reducing the chance of bias.
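One way to picture double-blinding in a supplement trial is as an allocation table held by a third party: researchers and subjects see only participant codes, and the code-to-arm mapping stays sealed until the trial ends. This is an illustrative sketch only, not a real trial protocol; the labels and helper function are hypothetical:

```python
import random

random.seed(7)

def make_allocation(n_participants):
    """Build a sealed code -> arm table with equal-sized arms.
    Only the (hypothetical) trial coordinator ever sees this mapping."""
    arms = ["placebo", "treatment"] * (n_participants // 2)
    random.shuffle(arms)
    return {f"P{i:03d}": arm for i, arm in enumerate(arms)}

allocation = make_allocation(10)

# Researchers and subjects work only with the codes, so neither knows
# who is taking the placebo until the table is unsealed at the end.
blinded_view = sorted(allocation.keys())
print(blinded_view)
```

Because the placebo and treatment pills can be made to look identical, distributing them by code keeps both sides blind while still guaranteeing balanced arms.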

Like all studies, RCTs and other intervention studies have some limitations. They can be difficult to carry out over long periods of time and require that participants remain compliant with the intervention. They’re also costly and often have smaller sample sizes. Furthermore, it is unethical to study certain interventions. (An example of an unethical intervention would be advising one group of pregnant mothers to drink alcohol to determine its effects on pregnancy outcomes, because we know that alcohol consumption during pregnancy damages the developing fetus.)

    VIDEO: “ Not all scientific studies are created equal ” by David H. Schwartz, YouTube (April 28, 2014), 4:26.

    Meta-Analyses and Systematic Reviews

At the top of the hierarchy of evidence pyramid are systematic reviews and meta-analyses. You can think of these as “studies of studies.” They attempt to combine all of the relevant studies that have been conducted on a research question and summarize their overall conclusions. Researchers conducting a systematic review formulate a research question and then systematically and independently identify, select, evaluate, and synthesize all high-quality evidence that relates to the research question. Since systematic reviews combine the results of many studies, they help researchers produce more reliable findings. A meta-analysis is a type of systematic review that goes one step further, combining the data from multiple studies and using statistics to summarize it, as if creating a mega-study from many smaller studies. 4
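The “mega-study” idea can be illustrated with the simplest pooling method, a fixed-effect inverse-variance meta-analysis: each study’s estimate is weighted by the inverse of its squared standard error, so more precise studies count for more. The numbers below are invented for illustration:

```python
import math

# Invented study results: (effect estimate, standard error),
# e.g. log relative risks from three small trials.
studies = [
    (-0.30, 0.15),
    (-0.10, 0.10),
    (-0.25, 0.20),
]

# Inverse-variance weights: precise studies (small SE) get large weights.
weights = [1 / se**2 for _, se in studies]

# Pooled estimate is the weighted average of the study estimates.
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)

# Pooled standard error shrinks as studies are combined.
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```

The pooled standard error comes out smaller than any single study’s, which is the statistical sense in which combining studies yields more reliable findings; real meta-analyses also check how much the studies disagree with each other before trusting a single pooled number.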

    However, even systematic reviews and meta-analyses aren’t the final word on scientific questions. For one thing, they’re only as good as the studies that they include. The  Cochrane Collaboration  is an international consortium of researchers who conduct systematic reviews in order to inform evidence-based healthcare, including nutrition, and their reviews are among the most well-regarded and rigorous in science. For the most recent Cochrane review of the Mediterranean diet and cardiovascular disease, two authors independently reviewed studies published on this question. Based on their inclusion criteria, 30 RCTs with a total of 12,461 participants were included in the final analysis. However, after evaluating and combining the data, the authors concluded that “despite the large number of included trials, there is still uncertainty regarding the effects of a Mediterranean‐style diet on cardiovascular disease occurrence and risk factors in people both with and without cardiovascular disease already.” Part of the reason for this uncertainty is that different trials found different results, and the quality of the studies was low to moderate. Some had problems with their randomization procedures, for example, and others were judged to have unreliable data. That doesn’t make them useless, but it adds to the uncertainty about this question, and uncertainty pushes the field forward towards more and better studies. The Cochrane review authors noted that they found seven ongoing trials of the Mediterranean diet, so we can hope that they’ll add more clarity to this question in the future. 5

    Science is an ongoing process. It’s often a slow process, and it contains a lot of uncertainty, but it’s our best method of building knowledge of how the world and human life works. Many different types of studies can contribute to scientific knowledge. None are perfect—all have limitations—and a single study is never the final word on a scientific question. Part of what advances science is that researchers are constantly checking each other’s work, asking how it can be improved and what new questions it raises.

    Attributions:

    • “Chapter 1: The Basics” from Lindshield, B. L. Kansas State University Human Nutrition (FNDH 400) Flexbook. goo.gl/vOAnR , CC BY-NC-SA 4.0
    • “ The Broad Role of Nutritional Science ,” section 1.3 from the book An Introduction to Nutrition (v. 1.0), CC BY-NC-SA 3.0

    References:

    • 1 Thiese, M. S. (2014). Observational and interventional study design types; an overview. Biochemia Medica , 24 (2), 199–210. https://doi.org/10.11613/BM.2014.022
    • 2 Harvard T.H. Chan School of Public Health. (2018, January 16). Diet Review: Mediterranean Diet . The Nutrition Source. https://www.hsph.harvard.edu/nutritionsource/healthy-weight/diet-reviews/mediterranean-diet/
    • 3 Ross, R., Gray, C. M., & Gill, J. M. R. (2015). Effects of an Injected Placebo on Endurance Running Performance. Medicine and Science in Sports and Exercise , 47 (8), 1672–1681. https://doi.org/10.1249/MSS.0000000000000584
    • 4 Hooper, A. (n.d.). LibGuides: Systematic Review Resources: Systematic Reviews vs Other Types of Reviews . Retrieved February 7, 2020, from //libguides.sph.uth.tmc.edu/c.php?g=543382&p=5370369
    • 5 Rees, K., Takeda, A., Martin, N., Ellis, L., Wijesekara, D., Vepa, A., Das, A., Hartley, L., & Stranges, S. (2019). Mediterranean‐style diet for the primary and secondary prevention of cardiovascular disease. Cochrane Database of Systematic Reviews , 3 . doi.org/10.1002/14651858.CD009825.pub3
    • Figure 2.3. The hierarchy of evidence by Alice Callahan, is licensed under CC BY 4.0
• Research lab photo by National Cancer Institute on Unsplash; mouse photo by vaun0815 on Unsplash
    • Figure 2.4. “Placebo effect example” by Lindshield, B. L. Kansas State University Human Nutrition (FNDH 400) Flexbook. goo.gl/vOAnR


    5 Popular Education Beliefs That Aren’t Backed by Research

    Making adjustments to these common misconceptions can turn dubious strategies into productive lessons, the research suggests.

Not every learning myth requires teachers to pull up stakes and start all over again—at least not entirely. There are some commonly held misconceptions that contain a nugget of wisdom but need to be tweaked in order to align with the science of learning.

    Sometimes, in other words, you’re already halfway there. Here are five mostly-myths, from the power of doodling to the motivating role of grades, that educators can quickly adjust and turn to their advantage.

    1. Doodling Improves Focus and Learning

    When we write about the power of drawing to learn, we often hear from readers who feel compelled to defend an old habit: “See, I told you that when I was doodling I was still paying attention!” But doodling—which is commonly defined as “an aimless or casual scribble or sketch” and often consists of marginalia like cartoon characters, geometric patterns, or pastoral scenes—is distinct from what researchers call “task-related drawing.” And doodling, in this sense, is not associated with improvements in focus or academic outcomes.

    In fact, both cognitive load theory and experimental studies are generally downbeat on doodling. According to the theory, students who sketch complicated scenes or designs as they try to process a lesson on plate tectonics are juggling competing cognitive tasks and will generally underperform on both. Doodling, like all drawing, is cognitively intensive, involving complex feedback loops between visual, sensorimotor, attentional, and planning regions of the brain and body. Because our ability to process information is finite, drawing and learning about different things at the same time simply asks too much.

    Research confirms the theory. A 2019 study pitted off-task doodling against typical learning activities like “task-related drawing” and writing. In three separate but related experiments, task-related drawing and writing beat out doodling in terms of recall—by margins as large as 300%.

    How to fix it: Sketching what you are actually learning—from representational drawings of cells or tectonic boundaries to the creation of concept maps and organizational drawings—is, in fact, a powerful learning strategy (see research here, here, and here), and that applies “regardless of one’s artistic talent,” a 2018 study confirms. Try to harness a student’s passion for doodling by allowing them to submit academic sketches as work products. To get even more bang for your buck, ask them to annotate their drawings, or talk you through them—which will encode learning even more deeply, according to research.

    2. Reading Aloud In Turn Improves Fluency

    Often called “round robin reading” (RRR), the practice is one I resorted to when I taught years ago—and it appears that it’s still frequently used, judging from a 2019 blog by literacy expert Timothy Shanahan and comments on a 2022 Cult of Pedagogy post on the topic. Teachers deploy it for good reasons: Arguably, the practice encourages student engagement; gives teachers the opportunity to gauge oral reading fluency; and has a built-in classroom management benefit as well. Students are generally silent and (superficially) attentive when a peer is reading.

    But according to Shanahan and the literacy professors Katherine Hilden and Jennifer Jones, the practice has long been frowned upon. In an influential 2012 review of the relevant literature, Hilden and Jones cut straight to the point: “We know of no research evidence that supports the claim that RRR actually contributes to students becoming better readers, either in terms of their fluency or comprehension.”

    In fact, RRR has plenty of problems. Individual students using RRR may accumulate less than three hours of oral reading time over the course of a year, according to Shanahan—and Hilden and Jones say that students who are following along during the activity tend to “subvocalize” as they track the reader, reducing their own internal reading speed unnecessarily. RRR also has the unfortunate effects of stigmatizing struggling readers; exposing new readers to dysfluent modeling; and failing to incorporate meaningful comprehension strategies. 

    How to fix it: Yes, reading out loud is necessary to teach fluency, according to Shanahan, but there are better methods. Pairing kids together to “read sections of the text aloud to each other (partner reading) and then discuss and answer your questions,” is a good approach, he says, especially if teachers circulate to listen for problems. 

    More generally, reading strategies that model proper reading speed, pronunciation, and affect—while providing time for vocabulary review, repeated exposure to the text, and opportunities to summarize and discuss—can improve both fluency and comprehension.

    A 2011 study , for example, demonstrated that combining choral reading—teachers and students read a text in unison (similar to echo reading )—with other activities like vocabulary review, teacher modeling, and follow-up discussion improved students’ decoding and fluency.

    3. Talent Beats Persistence

    It’s a common trap: Observers tend to rate people who appear to be naturally gifted at something more highly than those who admit they’ve worked hard to achieve success. Researchers call this the “ naturalness bias ,” and it shows up everywhere, from teachers evaluating students to bosses evaluating employees.

    In reality, the opposite is more often true. “Popular lore tells us that genius is born, not made,” writes psychologist and widely cited researcher of human potential K. Anders Ericsson for the Harvard Business Review. “Scientific research, on the other hand, reveals that true expertise is mainly the product of years of intense practice and dedicated coaching.”

    Experimental studies extend the point to academics: An influential 2019 study led by psychologists Brian Galla and Angela Duckworth, for example, found that high school GPA is a better predictor than the SAT of how likely students are to complete college on time. That’s because “grades are a very good index of your self-regulation—your ability to stick with things, your ability to regulate your impulses, your ability to delay gratification and work hard instead of goofing off,” said Duckworth in a 2020 interview with Edutopia.

    How to fix it: All kids—even the ones who already excel in a discipline—benefit when teachers emphasize the importance of effort, perseverance, and growth. Consider praising students for their improvement instead of their raw scores; have students read about and then discuss the idea of neural plasticity; and consider assigning reports on the mistakes and growing pains of accomplished writers, scientists, and artists.

    Try to incorporate rough-draft thinking in class and think about taking risks yourself: The renowned writing teacher Kelly Gallagher, author of Readicide , regularly composes in front of his class to model his own tolerance for errors and redrafting.

    4. Background Music (Always) Undermines Learning

    It’s a fascinating and complex question: Can students successfully learn while background music is playing?

    In some cases, it appears, background music can be a neutral to positive influence; in other scenarios, it’s clearly distracting. There are several factors at play in determining the outcomes.

    A 2021 study clarifies that because music and language use some of the same neural circuitry—a finding that appears as early as infancy—“listening to lyrics of a familiar language may rely on the same cognitive resources as vocabulary learning,” and that can “lead to an overload of processing capacity and thus to an interference effect.” Other features of the music probably matter, too: Dramatic changes in a song’s rhythm, for example, or transitions from one song to the next, often force the learning brain to reckon with irrelevant information. A 2018 research review confirms the general finding: Across 65 studies, background music consistently had a “small but reliably detrimental effect” on reading comprehension.

    In some cases, however, music may aid learning. Neuroscience suggests that catchy melodies, for example, can boost a student’s mood—which might lead to significant positive effects on learning when motivation and concentration are paramount, a 2023 study found.

    How to fix it: Basically, “music has two effects simultaneously that conflict with one another,” cognitive psychologist Daniel Willingham told Edutopia in 2023—one distracting, and the other arousing.

    “If you’re doing work that’s not very demanding, having music on is probably fine”—and likely to motivate students to keep going, Willingham says. In those cases, try to stick to music that’s instrumental or familiar, in order to decrease the cognitive resources needed to process it. “But if you’re doing work that’s just somewhat difficult, the distraction is probably going to make music a negative overall,” Willingham adds.

    5. Grades Are Motivating

    Teachers are well aware that grading, as a system, has many flaws—but at least grades motivate students to try their hardest, right? Unfortunately, the research suggests that that’s largely not the case.

    “Despite the conventional wisdom in education, grades don’t motivate students to do their best work, nor do they lead to better learning or performance,” write motivation researcher Chris Hulleman and science teacher Ian Kelleher in an article for Edutopia. A 2019 research review, meanwhile, revealed that when students were given grades, written feedback, or nothing at all, they preferred the latter two to grades—suggesting that A–F rankings might actually have a net negative impact on motivation.

    In another blow to grades, a 2018 analysis of university policies like pass/fail grading or narrative evaluations concluded that “grades enhanced anxiety and avoidance of challenging courses” but didn’t improve student motivation. Providing students with specific, actionable feedback, on the other hand, “promot[ed] trust between instructors and students,” leading to greater academic ambition.

    How to fix it: While grades are still mandatory in most schools—and some form of rigorous assessment remains an imperative—educators might consider ways to de-emphasize them.

    Some teachers choose to drop every student’s lowest grade, for example, allow students to retake a limited number of assessments each unit, or periodically give kids the discretion to turn in “their best work” from a series of related assignments.

    At King Middle School in Portland, Maine, educators delay their release of grades until the end of the unit, an approach backed by a 2021 study which found that delayed grading—handing back personalized feedback days before releasing number or letter grades—can boost student performance on future assignments by two-thirds of a letter grade.


    8 Time Management Tips for Students

    Don't let a hectic schedule get the better of you with these time management tips.

    Lian Parsons

    College can be a stressful time for many students and time management can be one of the most crucial — but tricky — skills to master.

    Attending classes, studying for exams, making friends, and taking time to relax and decompress can quickly fill up your schedule. If you often find yourself wishing there were more hours in the day, this guide will offer time management tips for students so you can accomplish what you need to get done, have fun with your friends, and gain back some valuable time for yourself. 

    1. Create a Calendar

    Don’t be caught by surprise by an important paper due two days from now or a dinner with your family the same night you planned for a group study session. Create a calendar for yourself with all your upcoming deadlines, exams, social events, and other time commitments well in advance so you can see what’s coming up. 

    Keep your calendar in a place where you can see it every day, such as in your planner or on your wall above your desk. If you prefer a digital calendar, check it first thing every day to keep those important events fresh and top-of-mind. For greater efficiency, make sure you can integrate it with your other tools, such as your email.

    Digital calendar options include: 

    • Google Calendar 
    • Outlook Calendar
    • Fantastical

    2. Set Reminders

    After you’ve created your calendar, give yourself periodic reminders to stay on track such as to complete a study guide in advance or schedule a meeting for a group project. Knowing deadlines is important; however, staying on top of the micro tasks involved in meeting those deadlines is just as important. You can set an alarm on your phone, write it down in a physical planner, or add an alert to your digital calendar. The reminders will help to prevent things from slipping through the cracks during particularly hectic days.

    Make sure you’ve allotted enough time to study for that big test or write that final paper. Time management is all about setting yourself up for success in advance and giving yourself the tools to accomplish tasks with confidence. 

    Read our blogs, Your Guide to Conquering College Coursework and Top 10 Study Tips to Study Like a Harvard Student , for more suggestions.

    3. Build a Personalized Schedule

    Each person’s day-to-day is different and unique to them, so make sure your schedule works for you. Once you’ve accounted for consistent commitments such as classes or your shifts at work, add in study sessions, extracurriculars, chores and errands, and social engagements.

    Consider your personal rhythm. If you typically start your day energized, plan to study or accomplish chores then. If you fall into an afternoon slump, give yourself that time to take a guilt-free TV break or see friends.

    Having a schedule that works for you will help maximize your time. Plus, knowing exactly when your laundry day is or when your intramural volleyball practice is every week will help you avoid trying to cram everything into one day (or running out of clean socks!).


    4. Use Tools That Work For You

    Just like your calendar and schedule, the tools you use to keep you organized should be the right fit for you. Some students prefer physical planners and paper, while some prefer going totally digital. Your calendar can help you with long-term planning, but most of these tools are best for prioritizing from day to day.

    Explore what best suits your needs with some of the following suggestions:

    Planners can help you keep track of long-term commitments, such as important essay deadlines, upcoming exams, and appointments and meetings. They often provide a monthly overview as well as day-to-day planning sections, so you can stay ahead.

    • Papier – Offers a 20% student discount 

    Daily Scheduling

    If your schedule is jam-packed and you have trouble figuring out what to do and when, scheduling day by day—and sometimes even hour by hour—can help you slot in everything you need to do with less stress.

    • Structured app

    Note Taking

    From class to study sessions to errands, keeping track of everything can feel overwhelming. Keeping everything in one place, whether on the go or at your desk, can help keep you organized.

    • Bullet journals

    5. Prioritize

    Sometimes there really is too much to do with too little time. In these instances, take just a few minutes to evaluate your priorities. Consider which deadlines are most urgent, as well as how much energy you have. 

    If you are able to complete simple tasks first, try getting them out of the way before moving on to tasks that require a lot of focus. This can help to alleviate some of the pressure by checking a couple things off your to-do list without getting bogged down too early.

    If you are struggling to fit everything in your schedule, consider what you can postpone or what you can simply say no to. Your friends will likely understand if you have to meet them for coffee another time in order to get in a final library session before a challenging exam. 

    6. Make Time to Have Fun — And For Yourself

    Time management isn’t just about getting work done. It’s also about ensuring that you can put yourself and your mental wellbeing first. Consistently including time for yourself in your schedule helps to keep your mental health and your life in balance. It can also be helpful to have things to look forward to when going through stressful periods.  

    Whether it’s going for a bike ride along the river, spending time with your friends and family, or simply sleeping in on a Sunday, knowing you have space to relax and do things you enjoy can provide better peace of mind. 

    7. Find Support 

    Preparation and organization can sometimes only get you so far. Luckily, you have plenty of people rooting for your success. Keep yourself and your classmates on task by finding an accountability partner or study buddies. Remind your roommates when you need extra space to work on a paper. 

    Your school’s academic resource center is also there to support you and point you in the right direction if you need additional help. Getting—and staying—organized is a collaborative effort and no one can do it on their own. 

    8. Be Realistic and Flexible 

    Sometimes unforeseen circumstances will come up or you simply may not be able to get to everything you set out to do in a given day. Be patient with yourself when things don’t go exactly to plan. When building your calendar, schedule, and priorities list, be realistic about what you can accomplish and include buffer time if you’re unsure. This can help to reduce obstacles and potential friction.

    Time management isn’t just about sticking to a rigid schedule—it’s also about giving yourself space for change.


    About the Author

    Lian Parsons is a Boston-based writer and journalist. She is currently a digital content producer at Harvard’s Division of Continuing Education. Her bylines can be found at the Harvard Gazette, Boston Art Review, Radcliffe Magazine, Experience Magazine, and iPondr.


    • Open access
    • Published: 29 March 2024

    Access to continuous professional development for capacity building among nurses and midwives providing emergency obstetric and neonatal care in Rwanda

    • Mathias Gakwerere 1 ,
    • Jean Pierre Ndayisenga 2 , 3 ,
    • Anaclet Ngabonzima 4 ,
    • Thiery Claudien Uhawenimana 2 ,
    • Assumpta Yamuragiye 5 ,
    • Florien Harindimana 6 &
    • Bernard Ngabo Rwabufigiri 7  

    BMC Health Services Research volume  24 , Article number:  394 ( 2024 ) Cite this article


    Nurses and midwives are at the forefront of the provision of Emergency Obstetric and Neonatal Care (EmONC), and Continuous Professional Development (CPD) is crucial to equip them with the competencies they need to provide quality services. This research aimed to assess midwives’ and nurses’ uptake of and access to CPD and to determine their knowledge and skills gaps in key EmONC competencies to inform CPD programming.

    The study applied a quantitative, cross-sectional, descriptive research methodology. Using random selection, forty (40) health facilities (HFs) were selected out of 445 HFs that performed at least 20 deliveries per month from July 1st, 2020 to June 30th, 2021 in Rwanda. Questionnaires were used to collect data on uptake of CPD, knowledge of EmONC, and delivery methods used to access CPD. Data were analyzed using IBM SPSS Statistics 27 software.

    Nurses and midwives are required by the Rwandan midwifery regulatory body to complete at least 60 CPD credits before license renewal. However, the study findings revealed that most health care providers (HCPs) had not been trained on EmONC after graduation from their formal education. Results indicated that 79.9% of HCPs overall had acquired fewer than 60 CPD credits related to EmONC training: 56.3% in hospitals, 82.2% at health centres, and 100% at the health post level. This resulted in skills and knowledge gaps in the management of pre-eclampsia/eclampsia, postpartum hemorrhage, and essential newborn care. The most common methods of accessing CPD credits were workshops (43.6%) and online training (34.5%). The majority of HCPs (57.0%) noted that it was difficult to achieve the required CPD credits.

    The findings from this study revealed a low uptake of critical EmONC training by nurses and midwives in the form of CPD. The study suggests a need to integrate EmONC into the health workforce capacity building plan at all levels and to make such training systematic and available in multiple and easily accessible formats.

    Implications for nursing and midwifery policy

    Findings will inform the revision of policies and strategies to improve CPD towards accelerating capacity for the reduction of preventable maternal and perinatal deaths as well as reducing maternal disabilities in Rwanda.

    Peer Review reports

    Continuous professional development (CPD) for capacity building is critical for health workers because it enables them to continually update their competencies, ensuring they can meet client needs and adapt to constant changes in the practice environment [ 1 ]. CPD has been defined as any kind of education that professionals receive after completing their basic education of professional entry [ 2 ]. It is highly recommended that professionals in practice keep regularly updated about new protocols and refresh their knowledge for better continuity of care [ 3 ]. In emergency obstetric and neonatal care (EmONC), CPD is necessary to allow health care providers (HCPs) to feel confident and ready to deliver quality maternal and newborn health services [ 4 ]. Nurses and midwives, as essential providers of EmONC services, need to update their knowledge to enhance the quality of EmONC service delivery. EmONC encompasses all care provided to address emergent complications that occur during pregnancy, labor, and childbirth [ 4 ].

    Globally, the maternal mortality ratio (MMR) is still unacceptably high; on average every day approximately 810 women lose their lives while giving birth [ 5 ]. Most of these deaths could be prevented if quality EmONC is provided by skilled health care professionals [ 5 , 6 ]. The leading causes of maternal deaths include hemorrhage, hypertensive disorders, especially pre-eclampsia/eclampsia, sepsis, embolism and complications of unsafe abortions. Sub-Saharan Africa remains the region with the highest burden of maternal mortality and morbidities with an MMR of 542 per 100,000 live births, which is higher than the ratio of 216 per 100,000 live births globally [ 5 ]. According to the Sustainable Development Goal (SDG) 3.1, every country is expected to reduce the MMR to less than 70 per 100,000 live births and no country should have more than 140 maternal deaths per 100,000 live births by 2030 [ 7 , 8 ]. To achieve this target, evidence-based high impact interventions should be implemented at scale and with the highest quality. The WHO has developed policies and guidelines for antenatal, intrapartum, and postpartum care, but the implementation of these guidelines and policies could be difficult if HCPs are not updated through CPDs [ 9 ].

    In Rwanda, several health system-wide interventions such as community-based health programs, performance-based financing, community health insurance, and mentorship initiatives have been implemented to reduce the Maternal Mortality Ratio (MMR) [ 6 ]. Currently, the MMR in Rwanda is 203 per 100,000 live births [ 10 ]. However, despite the progressive improvement in reducing the maternal mortality ratio, there is still a need to triple efforts if the country is to achieve SDG 3, target 1. The country is putting increased effort into training health care professionals in EmONC. Continuous Professional Development was formally introduced in 2013 with the adoption of the National CPD policy, and since then EmONC has been one of the main focus areas. Health professional bodies were mandated to assess, validate, and regulate the provision of CPD across the country, and every health provider is required to earn 60 CPD credits annually before renewing his/her professional license. This strategy enabled the Ministry of Health to keep health professionals’ knowledge and skills up to date, thereby improving the quality of maternal health services [ 11 ]. However, these CPD trainings were provided randomly, without prior assessment of the training needs of nurses and midwives in EmONC and maternity care.

    Accordingly, the aim of this study is to assess uptake of and accessibility to CPD, as well as knowledge and skills gaps in EmONC, among midwives and nurses in Rwanda. Specifically, this study assessed the basic knowledge of these health cadres in EmONC, the areas in which nurses feel confident, how often nurses and midwives receive training, and finally, the ways they prefer to obtain CPD credits to renew their license to practice. Furthermore, since the country is progressing toward using technologies across the health system, the study assessed the familiarity of midwives and nurses with using Information Technology (IT) for learning purposes. The findings could provide evidence on the status of CPD uptake by nurses and midwives and describe challenges and best practices to inform the revision of strategies to further improve CPD programs for midwives and nurses in Rwanda. In addition, the results could help to explore and deploy cost-effective training delivery models for nurses and midwives in Rwanda and in other similar contexts.

    Study setting

    Rwanda’s health system is organized along the country’s administrative structure, with referral, provincial, district, and sub-district health facilities (public and private), and has 1,695 health facilities overall [ 10 ]. Forty (40) of the 444 health facilities that conducted at least 20 deliveries per month from July 1st, 2020 to June 30th, 2021 nationwide were randomly selected for the study.

    Study design

    A cross-sectional study design was used to collect data at the selected health facilities. The study was conducted across facilities nationwide; selection was done randomly from a list of health facilities arranged by region and district to ensure equitable representation.

    Study population

    Within the randomly selected health centers, up to four HCPs (midwives and nurses) assigned to maternity services participated in the study. In total, 93% of the targeted HCPs from the 40 health facilities were recruited and participated in this study. Selected health posts had only one nurse providing health care services instead of the four planned for the survey. This research excluded all participants with less than six months’ experience at a given health facility, as they might not yet have been familiar with the facility systems and the various training opportunities for EmONC.

    Sample size and sampling strategy

    A random sampling method was used to enroll 40 health facilities in the study. After obtaining permission/authorization from District Health Offices, recruitment was done through directors/gatekeepers of the selected health facilities. A support letter from the Ministry of Health was secured and presented at each selected health facility. Using this support letter, the researchers approached the selected health facilities’ leaders (gatekeepers), including medical directors, directors of nursing, and heads of health centers, and asked them to assist in identifying potential and eligible participants. Those gatekeepers made the first contact with potential participants. Data collectors ensured that each participant met the eligibility criteria before enrolling him/her in the study to avoid selection bias.
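    As a rough illustration of the facility-sampling step described above, the snippet below draws a simple random sample of 40 facilities from a list of eligible ones. The facility IDs and the seed are invented for illustration; the paper does not describe its randomization procedure at this level of detail.

    ```python
    import random

    # Hypothetical IDs for the eligible facilities (those with >= 20
    # deliveries per month); the study reports roughly 445 such facilities.
    eligible_facilities = [f"HF-{i:03d}" for i in range(1, 446)]

    # Fixed seed for reproducibility (an illustrative choice, not the study's).
    rng = random.Random(2021)

    # Simple random sample without replacement, as in the study design.
    sampled = rng.sample(eligible_facilities, k=40)

    print(len(sampled))       # 40 facilities selected
    print(len(set(sampled)))  # 40 distinct IDs (no facility drawn twice)
    ```

    Sampling without replacement guarantees every facility appears at most once; arranging the frame by region and district before drawing, as the study did, helps keep the sample geographically representative.
    
    
    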

    Data collection process

    The data collection tool was piloted/tested in one health facility one week prior to data collection. Two nurses and two midwives were selected to respond to the questionnaire for testing and validation purposes. This informed the revision and finalization of the questionnaire by the research team.

    The research team deployed two enumerators to each selected health facility to collect data using the validated questionnaire. Data collection began with the head of each health facility introducing the enumerators to potential respondents. Potential respondents were provided with information about the research study, including its objectives, and were given an official consent form requesting their participation. After consenting, each participant completed a questionnaire collecting quantitative data related to the study objectives. The electronic questionnaire was developed and uploaded in the KoBo Toolbox, an open-source tool for collecting data using mobile phones. The questions were accessed via a web application link, and the collected data were uploaded onto a password-protected server. Data were collected from 22 November 2021 to 26 November 2021.

    Data Analysis

    Data analysis was preceded by data quality assurance, which was performed regularly throughout the data collection period. This was enabled by the KoBo Toolbox, which pushed real-time data to a national server. Study questions and related data were organized, collected, and analyzed according to the study objectives. The data set was accessed via a web application link, and the collected data were stored on a password-protected server for confidentiality and ethical purposes. Data were analyzed using IBM SPSS Statistics 27 software. The analysis generated standard descriptive statistics and frequencies for the key assessment indicators on uptake of CPD, accessibility of CPD, and knowledge and skills gaps in EmONC among nurses and midwives. Data are presented using tables and graphs.
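    The descriptive analysis was run in SPSS, but the kind of frequency-and-percentage summary it produced can be sketched in a few lines of Python. The responses below are invented stand-ins, not the study's data set; the counts are merely chosen so the overall share matches the reported 79.9% figure for HCPs with fewer than 60 EmONC-related CPD credits.

    ```python
    from collections import Counter

    # Mock categorical variable for 149 respondents (invented data).
    cpd_credits = (["<60 credits"] * 119) + ([">=60 credits"] * 30)

    # Frequency counts, as an SPSS FREQUENCIES table would report them.
    counts = Counter(cpd_credits)
    n = len(cpd_credits)

    for category, count in counts.items():
        print(f"{category}: {count} ({100 * count / n:.1f}%)")
    # "<60 credits" prints as 119 (79.9%)
    ```

    The same pattern (count per category divided by the total, times 100) generates every percentage reported in the results tables.
    
    
    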

    Ethical consideration

    Ethical clearance from the Rwanda National Ethical Committee was granted to the researchers to conduct this study (Approval Notice: No. 984/RNEC/2021). In addition, authorization from each health facility to collect data was secured prior to conducting interviews. Data collectors adhered to principles of confidentiality and ethics in data collection. No person’s name (except for the identification of the data collector) was recorded on any of the interviews. Permission to enter each facility, interview the different employees, and review registers was requested from the director or staff in charge of the health facility at the beginning of each visit. The data collection teams carried with them official letters of cooperation from the Ministry of Health and district-level offices. Interviewees were requested to read an information note and sign a consent form before proceeding with the interview. No incentives were provided to participants; participation was voluntary. Respondents were able to withdraw their participation at any time during the interview. The information notes and consent forms are added as annexes for easy reference.

    Demographic characteristics of the respondents

    The study was carried out across 40 health facilities, and 149 health care providers (HCPs) participated in the study (Annex 1). Of them, 99 (66.4%) were female and 50 (33.6%) were male. The largest age group among participating HCPs was those less than 30 years of age, 38 (25.5%). The majority of the participants were nurses, 104 (69.8%), while the remaining 45 (30.2%) were midwives. Among the nurses, 21 (20.2%) had a high school diploma, 74 (71.1%) had an advanced diploma, and 9 (8.7%) had a bachelor’s degree. Among midwives, 42 (93.3%) had advanced diplomas and 3 (6.7%) had bachelor’s degrees in midwifery. The highest proportion of participating HCPs, 96 (64.4%), had been working in maternity services for more than 5 years. The detailed results are shown in Table 1 below.

    Respondents’ obstetric experience and skills in EmONC

    Overall, most HCPs (71.1%) reported conducting between one and five births themselves per week. Broken down by facility type, the largest share of respondents reported performing between five and ten births per week at hospitals (50%), and between one and five at health centers (74.4%) and health posts (75.0%). The majority of HCPs felt confident making a birth-related diagnosis on their own (89.9% overall): all hospital HCPs felt confident, as did 91.5% at health centers, but none at health posts. Similarly, 89.9% overall felt confident performing births on their own, including all hospital HCPs, 89.9% at health centers, and 25% at health posts. Across all facility types, the majority of HCPs agreed that complications sometimes occur while performing deliveries: 90.6% overall, 93.8% at hospitals, 90.7% at health centers, and 75.0% at health posts. In terms of diagnosis and management, the majority of HCPs reported feeling least confident with pre-eclampsia: 67.1% overall, 31.3% at hospitals, 71.3% at health centers, and 75.0% at health posts. Among newborn health concerns, HCPs were least confident in providing immediate care after delivery for a baby who does not cry: 74.5% overall, 37.5% at hospitals, 78.3% at health centers, and 100% at health posts. Table 2 shows the results in detail.

    Respondents’ previous training on EmONC related topics

    Overall, the majority of HCPs reported never having been trained on EmONC after graduating from their formal education. Hospitals were the exception: most hospital HCPs had been trained, either within the past two years (37.5%) or between two and five years earlier (37.5%); at health centers and health posts, the majority had never been trained on EmONC after graduation. Among the relevant EmONC competencies, the majority of HCPs overall had been trained on basic EmONC (BEmONC), while 28.8% had not been trained on any of the topics. The proportion of HCPs not trained on any topic was 12.5% at hospitals, 30.2% at health centers, and 50% at health posts. Of all HCPs, 63.1% had benefited from a mentorship program on EmONC after graduation. Table 3 below details the findings.

    Ways respondents accumulate CPD credits

    Every nurse or midwife is required to earn at least 60 CPD credits to renew their license. This study revealed that HCPs who had acquired fewer than 60 CPD credits related to EmONC training were the large majority: 79.9% overall, 56.3% at hospitals, 82.2% at health centers, and 100% at health posts. The most common ways of delivering CPD credits were workshops (43.6%) and online training (34.5%). Most HCPs noted that it was hard for them to achieve the required CPD credits (57.0%). The detailed results are presented in Table 4 below.

    Respondents’ digital literacy and access to digital devices

    The most common electronic device owned by HCPs was a smartphone (98.7%). Overall, there were no significant differences among the places where HCPs usually accessed the internet. However, accessing the internet at work was most frequent at hospitals (43.7%) and health centers (46.5%), and least frequent at health posts (0.0%). The majority of HCPs reported being comfortable using a computer (57.0%), a basic feature phone (83.2%), and a smartphone (86.6%) as tools that facilitate eLearning. Similarly, being comfortable with eLearning predominated overall (51.0%), at hospitals (75.0%), and at health centers (48.8%); at health posts, however, the majority of HCPs reported being only somewhat comfortable with eLearning (50.0%). Furthermore, HCPs under 40 years of age were the most comfortable with eLearning. The details are provided in Table 5 below.

    Uptake of E-learning

    Of all HCPs, 72.5% used their computer or smartphone to access eLearning. Of these, 23.1% dedicated more than 30 minutes per day to self-training in EmONC using their phone. At hospitals, however, larger proportions dedicated either a few minutes per week (30.0%) or more than an hour per week (30.0%). Overall, the largest group of HCPs (37.3%) had not used eLearning as a teaching-learning method in the past two years. Never-users were more common at health centers (39.5%) and health posts (75.0%), whereas eLearning utilization was high at hospitals, where 43.8% had used it more than four times in the past two years. Supplemental knowledge and skills in existing areas was the most common expectation (80.5%) from the phone-delivered eLearning EmONC modules, while learning new teaching and learning methods was the least common (51.0%). At health centers, 55.8% of HCPs would be able and willing to self-support their personal remote training; in contrast, 43.8% at hospitals and 100% at health posts would not. Table 1 in the annex shows the findings in detail.

    Respondents’ basic knowledge on EmONC

    When assessed on basic EmONC knowledge, specifically on which aspects should be given special attention during the abdominal examination of a pregnant mother at term, the majority of HCPs agreed that all elements (fundal height, descent of the presenting part, fetal heart tones, and frequency and duration of contractions) should receive special attention. However, at health centers and health posts, some HCPs believed that not all of these elements require special attention (for example, 17.1% of HCPs from health centers agreed that only fundal height should be given special attention). In addition, the majority of HCPs defined postpartum hemorrhage as vaginal bleeding of more than 500 mL after vaginal birth: 95.3% overall, 100% at hospitals, 84.6% at health centers, and 100% at health posts. Table 2 in the annex summarizes the findings.

    The purpose of the study was to assess progress in the uptake of and access to CPD among midwives and nurses and to determine their knowledge and skills gaps in key EmONC competencies in Rwanda. Overall, the findings revealed low to moderate uptake of CPD among nurse and midwife cadres, owing to lack of opportunity or difficult access. The study also revealed knowledge and skills gaps in critical EmONC competencies, which calls for a particular focus on EmONC during CPD programming. The findings indicated that the majority of nurses had never received any formal training on EmONC after graduating from their formal education; the proportion of HCPs not trained on any topic was 12.5% at hospitals, 30.2% at health centers, and 50% at health posts. This calls for more investment in CPD opportunities tailored to the needs of individual health care providers practicing EmONC in a given health facility. Whichever delivery method is used, CPD is important for keeping practitioners updated and enhancing their practice [12]. According to Gray et al. [1], adult learners need a structured, teacher-guided approach, whereas younger clinicians can adapt to different modes of teaching, including self-directed learning. Similarly, the study revealed limited uptake of mentorship on EmONC, with only 63.1% of health providers mentored after graduation. Low mentorship coverage is another important missed opportunity, because mentorship has proved to be a cost-effective and sustainable approach to transferring skills from more experienced professionals to less experienced ones [13, 14, 15]. This may be due to various factors, including the lack of a systematic plan for mentorship scale-up and financial constraints.

    As such, mentorship could complement the continuous professional development delivered through traditional training. Similar strategies have been implemented in countries with comparable contexts; for example, a cascade-type mentorship for health workers implemented in Uganda demonstrated strong results in terms of maternal and neonatal health outcomes. Nevertheless, given the complex set of practical skills required for good management of obstetric complications, clinical mentorship for EmONC should remain flexible, dynamic, and always adapted to the case at hand and to the competency and skills gaps of the health care provider [13, 16, 17].

    Although pre-eclampsia/eclampsia and postpartum hemorrhage remain among the leading causes of maternal death during delivery globally and in Rwanda, this study revealed knowledge and skills gaps in the management of both of these critical obstetric conditions among nurses and midwives. Nurses and midwives reported feeling less confident in some aspects of EmONC, including the management of eclampsia and the care of newborns after birth, especially when babies do not cry immediately [5]. Organizing CPD training in the management of these life-threatening conditions could help health providers feel confident in providing effective, high-quality EmONC services, thus contributing to accelerating the reduction of preventable maternal and neonatal deaths in Rwanda toward the SDG goals and targets [18, 19, 20].

    These findings align with a study conducted in Australia in which participants requested more training in the management of eclampsia [21]. Moreover, the skills gaps in managing key obstetric complications could be explained by a mismatch between CPD offerings and needs in the field: CPD opportunities are limited in Rwanda, and they are not always aligned with the felt needs of nurses and midwives on the ground. Similar mismatches between CPD needs and supply have been reported by Feldacker et al. [22] in a study conducted in Malawi, Tanzania, and South Africa, attributed to a lack of evidence-based planning and effective coordination. The Ministry of Health of Rwanda has put in place incentives to motivate health care providers to invest in lifelong learning; one of them is the requirement to earn at least 60 CPD credits before renewing the license to practice nursing and midwifery in Rwanda [23]. However, 79.9% of respondents faced challenges in having the necessary credits when planning to renew their license, with the burden varying by workplace across the health system: the proportions reporting such challenges were 56.3% at hospitals, 82.2% at health centers, and 100% at health posts. The most common ways of delivering CPD credits were workshops (43.6%) and online training (34.5%). Most HCPs noted that it was hard for them to achieve the required CPD credits (57.0%), suggesting a need for an organizational culture that supports CPD. For participants who received training, the common methodologies were face-to-face and online teaching. Regarding the devices used for online training, the majority used their smartphones, while others used a computer. Since internet connectivity is a challenge, respondents reported using these devices mostly at their workplaces to benefit from the internet connectivity there [24].
Similarly, Addae et al. [25], in a study of the online learning experience of nursing and midwifery students in Ghana, reported that lack of internet connectivity was one of the major challenges that impeded the programme. These findings highlight the untapped potential of advanced information technology for delivering CPD courses more cost-effectively: virtual training represents only 34.5%, owing to limited internet coverage and the cost of internet use for the health workforce. This calls for more investment in IT infrastructure, such as optic fiber and other cost-effective means of expanding internet coverage across the country. Meanwhile, tools such as the Safe Delivery App [26], which provides nurses and midwives with evidence-based, up-to-date clinical guidelines, could be used, as the app does not require internet access once downloaded. These findings correlate with a study on the use of blended learning in nursing in which poor IT skills and lack of organizational support were identified as weaknesses in CPD provision among nurses and midwives [27, 28, 29].

    The study findings call for more investment in strengthening the skills and capacities of human resources for health through predictable and sustainable approaches such as CPD and mentorship. This will require integrating these interventions into health sector policies and strategies, allocating finances, and strong monitoring and evaluation of their impact on health service delivery and quality of care. Moreover, the study revealed challenges in the use of eLearning as a tool to sustain CPD owing to limited internet connectivity. The potential of technology in health care delivery is well established, and many lessons and best practices were gathered during the COVID-19 pandemic. This calls for the Government of Rwanda to continue strengthening ICT infrastructure and to deploy strategies that ease the financial burden on users.

    Strengths and limitations

    This study was conducted using random sampling from all facilities with significant obstetric activity across the country, arranged by region and district to ensure equitable representation.

    This enabled the researchers to collect information from reliable sources while minimizing bias, thereby increasing validity and generalizability. However, since the study relied on self-assessment and self-reporting of knowledge and skills, the research team acknowledges several limitations, including respondent bias: participants might have reported what they felt would meet the researchers’ expectations. In addition, owing to financial constraints, the study sample covered 40 of 444 health facilities, which could have affected the generalizability of the findings. Beyond these general survey-related limitations, evidence suggests that increased knowledge among healthcare professionals does not automatically translate into improvements in clinical practice or patient outcomes, although the literature supports CPD both in credit accumulation and in how clinicians apply the knowledge gained in practice [30]. Therefore, more studies are needed to explore and evaluate how CPD training translates into actual changes in practice and its overall impact on reducing preventable maternal and neonatal deaths in Rwanda.

    The main purpose of this study was to assess the CPD program for nurses and midwives in the context of EmONC in Rwanda. The findings highlight modest progress, attributable to the lack of a systematic policy and strategic approach to implementation. This has resulted in skills and knowledge gaps in three critical, life-threatening areas of obstetric care, namely pre-eclampsia/eclampsia, postpartum hemorrhage (PPH), and essential newborn care. The study revealed that nurses and midwives struggle to accumulate the number of credits required to renew their licenses to practice because CPD opportunities are limited. Online teaching was identified as an alternative methodology for CPD delivery; however, participants cited internet access as a barrier to effective online learning. The study findings are therefore a call for policy makers to integrate CPD for nurses and midwives into policies and strategies and to allocate sufficient resources to ensure systematic implementation. This will require a multisectoral approach to ensure that CPD is prioritized and financed, and that an infrastructure system is put in place to facilitate uptake of CPD in a cost-effective manner.

    Data Availability

    The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

    Gray M, Rowe J, Barnes M. Continuing professional development and changed re-registration requirements: midwives’ reflections. Nurse Educ Today. 2014;34:860–5.


    Ross K, Barr J, Stevens J. Mandatory continuing professional development requirements: what does this mean for Australian nurses. BMC Nurs. 2013;12(1). https://doi.org/10.1186/1472-6955-12-9 .

    Baloyi OB, Jarvis MA. Continuing professional development status in the World Health Organisation, Afro-region member states. Int J Africa Nurs Sci. 2020;13:1–7.


    World Health Organization. Defining competent maternal and newborn health professionals: background document to the 2018 joint statement by WHO, UNFPA, UNICEF, ICM, ICN, FIGO and IPA: definition of skilled health personnel providing care during Childbirth. World Health Organization; 2018.

    WHO, UNFPA. Ending preventable maternal mortality (EPMM): a renewed focus for improving maternal and newborn health and wellbeing. World Health Organization; 2021.

    Sayinzoga F, Bijlmakers L, Van Dillen J, Mivumbi V, Ngabo F, Van Der Velden K. Maternal death audit in Rwanda 2009–2013: a nationwide facility-based retrospective cohort study. BMJ Open. 2016;6:1–8.


    World Health Organization (WHO). New global targets to prevent maternal deaths: access to a ‘continuum of care’ needed before, during and after pregnancy and childbirth. WHO; 2021.

    UNICEF. (2019) Healthy mothers, healthy babies: Taking stock of maternal health.

    Mlambo M, Silén C, McGrath C. Lifelong learning and nurses’ continuing professional development, a metasynthesis of the literature. BMC Nurs. 2021;20:1–13.

    National Institute of Statistics of Rwanda., Ministry of Health (MOH) [Rwanda], ICF (2020) Rwanda demographic and health survey 2019–2020: key indicators report. Kigali, Rwanda, and Rockville, Maryland, USA.

    Rwanda National Continuous Professional Development Policy. https://www.ncnm.rw/documents/CPD%20POLICY.pdf

    Rwanda Ministry of Health. (2021) Health Sector Annual Performance Report 2020–2021.

    Katsikitis M, Mcallister M, Sharman R, Raith L, Faithfull-Byrne A, Priaulx R. Continuing professional development in nursing in Australia: current awareness, practice and future directions. Contemp Nurse. 2013;45:33–45.

    Yamuragiye A, Ndayisenga JP, Nkurunziza A, Bazirete O. Benefits of a mentorship program on Interprofessional Collaboration in obstetric and Neonatal Care in Rwanda: a qualitative descriptive case study. Rwanda J Med Heal Sci. 2023;6:71–83.

    Nyiringango G, Kerr M, Babenko-Mould Y, Kanazayire C, Ngabonzima A. Assessing the impact of mentorship on knowledge about and self-efficacy for neonatal resuscitation among nurses and midwives in Rwanda. Nurse Educ Pract. 2021;52:103030.

    Musabwasoni MGS, Kerr M, Babenko-Mould Y, Nzayirambaho M, Ngabonzima A. Assessing the impact of mentorship on nurses’ and midwives’ knowledge and self-efficacy in managing postpartum Hemorrhage. Int J Nurs Educ Scholarsh. 2020;17:1–10.

    Ajeani J, Ayiasi RM, Tetui M, Ekirapa-Kiracho E, Namazzi G, Kananura RM, Kiwanuka SN, Beyeza-Kashesya J. A cascade model of mentorship for frontline health workers in rural health facilities in Eastern Uganda: processes, achievements and lessons. Glob Health Action. 2017. https://doi.org/10.1080/16549716.2017.1345497 .


    Ngabonzima A, Kenyon C, Kpienbaareh D, Luginaah I, Mukunde G, Hategeka C, Cechetto DF. Developing and implementing a model of equitable distribution of mentorship in districts with spatial inequities and maldistribution of human resources for maternal and newborn care in Rwanda. BMC Health Serv Res. 2021;21:1–12.

    Teekens P, Wiechula R, Cusack L. Perceptions and experiences of nurses and midwives in continuing professional development: a systematic review protocol. JBI Database Syst Rev Implement Reports. 2018;16:1758–63.

    Manzi A, Magge H, Hedt-Gauthier BL, Michaelis AP, Cyamatare FR, Nyirazinyoye L, Hirschhorn LR, Ntaganira J. Clinical mentorship to improve pediatric quality of care at the health centers in rural Rwanda: a qualitative study of perceptions and acceptability of health care workers. BMC Health Serv Res. 2014. https://doi.org/10.1186/1472-6963-14-275 .

    Manzi A, Nyirazinyoye L, Ntaganira J, Magge H, Bigirimana E, Mukanzabikeshimana L, Hirschhorn LR, Hedt-Gauthier B. Beyond coverage: improving the quality of antenatal care delivery through integrated mentorship and quality improvement at health centers in rural Rwanda. BMC Health Serv Res. 2018;18:1–8.

    Ross K, Barr J, Stevens J. Mandatory continuing professional development requirements: what does this mean for Australian nurses. BMC Nurs. 2013. https://doi.org/10.1186/1472-6955-12-9 .

    Feldacker C, Pintye J, Jacob S, Chung MH, Middleton L, Iliffe J, Kim HN. Continuing professional development for medical, nursing, and midwifery cadres in Malawi, Tanzania and South Africa: a qualitative evaluation. PLoS ONE. 2017;12:1–15.

    Rwanda National Council of Nurses and Midwives. (2016) Guidelines for CPD policy implementation. Rwanda.

    Binti M, Mustapa H, Teo YC. Enablers and barriers of continuous Professional Development (CPD) participation among nurses and midwives. Int J Nurs Educ. 2021;13:75–84.

    Addae HY, Alhassan A, Issah S, Azupogo F. Online learning experiences among nursing and midwifery students during the Covid-19 outbreak in Ghana: a cross-sectional study. Heliyon. 2022. https://doi.org/10.1016/j.heliyon.2022.e12155 .

    Maternity Foundation About the Safe Delivery App. https://www.maternity.dk/safe-delivery-app/about-the-app/background/ . Accessed 6 Apr 2016.

    Ndayisenga JP, Nkurunziza A, Mukamana D, et al. Nursing and midwifery students’ perceptions and experiences of using blended learning in Rwanda: a qualitative study. Rwanda J Med Heal Sci. 2022;5:203–15.

    Harerimana A, Gloria N, Mtshali F, et al. E-Learning in Nursing Education in Rwanda: benefits and challenges. An exploration of participants’ perceptives. IOSR J Nurs Heal Sci. 2016;5:64–92.

    Ndayisenga JP, Babenko-Mould Y, Kasine Y, Nkurunziza A, Mukamana D, Murekezi J, Tengera O, Muhayimana A. Blended teaching and learning methods in nursing and midwifery education: a scoping review of the literature. Res J Heal Sci. 2021;9:100–14.


    Acknowledgements

    The authors express heartfelt gratitude to each individual health provider and institution that participated in this research. The authors would like to commend the Ministry of Health and the Ministry of Education for having created an enabling environment and operational framework that facilitated the conduct of this research. Any opinions stated within this document reflect those of the authors and not necessarily of the United Nations Population Fund.

    Costs related to this study were covered by voluntary contribution of authors themselves.

    Author information

    Authors and Affiliations

    Regional Office for East and Southern Africa, United Nations Population Fund, 09 Simba Road, Sunninghill, Johannesburg, South Africa

    Mathias Gakwerere

    School of Nursing and Midwifery, College of Medicine and Health Science, University of Rwanda, Kigali, Rwanda

    Jean Pierre Ndayisenga & Thiery Claudien Uhawenimana

    Arthur Labatt Family School of Nursing, Western University, London, Canada

    Jean Pierre Ndayisenga

    JSI Research & Training Institute, Inc, International Division, Washington, DC, USA

    Anaclet Ngabonzima

    School of Health Sciences, University of Rwanda, Kigali, Rwanda

    Assumpta Yamuragiye

    United Nations Population Fund, KG 7 Ave, Kigali, Rwanda

    Florien Harindimana

    College of Medicine and Health Sciences, School of Public Health, University of Rwanda, Kigali, Rwanda

    Bernard Ngabo Rwabufigiri


    Contributions

    MG conceptualized the study, developed the protocol and data collection tools, coordinated the study implementation, analyzed the data, and drafted the manuscript. CTU contributed to the study design, data analysis, and revision of the manuscript. AY was involved in manuscript drafting and data analysis and contributed to the revision of the paper. AN was involved in the study design, refinement of data collection tools, and data analysis, and contributed substantially to the review of the manuscript. FH was involved in data analysis, contributed to the study design, supported the analysis of the data, and reviewed the manuscript. BRN was involved in research tool development and supervised the data collection. JPN was involved in the study design and the overall supervision of the study and critically reviewed the manuscript. All authors have read and approved the final version of the manuscript for submission.

    Corresponding author

    Correspondence to Mathias Gakwerere .

    Ethics declarations

    Ethical approval and consent to participate

    Ethical clearance was obtained from the CMHS-IRB of the University of Rwanda (Approval Notice: No. 984/RNEC/2021), and the research was also permitted by the management of the study hospitals and health centers. All methods were performed in accordance with the relevant guidelines and regulations. All participating nurses and midwives were above 18 years of age, and informed consent was obtained from all subjects. All subjects were free to withdraw from the research at any time during the research period.

    Consent for publication

    Not applicable.

    Competing interests

    The authors declare no competing interests.

    Additional information

    Publisher’s Note

    Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

    Electronic supplementary material

    Below is the link to the electronic supplementary material.

    Supplementary Material 1

    Supplementary Material 2

    Rights and permissions

    Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


    About this article

    Cite this article

    Gakwerere, M., Ndayisenga, J.P., Ngabonzima, A. et al. Access to continuous professional development for capacity building among nurses and midwives providing emergency obstetric and neonatal care in Rwanda. BMC Health Serv Res 24 , 394 (2024). https://doi.org/10.1186/s12913-023-10440-8


    Received : 05 May 2023

    Accepted : 05 December 2023

    Published : 29 March 2024

    DOI : https://doi.org/10.1186/s12913-023-10440-8


    BMC Health Services Research

    ISSN: 1472-6963

    types of research studies in education

    IMAGES

    1. Types of Research by Method

      types of research studies in education

    2. Types of Research

      types of research studies in education

    3. Types of Research Methodology: Uses, Types & Benefits

      types of research studies in education

    4. Research

      types of research studies in education

    5. Five Basic Types of Research Studies

      types of research studies in education

    6. Different Types of Research

      types of research studies in education

    VIDEO

    1. Lecture 01: Basics of Research

    2. GSET

    3. Research, Educational research

    4. 01. Research Methodology Types and Evaluating Research

    5. 3 Types of Educational Research

    6. Review Studies Part1 (Types of Review)

    COMMENTS

    1. Types of Research

      General Types of Educational Research. Descriptive — survey, historical, content analysis, qualitative (ethnographic, narrative, phenomenological, grounded theory, and case study) Associational — correlational, causal-comparative. Intervention — experimental, quasi-experimental, action research (sort of)

    2. Educational research

      Educational research refers to the systematic collection and analysis of data related to the field of education. Research may involve a variety of methods and various aspects of education including student learning, interaction, teaching methods, teacher training, and classroom dynamics.. Educational researchers generally agree that research should be rigorous and systematic.

    3. PDF Common Guidelines for Education Research and Development

      Research contribute Type seek to test, to improved. #1: Foundational of systems or learning processes. Research. in methodologies and/o technologies of teaching provides or education the fundamental that will influence and infor outcomes. and may knowledge development in contexts. research and innovations of.

    4. What is Educational Research? + [Types, Scope & Importance]

      Educational research is interdisciplinary in nature because it draws from different fields and studies complex factual relations. Types of Educational Research Educational research can be broadly categorized into 3 which are descriptive research, correlational research, and experimental research. Each of these has distinct and overlapping features.

    5. Navigating Through the Types of Studies in Educational Research

      The content outlines the classification of educational research studies into quantitative and qualitative categories, detailing experimental, quasi-experimental, correlational, survey, case studies, documentary analysis, developmental, ethnographic, historical, and philosophical research. Each type is briefly explained, emphasizing their distinct methodologies and contributions to ...

    6. What is Education Research?

      Share. Education research is the scientific field of study that examines education and learning processes and the human attributes, interactions, organizations, and institutions that shape educational outcomes. Scholarship in the field seeks to describe, understand, and explain how learning takes place throughout a person's life and how ...

    7. Research in Education: Sage Journals

      Research in Education provides a space for fully peer-reviewed, critical, trans-disciplinary, debates on theory, policy and practice in relation to Education. International in scope, we publish challenging, well-written and theoretically innovative contributions that question and explore the concept, practice and institution of Education as an object of study.

    8. Introduction to Education Research

      The methods implemented for an educational research study should align with its set goals and objectives. This means ensuring that a proposed study design can answer the research question and that the statistical analyses are appropriate. When designing a research study and determining its research methods, the following questions should be ...

    9. Education Research and Methods

      Education Research and Methods. IES seeks to improve the quality of education for all students—prekindergarten through postsecondary and adult education—by supporting education research and the development of tools that education scientists need to conduct rigorous, applied research. Such research aims to advance our understanding of and ...

    10. Methodologies for Conducting Education Research

      Presents an overview of qualitative, quantitative and mixed-methods research designs, including how to choose the design based on the research question. This book is particularly helpful for those who want to design mixed-methods studies. Green, J. L., G. Camilli, and P. B. Elmore. 2006. Handbook of complementary methods for research in education.

    11. Educational research: some basic concepts and terminology ...

      A more widely applied way of classifying educational research studies is to define the various types of research according to the kinds of information that they provide. Accordingly, educational research studies may be classified as follows: 1.

    12. Designs for the Conduct of Scientific Research in Education

      The salient features of education delineated in Chapter 4 and the guiding principles of scientific research laid out in Chapter 3 set boundaries for the design and conduct of scientific education research. Thus, the design of a study (e.g., randomized experiment, ethnography, multiwave survey) does not itself make it scientific.

    13. Educational Research Design

      2. Survey and Correlational Research Design. A nonexperimental research design used to describe an individual or a group by having participants complete a survey or questionnaire. A correlational design uses data to determine if two or more factors are related/correlated. 3. Qualitative Research. Process of collecting, analyzing, and ...
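The correlational idea described above — using data to determine whether two or more factors are related — can be sketched with a Pearson correlation coefficient. The variables and data below are hypothetical, invented purely for illustration; they do not come from any cited study:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical survey data: weekly study hours vs. test score for six students.
hours = [1, 2, 3, 4, 5, 6]
scores = [52, 55, 61, 64, 70, 75]
r = pearson_r(hours, scores)  # near +1 here: a strong positive association
```

A value of r near +1 or -1 indicates a strong linear relationship; a value near 0 indicates little or no linear relationship. Correlation alone, of course, does not establish causation — that distinction is exactly why correlational designs are classed as nonexperimental.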

    14. 5 Different Types of Educational Research

      The data collection methods of quantitative research are more structured than qualitative ones. This research method includes different forms of surveys, e.g., online, mobile, paper, and kiosk surveys. Others include face-to-face and telephone interviews, online polls, website interceptors, and longitudinal studies. 5. Mixed Educational Research.

    15. What types of studies are there?

      Created: June 15, 2016; Last Update: September 8, 2016; Next update: 2020. There are various types of scientific studies such as experiments and comparative analyses, observational studies, surveys, or interviews. The choice of study type will mainly depend on the research question being asked. When making decisions, patients and doctors need ...

    16. Observing Schools and Classrooms

      Observation is one way for researchers to seek to understand and interpret situations based on the social and cultural meanings of those involved. In the field of education, observation can be a meaningful tool for understanding the experiences of teachers, students, caregivers, and administrators. Rigorous qualitative research is long-term ...

    17. Research Methods

      Education: Research methods are used in education to understand how students learn, how teachers teach, and how educational policies affect student outcomes. Researchers may use surveys, experiments, and observational studies to collect data on student performance, teacher effectiveness, and educational programs.

    18. Types of Educational Research

      Examples of applied research include studies on the effectiveness of teaching methods, interventions for improving student motivation, and assessments of educational programs and policies. Action research: Action research is a type of research that is conducted by educators in their own classrooms or educational settings.

    19. (Pdf) Types of Educational Research

      The presentation was done at Department of Educational Studies, School of Education, Mahatma Gandhi Central University, Bihar, India on 09.03.2021 from 3.30 PM to 4.30 PM.

    20. Types of Research Studies

      a form of inquiry in which qualitative research findings about a process or experience are aggregated or integrated across research studies. Aims can involve synthesizing qualitative findings across primary studies, generating new theoretical or conceptual models, identifying gaps in research, or generating new questions.

    21. Types of studies and research design

      Types of study design. Medical research is classified into primary and secondary research. Clinical/experimental studies are performed in primary research, whereas secondary research consolidates available studies as reviews, systematic reviews and meta-analyses. Three main areas in primary research are basic medical research, clinical research ...

    22. Frontiers

      Science education often aims to increase learners' acquisition of fundamental principles, such as learning the basic steps of scientific methods. Worked examples (WE) have proven particularly useful for supporting the development of such cognitive schemas and successive actions in order to avoid using up more cognitive resources than are necessary. Therefore, we investigated the extent to ...

    23. 1.9: Types of Research Studies and How To Interpret Them

      A meta-analysis is a type of systematic review that goes one step further, combining the data from multiple studies and using statistics to summarize it, as if creating a mega-study from many smaller studies. However, even systematic reviews and meta-analyses aren't the final word on scientific questions.
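The combining step described above is often done, in its simplest fixed-effect form, by inverse-variance weighting: each study's effect size is weighted by the reciprocal of its variance, so more precise studies count more. A minimal sketch with hypothetical effect sizes and variances (not drawn from any real meta-analysis):

```python
import math

def fixed_effect_meta(effects, variances):
    """Inverse-variance weighted pooled effect and its standard error.

    effects   -- per-study effect sizes (e.g., standardized mean differences)
    variances -- per-study sampling variances of those effects
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Hypothetical effect sizes from three small studies.
effects = [0.30, 0.45, 0.20]
variances = [0.04, 0.09, 0.02]
pooled, se = fixed_effect_meta(effects, variances)
```

The pooled estimate lands between the individual effects but closest to the most precise study (the one with the smallest variance), and its standard error is smaller than any single study's, which is the statistical payoff of a meta-analysis. Real meta-analyses typically also test for heterogeneity and may use random-effects models instead.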

    24. 5 Popular Education Beliefs That Aren't Backed by Research

      Research confirms the theory. A 2019 study pitted off-task doodling against typical learning activities like "task-related drawing" and writing. In three separate but related experiments, task-related drawing and writing beat out doodling in terms of recall—by margins as large as 300%.
