
Brian M. Belcher, Katherine E. Rasmussen, Matthew R. Kemshaw, Deborah A. Zornes, Defining and assessing research quality in a transdisciplinary context, Research Evaluation, Volume 25, Issue 1, January 2016, Pages 1–17, https://doi.org/10.1093/reseval/rvv025


Research increasingly seeks both to generate knowledge and to contribute to real-world solutions, with strong emphasis on context and social engagement. As boundaries between disciplines are crossed, and as research engages more with stakeholders in complex systems, traditional academic definitions and criteria of research quality are no longer sufficient—there is a need for a parallel evolution of principles and criteria to define and evaluate research quality in a transdisciplinary research (TDR) context. We conducted a systematic review to help answer the question: What are appropriate principles and criteria for defining and assessing TDR quality? Articles were selected and reviewed seeking: arguments for or against expanding definitions of research quality, purposes for research quality evaluation, proposed principles of research quality, proposed criteria for research quality assessment, proposed indicators and measures of research quality, and proposed processes for evaluating TDR. We used the information from the review and our own experience in two research organizations that employ TDR approaches to develop a prototype TDR quality assessment framework, organized as an evaluation rubric. We provide an overview of the relevant literature and summarize the main aspects of TDR quality identified there. Four main principles emerge: relevance, including social significance and applicability; credibility, including criteria of integration and reflexivity, added to traditional criteria of scientific rigor; legitimacy, including criteria of inclusion and fair representation of stakeholder interests; and effectiveness, with criteria that assess actual or potential contributions to problem solving and social change.

Contemporary research in the social and environmental realms places strong emphasis on achieving ‘impact’. Research programs and projects aim to generate new knowledge but also to promote and facilitate the use of that knowledge to enable change, solve problems, and support innovation ( Clark and Dickson 2003 ). Reductionist and purely disciplinary approaches are being augmented or replaced with holistic approaches that recognize the complex nature of problems and that actively engage within complex systems to contribute to change ‘on the ground’ ( Gibbons et al. 1994 ; Nowotny, Scott and Gibbons 2001 , Nowotny, Scott and Gibbons 2003 ; Klein 2006 ; Hemlin and Rasmussen 2006 ; Chataway, Smith and Wield 2007 ; Erno-Kjolhede and Hansson 2011 ). Emerging fields such as sustainability science have developed out of a need to address complex and urgent real-world problems ( Komiyama and Takeuchi 2006 ). These approaches are inherently applied and transdisciplinary, with explicit goals to contribute to real-world solutions and strong emphasis on context and social engagement ( Kates 2000 ).

While there is an ongoing conceptual and theoretical debate about the nature of the relationship between science and society (e.g. Hessels 2008 ), we take a more practical starting point based on the authors’ experience in two research organizations. The first author has been involved with the Center for International Forestry Research (CIFOR) for almost 20 years. CIFOR, as part of the Consultative Group on International Agricultural Research (CGIAR), began a major transformation in 2010 that shifted the emphasis from a primary focus on delivering high-quality science to a focus on ‘…producing, assembling and delivering, in collaboration with research and development partners, research outputs that are international public goods which will contribute to the solution of significant development problems that have been identified and prioritized with the collaboration of developing countries.’ ( CGIAR 2011 ). It was always intended that CGIAR research would be relevant to priority development and conservation issues, with emphasis on high-quality scientific outputs. The new approach puts much stronger emphasis on welfare and environmental results; research centers, programs, and individual scientists now assume shared responsibility for achieving development outcomes. This requires new ways of working, with more and different kinds of partnerships and more deliberate and strategic engagement in social systems.

Royal Roads University (RRU), the home institution of all four authors, is a relatively new (created in 1995) public university in Canada. It is interdisciplinary by design, with just two faculties (Faculty of Social and Applied Science; Faculty of Management) and strong emphasis on problem-oriented research. Faculty and student research is typically ‘applied’ in the Organization for Economic Co-operation and Development (2012) sense of ‘original investigation undertaken in order to acquire new knowledge … directed primarily towards a specific practical aim or objective’.

An increasing amount of the research done within both of these organizations can be classified as transdisciplinary research (TDR). TDR crosses disciplinary and institutional boundaries and is context-specific and problem-oriented (Klein 2006; Carew and Wickson 2010). It combines and blends methodologies from different theoretical paradigms, includes a diversity of both academic and lay actors, and is conducted with a range of research goals, organizational forms, and outputs (Klein 2006; Boix-Mansilla 2006a; Erno-Kjolhede and Hansson 2011). The problem-oriented nature of TDR and the importance placed on societal relevance and engagement are broadly accepted as defining characteristics of TDR (Carew and Wickson 2010).

The experience developing and using TDR approaches at CIFOR and RRU highlights the need for a parallel evolution of principles and criteria for evaluating research quality in a TDR context. Scientists appreciate and often welcome the need and the opportunity to expand the reach of their research, to contribute more effectively to change processes. At the same time, they feel the pressure of added expectations and are looking for guidance.

In any activity, we need principles, guidelines, criteria, or benchmarks that can be used to design the activity, assess its potential, and evaluate its progress and accomplishments. Effective research quality criteria are necessary to guide the funding, management, ongoing development, and advancement of research methods, projects, and programs. The lack of quality criteria to guide and assess research design and performance is seen as hindering the development of transdisciplinary approaches ( Bergmann et al. 2005 ; Feller 2006 ; Chataway, Smith and Wield 2007 ; Ozga 2008 ; Carew and Wickson 2010 ; Jahn and Keil 2015 ). Appropriate quality evaluation is essential to ensure that research receives support and funding, and to guide and train researchers and managers to realize high-quality research ( Boix-Mansilla 2006a ; Klein 2008 ; Aagaard-Hansen and Svedin 2009 ; Carew and Wickson 2010 ).

Traditional disciplinary research is built on well-established methodological and epistemological principles and practices. Within disciplinary research, quality has been defined narrowly, with the primary criteria being scientific excellence and scientific relevance (Feller 2006; Chataway, Smith and Wield 2007; Erno-Kjolhede and Hansson 2011). Disciplines have well-established (often implicit) criteria and processes for the evaluation of quality in research design (Erno-Kjolhede and Hansson 2011). TDR that is highly context-specific and problem-oriented, and that includes nonacademic societal actors in the research process, is challenging to evaluate (Wickson, Carew and Russell 2006; Aagaard-Hansen and Svedin 2009; Andrén 2010; Carew and Wickson 2010; Huutoniemi 2010). There is no one definition or understanding of what constitutes quality, nor a set guide for how to do TDR (Lincoln 1995; Morrow 2005; Oberg 2008; Andrén 2010; Huutoniemi 2010). When epistemologies and methods from more than one discipline are used, disciplinary criteria may be insufficient and criteria from more than one discipline may be contradictory; cultural conflicts can arise as a range of actors use different terminology for the same concepts or the same terminology for different concepts (Chataway, Smith and Wield 2007; Oberg 2008).

Current research evaluation approaches as applied to individual researchers, programs, and research units are still based primarily on measures of academic outputs (publications and the prestige of the publishing journal), citations, and peer assessment ( Boix-Mansilla 2006a ; Feller 2006 ; Erno-Kjolhede and Hansson 2011 ). While these indicators of research quality remain relevant, additional criteria are needed to address the innovative approaches and the diversity of actors, outputs, outcomes, and long-term social impacts of TDR. It can be difficult to find appropriate outlets for TDR publications simply because the research does not meet the expectations of traditional discipline-oriented journals. Moreover, a wider range of inputs and of outputs means that TDR may result in fewer academic outputs. This has negative implications for transdisciplinary researchers, whose performance appraisals and long-term career progression are largely governed by traditional publication and citation-based metrics of evaluation. Research managers, peer reviewers, academic committees, and granting agencies all struggle with how to evaluate and how to compare TDR projects ( ex ante or ex post ) in the absence of appropriate criteria to address epistemological and methodological variability. The extent of engagement of stakeholders 1 in the research process will vary by project, from information sharing through to active collaboration ( Brandt et al. 2013) , but at any level, the involvement of stakeholders adds complexity to the conceptualization of quality. We need to know what ‘good research’ is in a transdisciplinary context.

As Tijssen ( 2003 : 93) put it: ‘Clearly, in view of its strategic and policy relevance, developing and producing generally acceptable measures of “research excellence” is one of the chief evaluation challenges of the years to come’. Clear criteria are needed for research quality evaluation to foster excellence while supporting innovation: ‘A principal barrier to a broader uptake of TD research is a lack of clarity on what good quality TD research looks like’ ( Carew and Wickson 2010 : 1154). In the absence of alternatives, many evaluators, including funding bodies, rely on conventional, discipline-specific measures of quality which do not address important aspects of TDR.

There is an emerging literature that reviews, synthesizes, or empirically evaluates knowledge and best practice in research evaluation in a TDR context and that proposes criteria and evaluation approaches (Defila and Di Giulio 1999; Bergmann et al. 2005; Wickson, Carew and Russell 2006; Klein 2008; Carew and Wickson 2010; ERIC 2010; de Jong et al. 2011; Spaapen and Van Drooge 2011). Much of it comes from a few fields, including health care, education, and evaluation; little comes from the natural resource management and sustainability science realms, despite the need for guidance in these areas. National-scale reviews have begun to recognize the need for broader research evaluation criteria but have made little progress in addressing it (Donovan 2008; KNAW 2009; REF 2011; ARC 2012; TEC 2012). A summary of the national reviews examined in the development of this research is provided in Supplementary Appendix 1. While there are some published evaluation schemes for TDR and interdisciplinary research (IDR), there is ‘substantial variation in the balance different authors achieve between comprehensiveness and over-prescription’ (Wickson and Carew 2014: 256) and still a need to develop standardized quality criteria that are ‘uniquely flexible to provide valid, reliable means to evaluate and compare projects, while not stifling the evolution and responsiveness of the approach’ (Wickson and Carew 2014: 256).

There is a need and an opportunity to synthesize current ideas about how to define and assess quality in TDR. To address this, we conducted a systematic review of the literature that discusses the definitions of research quality as well as the suggested principles and criteria for assessing TDR quality. The aim is to identify appropriate principles and criteria for defining and measuring research quality in a transdisciplinary context and to organize those principles and criteria as an evaluation framework.

The review question was: What are appropriate principles, criteria, and indicators for defining and assessing research quality in TDR?

This article presents the method used for the systematic review and our synthesis, followed by key findings. Theoretical arguments for why new principles and criteria are needed for TDR are presented, along with associated discussion of the evaluation process. A framework of principles and criteria for TDR quality evaluation, derived from our synthesis of the literature, is presented along with guidance on its application. Finally, recommendations for next steps in this research and needs for future research are discussed.

2.1 Systematic review

Systematic review is a rigorous, transparent, and replicable methodology that has become widely used to inform evidence-based policy, management, and decision making (Pullin and Stewart 2006; CEE 2010). Systematic reviews follow a detailed protocol with explicit inclusion and exclusion criteria to ensure a repeatable and comprehensive review of the target literature. Review protocols are shared, and often published as peer-reviewed articles, before the review is undertaken, to invite critique and suggestions. Systematic reviews are most commonly used to synthesize knowledge on an empirical question by collating data and analyses from a series of comparable studies, though the methods used in systematic reviews are continually evolving and are increasingly being developed to explore a wider diversity of questions (Chandler 2014). The current study question is theoretical and methodological, not empirical. Nevertheless, with a diverse and diffuse literature on the quality of TDR, a systematic review approach provides a method for a thorough and rigorous review. The protocol is published and available at http://www.cifor.org/online-library/browse/view-publication/publication/4382.html. A schematic diagram of the systematic review process is presented in Fig. 1.

Figure 1. Search process.

2.2 Search terms

Search terms were designed to identify publications that discuss the evaluation or assessment of quality or excellence 2 of research 3 that is done in a TDR context. Search terms are listed online in Supplementary Appendices 2 and 3 . The search strategy favored sensitivity over specificity to ensure that we captured the relevant information.

2.3 Databases searched

ISI Web of Knowledge (WoK) and Scopus were searched between 26 June 2013 and 6 August 2013. The combined searches yielded 15,613 unique citations. Additional searches to update the first searches were carried out in June 2014 and March 2015, for a total of 19,402 titles scanned. Google Scholar (GS) was searched separately by two reviewers during each search period. The first reviewer’s search was done on 2 September 2013 (Search 1) and 3 September 2013 (Search 2), yielding 739 and 745 titles, respectively. The second reviewer’s search was done on 19 November 2013 (Search 1) and 25 November 2013 (Search 2), yielding 769 and 774 titles, respectively. A third search done on 17 March 2015 by one reviewer yielded 98 new titles. Reviewers found high redundancy between the WoK/Scopus searches and the GS searches.

2.4 Targeted journal searches

Highly relevant journals, including Research Evaluation, Evaluation and Program Planning, Scientometrics, Research Policy, Futures, American Journal of Evaluation, Evaluation Review, and Evaluation, were comprehensively searched using broader, more inclusive search strings that would have been unmanageable for the main database search.

2.5 Supplementary searches

References in included articles were reviewed to identify additional relevant literature. td-net’s ‘Tour d’Horizon of Literature’ lists important inter- and transdisciplinary publications collected through an invitation to experts in the field to submit publications (td-net 2014). Six additional articles were identified through these supplementary searches.

2.6 Limitations of coverage

The review was limited to English-language published articles and material available through internet searches. There was no systematic way to search the gray (unpublished) literature, but relevant material identified through supplementary searches was included.

2.7 Inclusion of articles

This study sought articles that review, critique, discuss, and/or propose principles, criteria, indicators, and/or measures for the evaluation of quality relevant to TDR. As noted, this yielded a large number of titles. We then selected only those articles with an explicit focus on the meaning of IDR and/or TDR quality and how to achieve, measure or evaluate it. Inclusion and exclusion criteria were developed through an iterative process of trial article screening and discussion within the research team. Through this process, inter-reviewer agreement was tested and strengthened. Inclusion criteria are listed in Tables 1 and 2 .

Table 1. Inclusion criteria for title and abstract screening

Table 2. Inclusion criteria for abstract and full article screening

Article screening was done in parallel by two reviewers in three rounds: (1) title, (2) abstract, and (3) full article. In cases of uncertainty, papers were included to the next round. Final decisions on inclusion of contested papers were made by consensus among the four team members.
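To make the screening logic concrete, the sketch below models how two reviewers' decisions for a single round might be merged: uncertain papers advance to the next round, and direct include/exclude disagreements are set aside for team consensus. This is a hypothetical illustration of the process described above, not the authors' actual tooling; the decision labels and function names are our own.

```python
# Hypothetical sketch of one dual-reviewer screening round; not the
# authors' tooling. Decisions: 'include', 'exclude', or 'uncertain'.

def merge_round(decisions_a: dict, decisions_b: dict):
    """Combine two reviewers' per-article decisions for one round.

    A paper advances if either reviewer includes it or is uncertain;
    direct include/exclude disagreements are flagged for consensus
    by the full team.
    """
    advance, contested = [], []
    for article_id, a in decisions_a.items():
        b = decisions_b[article_id]
        if {a, b} == {"include", "exclude"}:
            contested.append(article_id)   # resolved by the four team members
        elif "include" in (a, b) or "uncertain" in (a, b):
            advance.append(article_id)     # when in doubt, keep for next round
    return advance, contested

# Example: one title-screening round over three papers.
reviewer_1 = {"p1": "include", "p2": "uncertain", "p3": "include"}
reviewer_2 = {"p1": "include", "p2": "exclude", "p3": "exclude"}
print(merge_round(reviewer_1, reviewer_2))  # (['p1', 'p2'], ['p3'])
```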

2.8 Critical appraisal

In typical systematic reviews, individual articles are appraised to ensure that they are adequate for answering the research question and to assess the methods of each study for susceptibility to bias that could influence the outcome of the review (Petticrew and Roberts 2006). Most papers included in this review are theoretical and methodological papers, not empirical studies. Most do not have explicit methods that can be appraised with existing quality assessment frameworks. Our critical appraisal considered four criteria adapted from Spencer et al. (2003): (1) relevance to the review question, (2) clarity and logic of how information in the paper was generated, (3) significance of the contribution (are new ideas offered?), and (4) generalizability (is the context specified; do the ideas apply in other contexts?). Disagreements were discussed to reach consensus.
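A minimal sketch of what an appraisal record for these four criteria could look like, assuming boolean judgments per criterion; the field names and the all-four pass rule are our own illustrative choices, not part of the published protocol:

```python
from dataclasses import dataclass

# Illustrative appraisal record for the four criteria adapted from
# Spencer et al. (2003); field names and pass rule are our own.

@dataclass
class CriticalAppraisal:
    article_id: str
    relevant: bool         # (1) relevance to the review question
    clear_logic: bool      # (2) clarity/logic of how the information was generated
    significant: bool      # (3) significance: are new ideas offered?
    generalizable: bool    # (4) context specified; do the ideas apply elsewhere?

    def satisfactory(self) -> bool:
        # Disagreements were resolved by discussion; here we simply
        # require all four criteria to hold.
        return all((self.relevant, self.clear_logic,
                    self.significant, self.generalizable))

print(CriticalAppraisal("rvv025", True, True, True, True).satisfactory())  # True
```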

2.9 Data extraction and management

The review sought information on: arguments for or against expanding definitions of research quality, purposes for research quality evaluation, principles of research quality, criteria for research quality assessment, indicators and measures of research quality, and processes for evaluating TDR. Four reviewers independently extracted data from selected articles using the parameters listed in Supplementary Appendix 4 .

2.10 Data synthesis and TDR framework design

Our aim was to synthesize ideas, definitions, and recommendations for TDR quality criteria into a comprehensive and generalizable framework for the evaluation of quality in TDR. Key ideas were extracted from each article and summarized in an Excel database. We classified these ideas into themes and ultimately into overarching principles and associated criteria of TDR quality organized as a rubric ( Wickson and Carew 2014 ). Definitions of each principle and criterion were developed and rubric statements formulated based on the literature and our experience. These criteria (adjusted appropriately to be applied ex ante or ex post ) are intended to be used to assess a TDR project. The reviewer should consider whether the project fully satisfies, partially satisfies, or fails to satisfy each criterion. More information on application is provided in Section 4.3 below.

We tested the framework on a set of completed RRU graduate theses that used transdisciplinary approaches, with an explicit problem orientation and intent to contribute to social or environmental change. Three rounds of testing were done, with revisions after each round to refine and improve the framework.

3.1 Overview of the selected articles

Thirty-eight papers satisfied the inclusion criteria. A wide range of terms are used in the selected papers, including: cross-disciplinary; interdisciplinary; transdisciplinary; methodological pluralism; mode 2; triple helix; and supradisciplinary. Eight included papers specifically focused on sustainability science or TDR in natural resource management, or identified sustainability research as a growing TDR field that needs new forms of evaluation ( Cash et al. 2002 ; Bergmann et al. 2005 ; Chataway, Smith and Wield 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Andrén 2010 ; Carew and Wickson 2010 ; Lang et al. 2012 ; Gaziulusoy and Boyle 2013 ). Carew and Wickson (2010) build on the experience in the TDR realm to propose criteria and indicators of quality for ‘responsible research and innovation’.

The selected articles are written from three main perspectives. One set is primarily interested in advancing TDR approaches. These papers recognize the need for new quality measures to encourage and promote high-quality research and to overcome perceived biases against TDR approaches in research funding and publishing. A second set of papers is written from an evaluation perspective, with a focus on improving evaluation of TDR. The third set is written from the perspective of qualitative research characterized by methodological pluralism, with many characteristics and issues relevant to TDR approaches.

The majority of the articles focus on the project scale, some on the organization level, and some do not specify. Some articles explicitly focus on ex ante evaluation (e.g. proposal evaluation), others on ex post evaluation, and many are not explicit about the project stage they are concerned with. The methods used in the reviewed articles include authors’ reflection and opinion, literature review, expert consultation, document analysis, and case study. Summaries of report characteristics are available online (Supplementary Appendices 5–8). Eight articles provide comprehensive evaluation frameworks and quality criteria specifically for TDR and research-in-context. The rest of the articles discuss aspects of quality related to TDR and recommend quality definitions, criteria, and/or evaluation processes.

3.2 The need for quality criteria and evaluation methods for TDR

Many of the selected articles highlight the lack of widely agreed principles and criteria of TDR quality. They note that, in the absence of TDR quality frameworks, disciplinary criteria are used ( Morrow 2005 ; Boix-Mansilla 2006a , b ; Feller 2006 ; Klein 2006 , 2008 ; Wickson, Carew and Russell 2006 ; Scott 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Oberg 2008 ; Erno-Kjolhede and Hansson 2011 ), and evaluations are often carried out by reviewers who lack cross-disciplinary experience and do not have a shared understanding of quality ( Aagaard-Hansen and Svedin 2009 ). Quality is discussed by many as a relative concept, developed within disciplines, and therefore defined and understood differently in each field ( Morrow 2005 ; Klein 2006 ; Oberg 2008 ; Mitchell and Willets 2009 ; Huutoniemi 2010 ; Hellstrom 2011 ). Jahn and Keil (2015) point out the difficulty of creating a common set of quality criteria for TDR in the absence of a standard agreed-upon definition of TDR. Many of the selected papers argue the need to move beyond narrowly defined ideas of ‘scientific excellence’ to incorporate a broader assessment of quality which includes societal relevance ( Hemlin and Rasmussen 2006 ; Chataway, Smith and Wield 2007 ; Ozga 2007 ; Spaapen, Dijstelbloem and Wamelink 2007 ). This shift includes greater focus on research organization, research process, and continuous learning, rather than primarily on research outputs ( Hemlin and Rasmussen 2006 ; de Jong et al. 2011 ; Wickson and Carew 2014 ; Jahn and Keil 2015 ). This responds to and reflects societal expectations that research should be accountable and have demonstrated utility ( Cloete 1997 ; Defila and Di Giulio 1999 ; Wickson, Carew and Russell 2006 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Stige 2009 ).

A central aim of TDR is to achieve socially relevant outcomes, and TDR quality criteria should demonstrate accountability to society (Cloete 1997; Hemlin and Rasmussen 2006; Chataway, Smith and Wield 2007; Ozga 2007; Spaapen, Dijstelbloem and Wamelink 2007; de Jong et al. 2011). Integration and mutual learning are core elements of TDR; it is not enough to transcend boundaries and incorporate societal knowledge but, as Carew and Wickson (2010: 1147) summarize: ‘…the TD researcher needs to put effort into integrating these potentially disparate knowledges with a view to creating useable knowledge. That is, knowledge that can be applied in a given problem context and has some prospect of producing desired change in that context’. The inclusion of societal actors in the research process, the unique and often dispersed organization of research teams, and the deliberate integration of different traditions of knowledge production all fall outside of conventional assessment criteria (Feller 2006).

Not only do the criteria need to be updated, expanded, and agreed upon, with assumptions made explicit (Boix-Mansilla 2006a; Klein 2006; Scott 2007), but, given the specific problem orientation of TDR, reviewers beyond disciplinary academic peers need to be included in the assessment of quality (Cloete 1997; Scott 2007; Spaapen, Dijstelbloem and Wamelink 2007; Klein 2008). Several authors discuss the lack of reviewers with strong cross-disciplinary experience (Aagaard-Hansen and Svedin 2009) and the lack of common criteria, philosophical foundations, and language for use by peer reviewers (Klein 2008; Aagaard-Hansen and Svedin 2009). Peer review of TDR could be improved with explicit TDR quality criteria and with appropriate processes in place to ensure clear dialog between reviewers.

Finally, there is the need for increased emphasis on evaluation as part of the research process ( Bergmann et al. 2005 ; Hemlin and Rasmussen 2006 ; Meyrick 2006 ; Chataway, Smith and Wield 2007 ; Stige, Malterud and Midtgarden 2009 ; Hellstrom 2011 ; Lang et al. 2012 ; Wickson and Carew 2014 ). This is particularly true in large, complex, problem-oriented research projects. Ongoing monitoring of the research organization and process contributes to learning and adaptive management while research is underway and so helps improve quality. As stated by Wickson and Carew ( 2014 : 262): ‘We believe that in any process of interpreting, rearranging and/or applying these criteria, open negotiation on their meaning and application would only positively foster transformative learning, which is a valued outcome of good TD processes’.

3.3 TDR quality criteria and assessment approaches

Many of the papers provide quality criteria and/or describe constituent parts of quality. Aagaard-Hansen and Svedin (2009) define three key aspects of quality: societal relevance, impact, and integration. Meyrick (2006) states that quality research is transparent and systematic. Boaz and Ashby (2003) describe quality in four dimensions: methodological quality, quality of reporting, appropriateness of methods, and relevance to policy and practice. Although each article deconstructs quality in different ways and with different foci and perspectives, there is significant overlap and recurring themes in the papers reviewed. There is a broadly shared perspective that TDR quality is a multidimensional concept shaped by the specific context within which research is done ( Spaapen, Dijstelbloem and Wamelink 2007 ; Klein 2008 ), making a universal definition of TDR quality difficult or impossible ( Huutoniemi 2010 ).

Huutoniemi (2010) identifies three main approaches to conceptualizing quality in IDR and TDR: (1) using existing disciplinary standards adapted as necessary for IDR; (2) building on the quality standards of disciplines while fundamentally incorporating ways to deal with epistemological integration, problem focus, context, stakeholders, and process; and (3) radical departure from any disciplinary orientation in favor of external, emergent, context-dependent quality criteria that are defined and enacted collaboratively by a community of users.

The first approach is prominent in current research funding and evaluation protocols. Conservative approaches of this kind are criticized for privileging disciplinary research and for failing to provide guidance and quality control for transdisciplinary projects. The third approach would ‘undermine the prevailing status of disciplinary standards in the pursuit of a non-disciplinary, integrated knowledge system’ ( Huutoniemi 2010 : 313). No predetermined quality criteria are offered, only contextually embedded criteria that need to be developed within a specific research project. To some extent, this is the approach taken by Spaapen, Dijstelbloem and Wamelink (2007) and de Jong et al. (2011) . Such a sui generis approach cannot be used to compare across projects. Most of the reviewed papers take the second approach, and recommend TDR quality criteria that build on a disciplinary base.

Eight articles present comprehensive frameworks for quality evaluation, each with a unique approach, perspective, and goal. Two of these build comprehensive lists of criteria with associated questions to be chosen based on the needs of the particular research project (Defila and Di Giulio 1999; Bergmann et al. 2005). Wickson and Carew (2014) develop a reflective heuristic tool with questions to guide researchers through ongoing self-evaluation; they also list criteria for external evaluation and for comparison between projects. Spaapen, Dijstelbloem and Wamelink (2007) design an approach that evaluates a research project against its own goals and is not meant to compare between projects. Wickson and Carew (2014) also developed a comprehensive rubric for the evaluation of Research and Innovation that builds on their extensive previous work in TDR. Finally, Lang et al. (2012), Mitchell and Willets (2009), and Jahn and Keil (2015) develop criteria checklists that can be applied across transdisciplinary projects.

Bergmann et al. (2005) and Carew and Wickson (2010) organize their frameworks around managerial elements of the research project, concerning problem context, participation, management, and outcomes. Lang et al. (2012) and Defila and Di Giulio (1999) focus on the chronological stages in the research process and identify criteria at each stage. Mitchell and Willets (2009), with a focus on doctoral studies, adapt standard dissertation evaluation criteria to accommodate broader, pluralistic, and more complex studies. Spaapen, Dijstelbloem and Wamelink (2007) focus on evaluating ‘research-in-context’. Wickson and Carew (2014) create a rubric based on criteria that span the research process, its stages, and all included actors. Jahn and Keil (2015) organize their quality criteria into three categories: quality of the research problems, quality of the research process, and quality of the research results.

The remaining papers highlight key themes that must be considered in TDR evaluation. Dominant themes include: engagement with problem context, collaboration and inclusion of stakeholders, heightened need for explicit communication and reflection, integration of epistemologies, recognition of diverse outputs, the focus on having an impact, and reflexivity and adaptation throughout the process. The focus on societal problems in context and the increased engagement of stakeholders in the research process introduces higher levels of complexity that cannot be accommodated by disciplinary standards ( Defila and Di Giulio 1999 ; Bergmann et al. 2005 ; Wickson, Carew and Russell 2006 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Klein 2008 ).

Finally, authors discuss process (Defila and Di Giulio 1999; Bergmann et al. 2005; Boix-Mansilla 2006b; Spaapen, Dijstelbloem and Wamelink 2007) and utilitarian values (Hemlin 2006; Erno-Kjolhede and Hansson 2011; Bornmann 2013) as essential aspects of quality in TDR. Common themes include: (1) the importance of formative and process-oriented evaluation (Bergmann et al. 2005; Hemlin 2006; Stige 2009); (2) emphasis on the evaluation process itself (not just criteria or outcomes) and reflexive dialog for learning (Bergmann et al. 2005; Boix-Mansilla 2006b; Klein 2008; Oberg 2008; Stige, Malterud and Midtgarden 2009; Aagaard-Hansen and Svedin 2009; Carew and Wickson 2010; Huutoniemi 2010); (3) the need for peers who are experienced and knowledgeable about TDR for fair peer review (Boix-Mansilla 2006a, b; Klein 2006; Hemlin 2006; Scott 2007; Aagaard-Hansen and Svedin 2009); (4) the inclusion of stakeholders in the evaluation process (Bergmann et al. 2005; Scott 2007; Andrén 2010); and (5) the importance of evaluations that are built in-context (Defila and Di Giulio 1999; Feller 2006; Spaapen, Dijstelbloem and Wamelink 2007; de Jong et al. 2011).

While each reviewed approach offers helpful insights, none adequately fulfills the need for a broad and adaptable framework for assessing TDR quality. Wickson and Carew ( 2014 : 257) highlight the need for quality criteria that achieve balance between ‘comprehensiveness and over-prescription’: ‘any emerging quality criteria need to be concrete enough to provide real guidance but flexible enough to adapt to the specificities of varying contexts’. Based on our experience, such a framework should be:

Comprehensive: It should accommodate the main aspects of TDR, as identified in the review.

Time/phase adaptable: It should be applicable across the project cycle.

Scalable: It should be useful for projects of different scales.

Versatile: It should be useful to researchers and collaborators as a guide to research design and management, and to internal and external reviews and assessors.

Comparable: It should allow comparison of quality between and across projects/programs.

Reflexive: It should encourage and facilitate self-reflection and adaptation based on ongoing learning.

4. Synthesis

In this section, we synthesize the key principles and criteria of quality in TDR that were identified in the reviewed literature. Principles are the essential elements of high-quality TDR. Criteria are the conditions that need to be met in order to achieve a principle. We conclude by providing a framework for the evaluation of quality in TDR (Table 3) and guidance for its application.

Table 3. Transdisciplinary research quality assessment framework

a Research problems are the particular topic, area of concern, question to be addressed, challenge, opportunity, or focus of the research activity. Research problems are related to the societal problem but take on a specific focus, or framing, within a societal problem.

b Problem context refers to the social and environmental setting(s) that gives rise to the research problem, including aspects of: location; culture; scale in time and space; social, political, economic, and ecological/environmental conditions; resources and societal capacity available; uncertainty, complexity, and novelty associated with the societal problem; and the extent of agency that is held by stakeholders ( Carew and Wickson 2010 ).

c Words such as ‘appropriate’, ‘suitable’, and ‘adequate’ are used deliberately to allow for quality criteria to be flexible and specific enough to the needs of individual research projects ( Oberg 2008 ).

d Research process refers to the series of decisions made and actions taken throughout the entire duration of the research project and encompassing all aspects of the research project.

e Reflexivity refers to an iterative process of formative, critical reflection on the important interactions and relationships between a research project’s process, context, and product(s).

f In an ex ante evaluation, ‘evidence of’ would be replaced with ‘potential for’.

4.1 Principles of TDR quality

There is a strong trend in the reviewed articles to recognize the need for appropriate measures of scientific quality (usually adapted from disciplinary antecedents), but also to consider broader sets of criteria regarding the societal significance and applicability of research, and the need for engagement and representation of stakeholder values and knowledge. Cash et al. (2002) nicely conceptualize three key aspects of effective sustainability research: salience (or relevance), credibility, and legitimacy. These are presented as necessary attributes for research to successfully produce transferable, useful information that can cross boundaries between disciplines, across scales, and between science and society. Many of the papers also refer to the principle that high-quality TDR should be effective in terms of contributing to the solution of problems. These four principles are discussed in the following sections.

4.1.1 Relevance

Relevance is the importance, significance, and usefulness of the research project's objectives, process, and findings to the problem context and to society. This includes the appropriateness of the timing of the research, the questions being asked, the outputs, and the scale of the research in relation to the societal problem being addressed. Good-quality TDR addresses important social/environmental problems and produces knowledge that is useful for decision making and problem solving ( Cash et al. 2002 ; Klein 2006 ). As Erno-Kjolhede and Hansson ( 2011 : 140) explain, quality ‘is first and foremost about creating results that are applicable and relevant for the users of the research’. Researchers must demonstrate an in-depth knowledge of and ongoing engagement with the problem context in which their research takes place ( Wickson, Carew and Russell 2006 ; Stige, Malterud and Midtgarden 2009 ; Mitchell and Willets 2009 ). From the early steps of problem formulation and research design through to the appropriate and effective communication of research findings, the applicability and relevance of the research to the societal problem must be explicitly stated and incorporated.

4.1.2 Credibility

Credibility refers to whether or not the research findings are robust and the knowledge produced is scientifically trustworthy. This includes clear demonstration that the data are adequate, with well-presented methods and logical interpretations of findings. High-quality research is authoritative, transparent, defensible, believable, and rigorous. This is the traditional purview of science, and traditional disciplinary criteria can be applied in TDR evaluation to an extent. Additional and modified criteria are needed to address the integration of epistemologies and methodologies and the development of novel methods through collaboration, the broad preparation and competencies required to carry out the research, and the need for reflection and adaptation when operating in complex systems. Having researchers actively engaged in the problem context and including extra-scientific actors as part of the research process helps to achieve relevance and legitimacy of the research; it also adds complexity and heightened requirements of transparency, reflection, and reflexivity to ensure objective, credible research is carried out.

Active reflexivity is a criterion of credibility of TDR that may seem to contradict more rigid disciplinary methodological traditions (Carew and Wickson 2010). Practitioners of TDR recognize that credible work in these problem-oriented fields requires active reflexivity, epitomized by ongoing learning, flexibility, and adaptation to ensure the research approach and objectives remain relevant and fit for purpose (Lincoln 1995; Bergmann et al. 2005; Wickson, Carew and Russell 2006; Mitchell and Willets 2009; Andrén 2010; Carew and Wickson 2010; Wickson and Carew 2014). Changes made during the research process must be justified and reported transparently and explicitly to maintain credibility.

The need for critical reflection on potential bias and limitations becomes more important to maintain credibility of research-in-context ( Lincoln 1995 ; Bergmann et al. 2005 ; Mitchell and Willets 2009 ; Stige, Malterud and Midtgarden 2009 ). Transdisciplinary researchers must ensure they maintain a high level of objectivity and transparency while actively engaging in the problem context. This point demonstrates the fine balance between different aspects of quality, in this case relevance and credibility, and the need to be aware of tensions and to seek complementarities ( Cash et al. 2002 ).

4.1.3 Legitimacy

Legitimacy refers to whether the research process is perceived as fair and ethical by end-users. In other words, is it acceptable and trustworthy in the eyes of those who will use it? This requires the appropriate inclusion and consideration of diverse values and interests, and the ethical and fair representation of all involved. Legitimacy may be achieved in part through the genuine inclusion of stakeholders in the research process. Whereas credibility refers to technical aspects of sound research, legitimacy deals with sociopolitical aspects of the knowledge production process and products of research. Do stakeholders trust the researchers and the research process, including funding sources and other sources of potential bias? Do they feel represented? Legitimate TDR ‘considers appropriate values, concerns, and perspectives of different actors’ (Cash et al. 2002: 2) and incorporates these perspectives into the research process through collaboration and mutual learning (Bergmann et al. 2005; Chataway, Smith and Wield 2007; Andrén 2010; Huutoniemi 2010). A fair and ethical process is important to uphold standards of quality in all research. However, there are additional considerations that are unique to TDR.

Because TDR happens in-context and often in collaboration with societal actors, the disclosure of researcher perspective and a transparent statement of all partnerships, financing, and collaboration are vital to ensure an unbiased research process (Lincoln 1995; Defila and Di Giulio 1999; Boaz and Ashby 2003; Barker and Pistrang 2005; Bergmann et al. 2005). The disclosure of perspective has both internal and external aspects: on one hand, it ensures that the researchers themselves explicitly reflect on and account for their own position, potential sources of bias, and limitations throughout the process; on the other hand, it makes the process transparent to those external to the research group, who can then judge its legitimacy based on their perspective of fairness (Cash et al. 2002).

TDR includes the engagement of societal actors along a continuum of participation from consultation to co-creation of knowledge (Brandt et al. 2013). Regardless of the depth of participation, all processes that engage societal actors must ensure that inclusion/engagement is genuine, roles are explicit, and processes for effective and fair collaboration are present (Bergmann et al. 2005; Wickson, Carew and Russell 2006; Spaapen, Dijstelbloem and Wamelink 2007; Hellstrom 2012). Important considerations include: the accurate representation of those involved; explicit and agreed-upon roles and contributions of actors; and adequate planning and procedures to ensure all values, perspectives, and contexts are adequately and appropriately incorporated. Mitchell and Willets (2009) consider cultural competence a key criterion that can support researchers in navigating diverse epistemological perspectives. This is similar to what Morrow (2005) terms ‘social validity’, a criterion that asks researchers to be responsive to and critically aware of the diversity of perspectives and cultures influenced by their research. Several authors highlight that in order to develop this critical awareness of the diversity of cultural paradigms that operate within a problem situation, researchers should practice responsive, critical, and/or communal reflection (Bergmann et al. 2005; Wickson, Carew and Russell 2006; Mitchell and Willets 2009; Carew and Wickson 2010). Reflection and adaptation are important quality criteria that cut across multiple principles and facilitate learning throughout the process, which is a key foundation of TD inquiry.

4.1.4 Effectiveness

We define effective research as research that contributes to positive change in the social, economic, and/or environmental problem context. Transdisciplinary inquiry is rooted in the objective of solving real-world problems (Klein 2008; Carew and Wickson 2010) and must have the potential to (ex ante) or actually (ex post) make a difference if it is to be considered of high quality (Erno-Kjolhede and Hansson 2011). Potential research effectiveness can be indicated and assessed at the proposal stage and during the research process through: a clear and stated intention to address and contribute to a societal problem, the establishment of the research process and objectives in relation to the problem context, and continuous reflection on the usefulness of the research findings and products to the problem (Bergmann et al. 2005; Lahtinen et al. 2005; de Jong et al. 2011).

Assessing research effectiveness ex post remains a major challenge, especially in complex transdisciplinary approaches. Conventional and widely used measures of ‘scientific impact’ count outputs such as journal articles and other publications and citations of those outputs (e.g. H index; i10 index). While these are useful indicators of scholarly influence, they are insufficient and inappropriate measures of research effectiveness where research aims to contribute to social learning and change. We need to also (or alternatively) focus on other kinds of research and scholarship outputs and outcomes and the social, economic, and environmental impacts that may result.

For many authors, contributing to learning and building of societal capacity are central goals of TDR ( Defila and Di Giulio 1999 ; Spaapen, Dijstelbloem and Wamelink 2007 ; Carew and Wickson 2010 ; Erno-Kjolhede and Hansson 2011 ; Hellstrom 2011 ), and so are considered part of TDR effectiveness. Learning can be characterized as changes in knowledge, attitudes, or skills and can be assessed directly, or through observed behavioral changes and network and relationship development. Some evaluation methodologies (e.g. Outcome Mapping ( Earl, Carden and Smutylo 2001 )) specifically measure these kinds of changes. Other evaluation methodologies consider the role of research within complex systems and assess effectiveness in terms of contributions to changes in policy and practice and resulting social, economic, and environmental benefits ( ODI 2004 , 2012 ; White and Phillips 2012 ; Mayne et al. 2013 ).

4.2 TDR quality criteria

TDR quality criteria and their definitions (explicit or implicit) were extracted from each article and summarized in an Excel database. These criteria were classified into themes corresponding to the four principles identified above, sorted and refined to develop sets of criteria that are comprehensive, mutually exclusive, and representative of the ideas presented in the reviewed articles. Within each principle, the criteria are organized roughly in the sequence of a typical project cycle (e.g. with research design following problem identification and preceding implementation). Definitions of each criterion were developed to reflect the concepts found in the literature, tested and refined iteratively to improve clarity. Rubric statements were formulated based on the literature and our own experience.

The complete set of principles, criteria, and definitions is presented as the TDR Quality Assessment Framework ( Table 3 ).
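Table 3 itself is not reproduced in this excerpt, but its shape can be sketched as nested data: each principle contains criteria, and each criterion carries a rubric statement. The entries below are illustrative stand-ins paraphrased from the prose in Section 4.1, not the actual Table 3 rows.

```python
# Illustrative shape of the framework: principle -> criterion -> rubric
# statement. Entries are paraphrased from the prose above, not Table 3.

FRAMEWORK = {
    "relevance": {
        "engagement with problem context":
            "The project demonstrates in-depth knowledge of, and ongoing "
            "engagement with, the problem context.",
    },
    "credibility": {
        "active reflexivity":
            "Changes made during the research process are justified and "
            "reported transparently and explicitly.",
    },
    "legitimacy": {
        "disclosure of perspective":
            "Researcher perspective, partnerships, financing, and potential "
            "sources of bias are transparently stated.",
    },
    "effectiveness": {
        "contribution to change":
            "There is evidence of (ex ante: potential for) a contribution "
            "to positive change in the problem context.",
    },
}
```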

4.3 Guidance on the application of the framework

4.3.1 Timing

Most criteria can be applied at each stage of the research process (ex ante, mid-term, and ex post), using appropriate interpretations at each stage. Ex ante (i.e. proposal) assessment should focus on a project’s explicitly stated intentions and approaches to address the criteria. Mid-term indicators will focus on the research process and whether or not it is being implemented in a way that will satisfy the criteria. Ex post assessment should consider whether the research has been done appropriately for the purpose and whether the desired results have been achieved.

4.3.2 New meanings for familiar terms

Many of the terms used in the framework are extensions of disciplinary criteria and share the same or similar names, with similar but nuanced meanings. The principles and criteria used here extend beyond their disciplinary antecedents and include new concepts and understandings that encapsulate the unique characteristics and needs of TDR and allow for the definition and evaluation of quality in TDR. This is especially true of the criteria related to credibility. These criteria are analogous to traditional disciplinary criteria, but with much stronger emphasis on grounding in both the scientific and the social/environmental contexts. We urge readers to pay close attention to the definitions provided in Table 3 as well as the detailed descriptions of the principles in Section 4.1.

4.3.3 Using the framework

The TDR quality framework (Table 3) is designed to assess TDR according to a project’s purpose; i.e. the criteria must be interpreted with respect to the context and goals of an individual research activity. The framework lists the main criteria synthesized from the literature and our experience, organized within the principles of relevance, credibility, legitimacy, and effectiveness. The table presents the criteria within each principle, ordered to approximate a typical process of identifying a research problem and designing and implementing research. We recognize that the actual process in any given project will be iterative and will not necessarily follow this sequence, but it provides a logical flow. A concise definition is provided in the second column to explain each criterion. We then provide a rubric statement in the third column, phrased to be applied when the research has been completed. In most cases, the same statement can be used at the proposal stage with a simple tense change or other minor grammatical revision, except for the criteria relating to effectiveness. As discussed above, assessing effectiveness in terms of outcomes and/or impact requires evaluation research; at the proposal stage, it is only possible to assess potential effectiveness.

Many rubrics offer a set of statements for each criterion that represent progressively higher levels of achievement; the evaluator is asked to select the best match. In practice, this often results in vague and relative statements of merit that are difficult to apply. We have opted to present a single rubric statement in absolute terms for each criterion. The assessor can then rank how well a project satisfies each criterion using a simple three-point Likert scale. If a project fully satisfies a criterion—that is, if there is evidence that the criterion has been addressed in a way that is coherent, explicit, sufficient, and convincing—it should be ranked as a 2 for that criterion. A score of 2 means that the evaluator is persuaded that the project addressed that criterion in an intentional, appropriate, explicit, and thorough way. A score of 1 would be given when there is some evidence that the criterion was considered, but it is lacking completion, intention, and/or is not addressed satisfactorily. For example, a score of 1 would be given when a criterion is explicitly discussed but poorly addressed, or when there is some indication that the criterion has been considered and partially addressed but it has not been treated explicitly, thoroughly, or adequately. A score of 0 indicates that there is no evidence that the criterion was addressed or that it was addressed in a way that was misguided or inappropriate.
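As a worked sketch of this scoring scheme (reusing the illustrative FRAMEWORK keys above): the 0/1/2 scale and its interpretation come from the text, while the per-principle totals are our own convenience for comparison, not something the framework prescribes.

```python
# 0 = no evidence or misguided; 1 = partially satisfies; 2 = fully satisfies.

def summarize(scores: dict) -> dict:
    """scores: principle -> {criterion: 0|1|2}. Returns per-principle totals."""
    return {principle: f"{sum(marks.values())}/{2 * len(marks)}"
            for principle, marks in scores.items()}

example = {
    "relevance":     {"engagement with problem context": 2},
    "credibility":   {"active reflexivity": 1},
    "legitimacy":    {"disclosure of perspective": 2},
    "effectiveness": {"contribution to change": 0},
}
print(summarize(example))
# {'relevance': '2/2', 'credibility': '1/2', 'legitimacy': '2/2', 'effectiveness': '0/2'}
```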

It is critical that the evaluation be done in context, keeping in mind the purpose, objectives, and resources of the project, as well as other contextual information, such as the intended purpose of grant funding or relevant partnerships. Each project will be unique in its complexities; what is sufficient or adequate in one criterion for one research project may be insufficient or inappropriate for another. Words such as ‘appropriate’, ‘suitable’, and ‘adequate’ are used deliberately to encourage application of criteria to suit the needs of individual research projects (Oberg 2008). Evaluators must consider the objectives of the research project and the problem context within which it is carried out as the benchmark for evaluation. For example, we tested the framework with RRU master’s theses. These are typically small projects with limited scope, carried out by a single researcher. Expectations for ‘effective communication’ or ‘competencies’ or ‘effective collaboration’ are much different in these kinds of projects than in a multi-year, multi-partner CIFOR project. All criteria should be evaluated through the lens of the stated research objectives, research goals, and context.

5. Conclusions

The systematic review identified relevant articles from a diverse literature that nonetheless share a strong central focus. Collectively, they highlight the complexity of contemporary social and environmental problems and emphasize that addressing such issues requires combinations of new knowledge and innovation, action, and engagement. Traditional disciplinary research has often failed to provide solutions because it cannot adequately cope with complexity. New forms of research are proliferating, crossing disciplinary and academic boundaries, integrating methodologies, and engaging a broader range of research participants, as a way to make research more relevant and effective. Theoretically, such approaches appear to offer great potential to contribute to transformative change. However, because these approaches are new and because they are multidimensional, complex, and often unique, it has been difficult to know what works, how, and why. In the absence of the kinds of methodological and quality standards that guide disciplinary research, there are no generally agreed criteria for evaluating such research.

Criteria are needed to guide and to help ensure that TDR is of high quality, to inform the teaching and learning of new researchers, and to encourage and support the further development of transdisciplinary approaches. The lack of a standard and broadly applicable framework for the evaluation of quality in TDR is perceived to cause an implicit or explicit devaluation of high-quality TDR or may prevent quality TDR from being done. There is a demonstrated need for an operationalized understanding of quality that addresses the characteristics, contributions, and challenges of TDR. The reviewed articles approach the topic from different perspectives and fields of study, using different terminology for similar concepts, or the same terminology for different concepts, and with unique ways of organizing and categorizing the dimensions and quality criteria. We have synthesized and organized these concepts as key TDR principles and criteria in a TDR Quality Framework, presented as an evaluation rubric. We have tested the framework on a set of master’s theses and found it to be broadly applicable, usable, and useful for analyzing individual projects and for comparing projects within the set. We anticipate that further testing with a wider range of projects will help further refine and improve the definitions and rubric statements. We found that the three-point Likert scale (0–2) offered sufficient variability for our purposes, and rating is less subjective than with relative rubric statements. It may be possible to increase the rating precision with more points on the scale to increase the sensitivity for comparison purposes, for example in a review of proposals for a particular grant application.

Many of the articles we reviewed emphasize the importance of the evaluation process itself. The formative, developmental role of evaluation in TDR is seen as essential to the goals of mutual learning as well as to ensure that research remains responsive and adaptive to the problem context. In order to adequately evaluate quality in TDR, the process, including who carries out the evaluations, when, and in what manner, must be revised to be suitable to the unique characteristics and objectives of TDR. We offer this review and synthesis, along with a proposed TDR quality evaluation framework, as a contribution to an important conversation. We hope that it will be useful to researchers and research managers to help guide research design, implementation and reporting, and to the community of research organizations, funders, and society at large. As underscored in the literature review, there is a need for an adapted research evaluation process that will help advance problem-oriented research in complex systems, ultimately to improve research effectiveness.

This work was supported by funding from the Canada Research Chairs program. Funding support from the Canadian Social Sciences and Humanities Research Council (SSHRC) and technical support from the Evidence Based Forestry Initiative of the Centre for International Forestry Research (CIFOR), funded by UK DfID are also gratefully acknowledged.

Supplementary data are available online.

The authors thank Barbara Livoreil and Stephen Dovers for valuable comments and suggestions on the protocol and Gillian Petrokofsky for her review of the protocol and a draft version of the manuscript. Two anonymous reviewers and the editor provided insightful critique and suggestions in two rounds that have helped to substantially improve the article.

Conflict of interest statement . None declared.

1. ‘Stakeholders’ refers to individuals and groups of societal actors who have an interest in the issue or problem that the research seeks to address.

2. The terms ‘quality’ and ‘excellence’ are often used in the literature with similar meaning. Technically, ‘excellence’ is a relative concept, referring to the superiority of a thing compared to other things of its kind. Quality is an attribute or a set of attributes of a thing. We are interested in what these attributes are or should be in high-quality research. Therefore, the term ‘quality’ is used in this discussion.

3. The terms ‘science’ and ‘research’ are not always clearly distinguished in the literature. We take the position that ‘science’ is a more restrictive term that is properly applied to systematic investigations using the scientific method. ‘Research’ is a broader term for systematic investigations using a range of methods, including but not restricted to the scientific method. We use the term ‘research’ in this broad sense.

Aagaard-Hansen J., Svedin U. (2009) 'Quality Issues in Cross-disciplinary Research: Towards a Two-pronged Approach to Evaluation', Social Epistemology, 23/2: 165–76. DOI: 10.1080/02691720902992323

Andrén S. (2010) 'A Transdisciplinary, Participatory and Action-Oriented Research Approach: Sounds Nice but What do you Mean?' [unpublished working paper]. Human Ecology Division: Lund University, 1–21. <https://lup.lub.lu.se/search/publication/1744256>

Australian Research Council (ARC) (2012) ERA 2012 Evaluation Handbook: Excellence in Research for Australia. Australia: ARC. <http://www.arc.gov.au/pdf/era12/ERA%202012%20Evaluation%20Handbook_final%20for%20web_protected.pdf>

Balsiger P. W. (2004) 'Supradisciplinary Research Practices: History, Objectives and Rationale', Futures, 36/4: 407–21.

Bantilan M. C. et al. (2004) 'Dealing with Diversity in Scientific Outputs: Implications for International Research Evaluation', Research Evaluation, 13/2: 87–93.

Barker C., Pistrang N. (2005) 'Quality Criteria under Methodological Pluralism: Implications for Conducting and Evaluating Research', American Journal of Community Psychology, 35/3-4: 201–12.

Bergmann M. et al. (2005) Quality Criteria of Transdisciplinary Research: A Guide for the Formative Evaluation of Research Projects. Central report of Evalunet – Evaluation Network for Transdisciplinary Research. Frankfurt am Main, Germany: Institute for Social-Ecological Research. <http://www.isoe.de/ftp/evalunet_guide.pdf>

Boaz A., Ashby D. (2003) Fit for Purpose? Assessing Research Quality for Evidence Based Policy and Practice.

Boix-Mansilla V. (2006a) 'Symptoms of Quality: Assessing Expert Interdisciplinary Work at the Frontier: An Empirical Exploration', Research Evaluation, 15/1: 17–29.

Boix-Mansilla V. (2006b) 'Conference Report: Quality Assessment in Interdisciplinary Research and Education', Research Evaluation, 15/1: 69–74.

Bornmann L. (2013) 'What is Societal Impact of Research and How can it be Assessed? A Literature Survey', Journal of the American Society for Information Science and Technology, 64/2: 217–33.

Brandt P. et al. (2013) 'A Review of Transdisciplinary Research in Sustainability Science', Ecological Economics, 92: 1–15.

Cash D., Clark W. C., Alcock F., Dickson N. M., Eckley N., Jäger J. (2002) Salience, Credibility, Legitimacy and Boundaries: Linking Research, Assessment and Decision Making (November 2002). KSG Working Papers Series RWP02-046. Available at SSRN: <http://ssrn.com/abstract=372280>

Carew A. L., Wickson F. (2010) 'The TD Wheel: A Heuristic to Shape, Support and Evaluate Transdisciplinary Research', Futures, 42/10: 1146–55.

Collaboration for Environmental Evidence (CEE) (2013) Guidelines for Systematic Review and Evidence Synthesis in Environmental Management, Version 4.2. Environmental Evidence. <www.environmentalevidence.org/Documents/Guidelines/Guidelines4.2.pdf>

Chandler J. (2014) Methods Research and Review Development Framework: Policy, Structure, and Process. <http://methods.cochrane.org/projects-developments/research>

Chataway J., Smith J., Wield D. (2007) 'Shaping Scientific Excellence in Agricultural Research', International Journal of Biotechnology, 9/2: 172–87.

Clark W. C., Dickson N. (2003) 'Sustainability Science: The Emerging Research Program', PNAS, 100/14: 8059–61.

Consultative Group on International Agricultural Research (CGIAR) (2011) A Strategy and Results Framework for the CGIAR. <http://library.cgiar.org/bitstream/handle/10947/2608/Strategy_and_Results_Framework.pdf?sequence=4>

Cloete N. (1997) 'Quality: Conceptions, Contestations and Comments', African Regional Consultation Preparatory to the World Conference on Higher Education, Dakar, Senegal, 1-4 April 1997.

Defila R., DiGiulio A. (1999) 'Evaluating Transdisciplinary Research', Panorama: Swiss National Science Foundation Newsletter, 1: 4–27. <www.ikaoe.unibe.ch/forschung/ip/Specialissue.Pano.1.99.pdf>

Donovan C. (2008) 'The Australian Research Quality Framework: A Live Experiment in Capturing the Social, Economic, Environmental, and Cultural Returns of Publicly Funded Research. Reforming the Evaluation of Research', New Directions for Evaluation, 118: 47–60.

Earl S., Carden F., Smutylo T. (2001) Outcome Mapping: Building Learning and Reflection into Development Programs. Ottawa, ON: International Development Research Center.

Ernø-Kjølhede E., Hansson F. (2011) 'Measuring Research Performance during a Changing Relationship between Science and Society', Research Evaluation, 20/2: 130–42.

Feller I. (2006) 'Assessing Quality: Multiple Actors, Multiple Settings, Multiple Criteria: Issues in Assessing Interdisciplinary Research', Research Evaluation, 15/1: 5–15.

Gaziulusoy A. İ., Boyle C. (2013) 'Proposing a Heuristic Reflective Tool for Reviewing Literature in Transdisciplinary Research for Sustainability', Journal of Cleaner Production, 48: 139–47.

Gibbons M. et al. (1994) The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. London: Sage Publications.

Hellstrom T. (2011) 'Homing in on Excellence: Dimensions of Appraisal in Center of Excellence Program Evaluations', Evaluation, 17/2: 117–31.

Hellstrom T. (2012) 'Epistemic Capacity in Research Environments: A Framework for Process Evaluation', Prometheus, 30/4: 395–409.

Hemlin S., Rasmussen S. B. (2006) 'The Shift in Academic Quality Control', Science, Technology & Human Values, 31/2: 173–98.

Hessels L. K., Van Lente H. (2008) 'Re-thinking New Knowledge Production: A Literature Review and a Research Agenda', Research Policy, 37/4: 740–60.

Huutoniemi K. (2010) 'Evaluating Interdisciplinary Research', in Frodeman R., Klein J. T., Mitcham C. (eds) The Oxford Handbook of Interdisciplinarity, pp. 309–20. Oxford: Oxford University Press.

de Jong S. P. L. et al. (2011) 'Evaluation of Research in Context: An Approach and Two Cases', Research Evaluation, 20/1: 61–72.

Jahn T., Keil F. (2015) 'An Actor-Specific Guideline for Quality Assurance in Transdisciplinary Research', Futures, 65: 195–208.

Kates R. (2000) 'Sustainability Science', World Academies Conference Transition to Sustainability in the 21st Century, 18 May 2000, Tokyo, Japan.

Klein J. T. (2006) 'Afterword: The Emergent Literature on Interdisciplinary and Transdisciplinary Research Evaluation', Research Evaluation, 15/1: 75–80.

Klein J. T. (2008) 'Evaluation of Interdisciplinary and Transdisciplinary Research: A Literature Review', American Journal of Preventive Medicine, 35/2 Supplement: S116–23. DOI: 10.1016/j.amepre.2008.05.010

Royal Netherlands Academy of Arts and Sciences, Association of Universities in the Netherlands, Netherlands Organization for Scientific Research (KNAW) (2009) Standard Evaluation Protocol 2009-2015: Protocol for Research Assessment in the Netherlands. Netherlands: KNAW. <www.knaw.nl/sep>

Komiyama H., Takeuchi K. (2006) 'Sustainability Science: Building a New Discipline', Sustainability Science, 1: 1–6.

Lahtinen E. et al. (2005) 'The Development of Quality Criteria for Research: A Finnish Approach', Health Promotion International, 20/3: 306–15.

Lang D. J. et al. (2012) 'Transdisciplinary Research in Sustainability Science: Practice, Principles, and Challenges', Sustainability Science, 7/S1: 25–43.

Lincoln Y. S. (1995) 'Emerging Criteria for Quality in Qualitative and Interpretive Research', Qualitative Inquiry, 1/3: 275–89.

Mayne J., Stern E. (2013) Impact Evaluation of Natural Resource Management Research Programs: A Broader View. Canberra: Australian Centre for International Agricultural Research.

Meyrick J. (2006) 'What is Good Qualitative Research? A First Step Towards a Comprehensive Approach to Judging Rigour/Quality', Journal of Health Psychology, 11/5: 799–808.

Mitchell C. A., Willetts J. R. (2009) 'Quality Criteria for Inter- and Trans-Disciplinary Doctoral Research Outcomes', prepared for ALTC Fellowship: Zen and the Art of Transdisciplinary Postgraduate Studies. Sydney: Institute for Sustainable Futures, University of Technology.

Morrow S. L. (2005) 'Quality and Trustworthiness in Qualitative Research in Counseling Psychology', Journal of Counseling Psychology, 52/2: 250–60.

Nowotny H., Scott P., Gibbons M. (2001) Re-Thinking Science. Cambridge: Polity.

Nowotny H., Scott P., Gibbons M. (2003) ''Mode 2' Revisited: The New Production of Knowledge', Minerva, 41: 179–94.

Öberg G. (2008) 'Facilitating Interdisciplinary Work: Using Quality Assessment to Create Common Ground', Higher Education, 57/4: 405–15.

Ozga J. (2007) 'Co-production of Quality in the Applied Education Research Scheme', Research Papers in Education, 22/2: 169–81.

Ozga J. (2008) 'Governing Knowledge: Research Steering and Research Quality', European Educational Research Journal, 7/3: 261–72.

OECD (2012) Frascati Manual, 6th edn. <http://www.oecd.org/innovation/inno/frascatimanualproposedstandardpracticeforsurveysonresearchandexperimentaldevelopment6thedition>

Overseas Development Institute (ODI) (2004) 'Bridging Research and Policy in International Development: An Analytical and Practical Framework', ODI Briefing Paper. <http://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/198.pdf>

Overseas Development Institute (ODI) (2012) RAPID Outcome Assessment Guide. <http://www.odi.org/sites/odi.org.uk/files/odi-assets/publications-opinion-files/7815.pdf>

Pullin A. S., Stewart G. B. (2006) 'Guidelines for Systematic Review in Conservation and Environmental Management', Conservation Biology, 20/6: 1647–56.

Research Excellence Framework (REF) (2011) Research Excellence Framework 2014: Assessment Framework and Guidance on Submissions. Reference REF 02.2011. UK: REF. <http://www.ref.ac.uk/pubs/2011-02/>

Scott A. (2007) 'Peer Review and the Relevance of Science', Futures, 39/7: 827–45.

Spaapen J., Dijstelbloem H., Wamelink F. (2007) Evaluating Research in Context: A Method for Comprehensive Assessment. Netherlands: Consultative Committee of Sector Councils for Research and Development. <http://www.qs.univie.ac.at/fileadmin/user_upload/qualitaetssicherung/PDF/Weitere_Aktivit%C3%A4ten/Eric.pdf>

Spaapen J., Van Drooge L. (2011) 'Introducing "Productive Interactions" in Social Impact Assessment', Research Evaluation, 20: 211–18.

Stige B., Malterud K., Midtgarden T. (2009) 'Toward an Agenda for Evaluation of Qualitative Research', Qualitative Health Research, 19/10: 1504–16.

td-net (2014) td-net. <www.transdisciplinarity.ch/e/Bibliography/new.php>

Tertiary Education Commission (TEC) (2012) Performance-based Research Fund: Quality Evaluation Guidelines 2012. New Zealand: TEC. <http://www.tec.govt.nz/Documents/Publications/PBRF-Quality-Evaluation-Guidelines-2012.pdf>

Tijssen R. J. W. (2003) 'Quality Assurance: Scoreboards of Research Excellence', Research Evaluation, 12: 91–103.

White H., Phillips D. (2012) 'Addressing Attribution of Cause and Effect in Small n Impact Evaluations: Towards an Integrated Framework', Working Paper 15. New Delhi: International Initiative for Impact Evaluation.

Wickson F., Carew A. (2014) 'Quality Criteria and Indicators for Responsible Research and Innovation: Learning from Transdisciplinarity', Journal of Responsible Innovation, 1/3: 254–73.

Wickson F., Carew A., Russell A. W. (2006) 'Transdisciplinary Research: Characteristics, Quandaries and Quality', Futures, 38/9: 1046–59.


Assessing the quality of research

Paul Glasziou (Department of Primary Health Care, University of Oxford, Oxford OX3 7LF), Jan Vandenbroucke (Leiden University Medical School, Leiden 9600 RC, Netherlands), Iain Chalmers (James Lind Initiative, Oxford OX2 7LG)

BMJ, Volume 328, Issue 7430, 3 January 2004


Inflexible use of evidence hierarchies confuses practitioners and irritates researchers. So how can we improve the way we assess research?

The widespread use of hierarchies of evidence that grade research studies according to their quality has helped to raise awareness that some forms of evidence are more trustworthy than others. This is clearly desirable. However, the simplifications involved in creating and applying hierarchies have also led to misconceptions and abuses. In particular, criteria designed to guide inferences about the main effects of treatment have been uncritically applied to questions about aetiology, diagnosis, prognosis, or adverse effects. So should we assess evidence the way Michelin guides assess hotels and restaurants? We believe five issues should be considered in any revision or alternative approach to helping practitioners to find reliable answers to important clinical questions.

Different types of question require different types of evidence

Ever since two American social scientists introduced the concept in the early 1960s, 1 hierarchies have been used almost exclusively to determine the effects of interventions. This initial focus was appropriate but has also engendered confusion. Although interventions are central to clinical decision making, practice relies on answers to a wide variety of types of clinical questions, not just the effect of interventions. 2 Other hierarchies might be necessary to answer questions about aetiology, diagnosis, disease frequency, prognosis, and adverse effects. 3 Thus, although a systematic review of randomised trials would be appropriate for answering questions about the main effects of a treatment, it would be ludicrous to attempt to use it to ascertain the relative accuracy of computerised versus human reading of cervical smears, the natural course of prion diseases in humans, the effect of carriership of a mutation on the risk of venous thrombosis, or the rate of vaginal adenocarcinoma in the daughters of pregnant women given diethylstilboestrol. 4

To answer their everyday questions, practitioners need to understand the “indications and contraindications” for different types of research evidence. 5 Randomised trials can give good estimates of treatment effects but poor estimates of overall prognosis; comprehensive non-randomised inception cohort studies with prolonged follow up, however, might provide the reverse.

Systematic reviews of research are always preferred

With rare exceptions, no study, whatever the type, should be interpreted in isolation. Systematic reviews are required of the best available type of study for answering the clinical question posed. 6 A systematic review does not necessarily involve quantitative pooling in a meta-analysis.

Although case reports are a less than perfect source of evidence, they are important in alerting us to potential rare harms or benefits of an effective treatment. 7 Standardised reporting is certainly needed, 8 but too few people know about a study showing that more than half of suspected adverse drug reactions were confirmed by subsequent, more detailed research. 9 For reliable evidence on rare harms, therefore, we need a systematic review of case reports rather than a haphazard selection of them. 10 Qualitative studies can also be incorporated in reviews—for example, the systematic compilation of the reasons for non-compliance with hip protectors derived from qualitative research. 11

Level alone should not be used to grade evidence

The first substantial use of a hierarchy of evidence to grade health research was by the Canadian Task Force on the Periodic Health Examination. 12 Although such systems are preferable to ignoring research evidence or failing to provide justification for selecting particular research reports to support recommendations, they have three big disadvantages. Firstly, the definitions of the levels vary within hierarchies so that level 2 will mean different things to different readers. Secondly, novel or hybrid research designs are not accommodated in these hierarchies—for example, reanalysis of individual data from several studies or case crossover studies within cohorts. Thirdly, and perhaps most importantly, hierarchies can lead to anomalous rankings. For example, a statement about one intervention may be graded level 1 on the basis of a systematic review of a few, small, poor quality randomised trials, whereas a statement about an alternative intervention may be graded level 2 on the basis of one large, well conducted, multicentre, randomised trial.

This ranking problem arises because of the objective of collapsing the multiple dimensions of quality (design, conduct, size, relevance, etc) into a single grade. For example, randomisation is a key methodological feature in research into interventions, 13 but reducing the quality of evidence to a single level reflecting proper randomisation ignores other important dimensions of randomised clinical trials. These might include:

  • Other design elements, such as the validity of measurements and blinding of outcome assessments
  • Quality of the conduct of the study, such as loss to follow up and success of blinding
  • Absolute and relative size of any effects seen
  • Confidence intervals around the point estimates of effects.

None of the current hierarchies of evidence includes all these dimensions, and recent methodological research suggests that it may be difficult for them to do so. 14 Moreover, some dimensions are more important for some clinical problems and outcomes than for others, which necessitates a tailored approach to appraising evidence. 15 Thus, for important recommendations, it may be preferable to present a brief summary of the central evidence (such as “double-blind randomised controlled trials with a high degree of follow up over three years showed that...”), coupled with a brief appraisal of why particular quality dimensions are important. This broader approach to the assessment of evidence applies not only to randomised trials but also to observational studies. In the final recommendations, there will also be a role for other types of scientific evidence—for example, on aetiological and pathophysiological mechanisms—because concordance between theoretical models and the results of empirical investigations will increase confidence in the causal inferences. 16 , 17
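As a sketch of what such a multidimensional summary could look like in structured form, consider the following; the field names are assumptions chosen for illustration, not a standard appraisal instrument, and the example values echo the kind of brief summary suggested above.

```python
# Minimal sketch: record several quality dimensions of a body of evidence
# rather than collapsing them into a single level. Field names and the
# example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class EvidenceSummary:
    question_type: str       # e.g. treatment effect, prognosis, adverse effect
    design: str              # design elements, including blinding of outcomes
    conduct: str             # conduct of the study, e.g. loss to follow up
    effect_size: float       # absolute or relative size of the effect seen
    ci_low: float            # confidence interval around the point estimate
    ci_high: float

    def narrative(self) -> str:
        # Brief textual summary in place of a single grade
        return (f"{self.design} ({self.conduct}) showed an effect of "
                f"{self.effect_size} (95% CI {self.ci_low} to {self.ci_high}).")

summary = EvidenceSummary(
    question_type="treatment effect",
    design="double-blind randomised controlled trials",
    conduct="high degree of follow up over three years",
    effect_size=0.80,
    ci_low=0.70,
    ci_high=0.92,
)
print(summary.narrative())
```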

What to do when systematic reviews are not available

Although hierarchies can be misleading as a grading system, they can help practitioners find the best relevant evidence among a plethora of studies of diverse quality. For example, to answer a therapeutic question, the hierarchy would suggest first looking for a systematic review of randomised controlled trials. However, only a fraction of the hundreds of thousands of reports of randomised trials have been considered for possible inclusion in systematic reviews. 18 So when there is no existing review, a busy clinician might next try to identify the best of several randomised trials. If the search fails to identify any randomised trials, non-randomised cohort studies might be informative. For non-therapeutic questions, however, search strategies should accommodate the need for observational designs that answer questions about aetiology, prognosis, or adverse effects. 19 Whatever evidence is found, this should be clearly described rather than simply assigned to a level. Such considerations have led the authors of the BMJ's Clinical Evidence to use a hierarchy for finding evidence but to forgo grading evidence into levels. Instead, they make explicit the type of evidence on which their conclusions are based.
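Read this way, the hierarchy is a fallback sequence for finding evidence, not a grading scheme. A minimal sketch follows, with a hypothetical find_studies lookup standing in for a real bibliographic database query.

```python
# Minimal sketch of using a hierarchy to find, rather than grade, evidence.
# find_studies is a hypothetical placeholder for a bibliographic search.

def find_studies(question: str, design: str) -> list:
    return []  # placeholder: in practice, query a bibliographic database

def best_available_evidence(question: str, therapeutic: bool):
    if therapeutic:
        preference = [
            "systematic review of randomised controlled trials",
            "randomised controlled trial",
            "non-randomised cohort study",
        ]
    else:
        # aetiology, prognosis, or adverse effects: observational designs
        preference = [
            "systematic review of observational studies",
            "cohort study",
            "case-control study",
            "case report",
        ]
    for design in preference:
        hits = find_studies(question, design)
        if hits:
            # describe the evidence found rather than assigning it a level
            return design, hits
    return "no studies found", []

print(best_available_evidence("Does drug X reduce mortality?", therapeutic=True))
```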

Balanced assessments should draw on a variety of types of research

For interventions, the best available evidence for each outcome of potential importance to patients is needed. 20 Often this will require systematic reviews of several different types of study. As an example, consider a woman interested in oral contraceptives. Evidence is available from controlled trials showing their contraceptive effectiveness. Although contraception is the main intended beneficial effect, some women will also be interested in the effects of oral contraceptives on acne or dysmenorrhoea. These may have been assessed in short term randomised controlled trials comparing different contraceptives. Any beneficial intended effect needs to be weighed against possible harms, such as increases in thromboembolism and breast cancer. The best evidence for such potential harms is likely to come from non-randomised cohort studies or case-control studies. For example, fears about negative consequences on fertility after long term use of oral contraceptives were allayed by such non-randomised studies. The figure gives an example of how all this information might be amalgamated into a balance sheet. 21 , 22

[Figure: Example of possible evidence table for short and long term effects of oral contraceptives. Absolute effects will vary with age and other risk factors such as smoking and blood pressure. RCT = randomised controlled trial]
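In structured form, such a balance sheet is a per-outcome listing of the best available evidence and the direction of effect. The sketch below restates the oral contraceptive example; the entries paraphrase the prose above, not the original figure.

```python
# Minimal sketch of an evidence balance sheet: for each outcome of interest,
# the best available study type and the direction of effect. Entries
# paraphrase the text's oral contraceptive example, not the original figure.

balance_sheet = [
    # (outcome, best available evidence, direction of effect)
    ("contraceptive effectiveness", "randomised controlled trials", "benefit"),
    ("acne and dysmenorrhoea", "short term randomised controlled trials", "benefit"),
    ("thromboembolism", "non-randomised cohort and case-control studies", "harm"),
    ("breast cancer", "non-randomised cohort and case-control studies", "harm"),
    ("long term fertility", "non-randomised studies", "no adverse effect"),
]

for outcome, evidence, direction in balance_sheet:
    print(f"{outcome:28s} {direction:18s} ({evidence})")
```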

Sometimes, rare, dramatic adverse effects detected with case reports or case control studies prompt further investigation and follow up of existing randomised cohorts to detect related but less severe adverse effects. For example, the case reports and case-control studies showing that intrauterine exposure to diethylstilboestrol could cause vaginal adenocarcinoma led to further investigation and follow up of the mothers and children (male as well as female) who had participated in the relevant randomised trials. These investigations showed several less serious but more frequent adverse effects of diethylstilboestrol that would have otherwise been difficult to detect. 4

Conclusions

Given the flaws in evidence hierarchies that we have described, how should we proceed? We suggest that there are two broad options: firstly, to extend, improve, and standardise current evidence hierarchies 22 ; and, secondly, to abolish the notion of evidence hierarchies and levels of evidence, and concentrate instead on teaching practitioners general principles of research so that they can use these principles to appraise the quality and relevance of particular studies. 5

We have been unable to reach a consensus on which of these approaches is likely to serve the current needs of practitioners more effectively. Practitioners who seek immediate answers cannot embark on a systematic review every time a new question arises in their practice. Clinical guidelines are increasingly prepared professionally—for example, by organisations of general practitioners and of specialist physicians or the NHS National Institute for Clinical Excellence—and this work draws on the results of systematic reviews of research evidence. Such organisations might find it useful to reconsider their approach to evidence and broaden the type of problems that they examine, especially when they need to balance risks and benefits. Most importantly, however, the practitioners who use their products should understand the approach used and be able to judge easily whether a review or a guideline has been prepared reliably.

Evidence hierarchies with the randomised trial at the apex have been pivotal in the ascendancy of numerical reasoning in medicine over the past quarter century. 17 Now that this principle is widely appreciated, however, we believe that it is time to broaden the scope by which evidence is assessed, so that the principles of other types of research, addressing questions on aetiology, diagnosis, prognosis, and unexpected effects of treatment, will become equally widely understood. Indeed, maybe we do have something to learn from Michelin guides: they have separate grading systems for hotels and restaurants, provide the details of the several quality dimensions behind each star rating, and add a qualitative commentary ( www.viamichelin.com ).

Summary points

Different types of research are needed to answer different types of clinical questions

Irrespective of the type of research, systematic reviews are necessary

Adequate grading of quality of evidence goes beyond the categorisation of research design

Risk-benefit assessments should draw on a variety of types of research

Clinicians need efficient search strategies for identifying reliable clinical research

Supplementary Material

We thank Andy Oxman and Mike Rawlins for helpful suggestions.

Contributors and sources: As a general practitioner, PG uses his own and others' evidence assessments, and as a teacher of evidence based medicine helps others find and appraise research. JV is an internist and epidemiologist by training; he has extensively collaborated in clinical research, which made him strongly aware of the diverse types of evidence that clinicians use and need. IC's interest in these issues arose from witnessing the harm done to patients from eminence based medicine.

Competing interests: None declared.

Quality assurance of qualitative research: a review of the discourse

Joanna Reynolds, James Kizito, Nkoli Ezumah, Peter Mangesho, Elizabeth Allen and Clare Chandler

Health Research Policy and Systems, Volume 9, Article number 43 (2011). Open access. Published: 19 December 2011


Background

Increasing demand for qualitative research within global health has emerged alongside increasing demand for demonstration of quality of research, in line with the evidence-based model of medicine. In quantitative health sciences research, in particular clinical trials, there exist clear and widely recognised guidelines for conducting quality assurance of research. However, no comparable guidelines exist for qualitative research, and although there are long-standing debates on what constitutes 'quality' in qualitative research, the concept of 'quality assurance' has not been explored widely. In acknowledgement of this gap, we sought to review discourses around quality assurance of qualitative research, as a first step towards developing guidance.

Methods

A range of databases, journals and grey literature sources were searched, and papers were included if they explicitly addressed quality assurance within a qualitative paradigm. A meta-narrative approach was used to review and synthesise the literature.

Results

Among the 37 papers included in the review, two dominant narratives were interpreted from the literature, reflecting contrasting approaches to quality assurance. The first focuses on demonstrating quality within research outputs; the second focuses on principles for quality practice throughout the research process. The second narrative appears to offer an approach to quality assurance that befits the values of qualitative research, emphasising the need to consider quality throughout the research process.

Conclusions

The paper identifies the strengths of the approaches represented in each narrative and recommends that these be brought together in the development of a flexible framework to help qualitative researchers to define, apply and demonstrate principles of quality in their research.

The global health movement is increasingly calling for qualitative research to accompany its projects and programmes [ 1 ]. This demand, and the funding that goes with it, has led to critical debates among qualitative researchers, particularly over their role as applied or theoretical researchers [ 2 ]. An additional challenge emanating from this demand is to justify research findings and methodological rigour in terms that are meaningful and useful to global public health practitioners. A key area that has grown in quantitative health research has been in quality assurance activities, following the social movement towards evidence-based medicine and global public health [ 3 ]. Through the eyes of this movement, the quality of research affects not only the trajectory of academic disciplines but also local and global health policies. Clinical trials researchers and managers have led much of health research into an era of structured standardised procedures that demarcate and assure quality [ 4 , 5 ].

By contrast, disciplines using qualitative research methods have, to date, engaged far less frequently with quality assurance as a concept or set of procedures, and no standardised guidance for assuring quality exists. The lack of a unified approach to assuring quality can prove unhelpful for the qualitative researcher [ 6 , 7 ], particularly when working in the global health arena, where research needs both to withstand external scrutiny and provide confidence in interpretation of results by internal collaborators. Furthermore, past and existing debates on what constitutes 'good' qualitative research have tended to be centred firmly within social science disciplines such as sociology or anthropology, and as such, their language and content may prove difficult to penetrate for the qualitative researcher operating within a multi-disciplinary, and largely positivist, global health environment.

The authors and colleagues within the ACT Consortium [ 8 ] conduct qualitative research that is mostly rooted in anthropology and sociology, to explore the use of antimalarial medicines and intervention trials around antimalarial drug use, within the global health field. Through this work, within the context of clinical trials following Good Clinical Practice (GCP) guidelines [ 4 ], we have identified a number of challenges relating to the demands for evidence of quality and for quality assurance of qualitative research. The quality assurance procedures available for quantitative research, such as GCP training and auditing, are rooted in a positivist epistemology and are not easily translated to the reflexive, subjective nature of qualitative research and the interpretivist-constructionist epistemological position held by many social scientists, including the authors. Experiences of spatial distance between collaborators and those working in remote study field sites have also raised questions around how best to ensure that a qualitative research study is being conducted to high quality standards when the day-to-day research activity is unobservable by collaborators.

In response to the perceived need for the authors' qualitative studies to maintain and demonstrate quality in research processes and outcomes, we sought to identify existing guidance for quality assurance of qualitative research. In the absence of an established unified approach encapsulated in guidance format, we saw the need to review literature addressing the concept and practice of quality assurance of qualitative research, as a precursor to developing suitable guidance.

In this paper, we examine how quality assurance has been conceptualised and defined within qualitative paradigms. The specific objectives of the review were to, firstly, identify literature that expressly addresses the concept of quality assurance of qualitative research, and secondly, to identify common narratives across the existing discourses of quality assurance.

Search strategy

Keywords were identified from a preliminary review of methodological papers and textbooks on qualitative research, reflecting the concepts of 'quality assurance' and 'qualitative research', and all their relevant synonyms. The pool of keywords was augmented and refined iteratively as the search progressed and as the nature of the body of literature became apparent. Five electronic databases (Academic Search Complete, CINAHL Plus, IBSS, Medline and Web of Science) were searched systematically between October and December 2010, using combinations of the following keywords: "quality assurance", "quality assess*", "quality control*", "quality monitor*", "quality manage*", "audit*", "quality", "valid*", "rigo*r", "trustworth*", "legitima*", "authentic*", "strength", "power", "reliabil*", "accura*", "thorough*", "credibil*", "fidelity", "authorit*", "integrity", "value", "worth*", "good*", "excellen*", and "qualitative" AND (research OR inquiry OR approach* OR method* OR paradigm OR epistemolog* OR study). Grey literature was also searched for using Google and the key phrases "quality assurance" AND "qualitative research".
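For illustration, these keyword pools amount to a boolean query. A minimal sketch of composing such a search string follows; the grouping shown is an assumption about how the terms were combined, and actual syntax varies by database interface.

```python
# Minimal sketch of composing the boolean search string from the keyword
# pools above. The grouping is an assumption made for illustration; actual
# syntax varies by database interface.

quality_terms = [
    '"quality assurance"', '"quality assess*"', '"quality control*"',
    '"quality monitor*"', '"quality manage*"', '"audit*"', '"valid*"',
    '"rigo*r"', '"trustworth*"', '"credibil*"',
]
qualitative_terms = [
    "research", "inquiry", "approach*", "method*",
    "paradigm", "epistemolog*", "study",
]

query = "({quality}) AND (qualitative AND ({qualitative}))".format(
    quality=" OR ".join(quality_terms),
    qualitative=" OR ".join(qualitative_terms),
)
print(query)
```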

Several relevant journals (International Journal of Qualitative Methods, International Journal of Social Research Methodology, and Social Science and Medicine) were hand searched for applicable papers using the same keywords. Finally, additional literature, in particular books and book chapters, was identified through snowballing techniques, both backwards by following references of eligible papers and forwards through citation chasing. At the point where no new references were identified from the above techniques, the decision was made to curtail the search and begin reviewing, reflecting the practical and time implications of adopting further search strategies.

Inclusion and exclusion criteria

Inclusion criteria were identified prior to the search, to include:

Methodological discussion papers, books or book chapters addressing qualitative research with explicit focus on issues of assuring quality.

Guidance or training documents (in 'grey literature') addressing quality assurance in qualitative research.

Excluded were:

Publications primarily addressing critical appraisal or evaluation of qualitative research for decision-making, reviews or publication. These topics were considered to be distinct from the activity of quality assurance which occurs before writing up and publication.

Publications focusing only on one or more specific qualitative methods or methodological approaches, for example grounded theory or focus groups; focusing on a single stage of the research process only, for example, data collection; or primarily addressing mixed methods of qualitative and quantitative research. It was agreed by the authors that these method-specific papers would not help inform narratives about the discourse of quality assurance, but may become useful at a later date when developing detailed guidance.

Publications not in the English language.

Review methodology

A meta-narrative approach was chosen for the reviewing and synthesis of the literature. This is a systematic method developed by Greenhalgh et al [ 9 ] to make sense of complex, conflicting and diverse sources of literature, interpreting the over-arching narratives across different research traditions and paradigms [ 9 , 10 ]. Within the meta-narrative approach, literature is mapped in terms of its paradigmatic and philosophical underpinnings, critically appraised and then synthesised by constructing narrative accounts of the contributions made by each perspective to the different dimensions of the topic [ 9 ]. Due to the discursive nature of the literature sought, representing different debates and philosophical traditions, the meta-narrative approach was deemed most appropriate for review and synthesis. A process of evaluating papers according to predefined quality criteria and using methods to minimise bias, as in traditional, Cochrane-style systematic reviewing, was not considered suitable or feasible to achieve the objectives.

Each paper was read twice by JR, summarised and analysed to determine the paper's academic tradition, the debates around quality assurance in qualitative research identified and discussed, the definition(s) used for 'quality' and the values underpinning this, and recommended methods or strategies for assuring quality in qualitative research. At the outset of the review, the authors attempted to identify the epistemological position of each paper and to use it as a category by which to interpret conceptualisations of quality assurance. However, it emerged that fewer than half of the publications explicitly presented their epistemology; consequently, epistemological position was not used in the analytical approach to this review, but rather as contextual information for a paper, where present.

Following the appraisal of each paper individually, the literature was then grouped by academic disciplines, by epistemological position (where evident) and by recommendations. This grouping enabled the authors to identify narratives across the literature, and to interpret these in association with the research question. The narratives were developed thematically, following the same process used when conducting thematic analysis of qualitative data. First, the authors identified key idea units in each of the papers, then considered and grouped these ideas into broader cross-cutting themes and constructs. These themes, together with consideration of the epistemologies of the papers, were then used to develop overarching narratives emerging from the reviewed literature.

Search results

The above search strategy yielded 93 papers, of which 37 fulfilled the inclusion and exclusion criteria on reading the abstracts or introductory passages. Of the 56 papers rejected, 26 were papers specifically focused on the critical evaluation or appraisal of qualitative research for decision-making, reviews or publication. The majority of the others were rejected for focusing solely on guidance for a specific qualitative method or single stage of the research process, such as data analysis. Dates of publication ranged from 1994 to 2010. This relatively short and recent timeframe can perhaps be attributed in part to the recent history of publishing qualitative research within the health sciences. It was not until the mid-1990s that leading medical publications such as the British Medical Journal began including qualitative studies [ 11 , 12 ], reflecting an increasing acknowledgement of the value of qualitative research within the predominant evidence-based medicine model [ 13 , 14 ]. Within evidence-based medicine, the emphasis on assessment of quality of research is strong, and as such, may account for the timeframe in which consideration of assuring quality of qualitative research emerged.

Among the 37 papers accepted for inclusion in the review, a majority, 19, were from the fields of health, medical or nursing research [ 6 , 15 – 32 ]. Eleven papers represented social science in broad terms, but most commonly from a largely sociological perspective [ 33 – 43 ]. Three papers came from education [ 44 – 46 ], two from communication studies [ 47 , 48 ] and one each from family planning [ 49 ] and social policy [ 50 ]. In terms of the types of literature sourced, there were 27 methodological discussion papers, three papers containing methodological discussion with one case study, two editorials, two methodology books, two guidance documents and one paper reporting primary research.

Appraisal of literature

Epistemological positions

In only 10 publications were the authors' epistemological positions clearly identifiable, either explicitly stated or implied in their argument. Of these publications, five represented a postpositivist-realist position [ 16 , 24 , 39 , 44 , 47 ], and five represented an interpretive-constructionist position [ 17 , 21 , 25 , 34 , 38 ]; see Table 1 for further explanation of the authors' use of these terms. Many of the remaining publications appeared to reflect a postpositivist position due to the way in which authors distinguished qualitative research from positivist, quantitative research, and due to the frequent use of terminology derived from Lincoln and Guba's influential postpositivist criteria for quality [ 51 ].

Two strong narratives across the body of literature were interpreted through the review process, plus one other minor narrative.

Narrative 1: quality as assessment of output

A majority of the publications reviewed (n = 22) demonstrated, explicitly or implicitly, an evaluative perspective of quality assurance, linked to assessment of quality by the presence of certain indicators in the research output [ 15 , 16 , 18 – 22 , 24 , 26 , 27 , 30 , 32 , 36 , 39 , 40 , 42 , 44 , 45 , 47 – 50 ]. These publications were characterized by a 'post-hoc' approach whereby quality assurance was framed in terms of demonstrating that particular standards or criteria have been met in the research process. The publications in this narrative typically offered or referred to sets of criteria for research quality, listing specific methods or techniques deemed to be indicators of quality, and the documenting of which in the research output would be assurance of quality [ 15 , 18 – 20 , 24 , 26 , 32 , 39 , 42 , 47 , 48 , 50 ].

Theoretical perspectives of quality

Many of the authors addressing quality of qualitative research from the output perspective drew upon recent debates that juxtapose qualitative and quantitative research in efforts to increase its credibility as an epistemology. Several of the earlier publications from the 1990s discussed the context of an apparent lack of confidence in quality of qualitative research, particularly against the rising prominence of the evidence-based model within health and medical disciplines [ 16 , 19 , 27 ]. This contextual background links into the debate raised in a number of the publications around whether qualitative research should be judged by the same constructs and criteria of quality as quantitative research.

Many publications engaged directly with the discourse of the post-positivist movement of the mid-1980s and early 1990s to develop criteria of quality unique to qualitative research, recognizing that criteria rooted in the positivist tradition were inappropriate for qualitative work [ 18 , 20 , 24 , 26 , 39 , 44 , 47 , 49 , 50 ]. The post-positivist criteria developed by Lincoln and Guba [ 51 ], based around the construct of 'trustworthiness', were referenced frequently and appeared to be the basis upon which a number of authors made their recommendations for improving quality of qualitative research [ 18 , 26 , 39 , 47 , 50 ]. A number of publications explicitly drew on a post-positivist epistemology in their approach to quality of qualitative research, emphasising the need to ensure research presents a 'valid' and 'credible' account of the social reality [ 16 , 18 , 24 , 39 , 44 , 47 ]. In addition, a multitude of other, often rather abstract, constructs denoting quality were identified across the literature contributing to this narrative, including: 'rigour', 'validity', 'credibility', 'reliability', 'accuracy', 'relevance', 'transferability', 'representativeness', 'dependability' and more.

Methods of quality assurance

Checklists of quality criteria, or markers of 'best practice', were common within this output-focused narrative [ 15 , 16 , 19 , 20 , 24 , 32 , 39 , 42 , 47 , 48 ], with arguments for their value centring on a perceived need for standardised methods by which to determine quality in qualitative research [ 20 , 42 , 50 ]. Typically, these checklists comprised specific techniques and methods, the presence of which in qualitative research, was deemed to be an indicator of quality. Among the publications that did not proffer checklists by which to determine quality, methodological techniques signalling quality were also prominent among the authors' recommendations [ 26 , 40 , 44 , 49 ].

A wide range of techniques were referenced across the literature in this narrative as indicators of quality, but common to most publications were recommendations for the use of triangulation, member (or participant) validation of findings, peer review of findings, deviant or negative case analysis and multiple coders of data. Often these techniques were presented in the publications with little explanation of their theoretical underpinnings or in what circumstances they would be appropriate. Furthermore, there was little discussion within the narrative of the quality of these techniques themselves, and how to ensure they are conducted well.

Recognition of limitations

Two of the more recent papers in this review highlight debates of a more fundamental challenge around defining quality, linked to the challenges in defining the qualitative approach itself [ 26 , 32 ]. These papers, and others, reflect upon the plethora of different terminology and methods used in discourse around quality in qualitative research, as well as the numerous different checklists and criteria available to evaluate quality [ 20 , 32 , 40 , 42 ]. Some critique is offered of the inflexibility of fixed lists of criteria by which to determine quality, with authors emphasizing that standards, and the corresponding techniques by which to achieve them, should be selected in accordance with the epistemological position underpinning each research study [ 18 , 20 , 22 , 30 , 32 , 45 ]. However, in much of the literature there is little guidance around how to determine which constructs of quality are most applicable, and how to select the appropriate techniques for its demonstration.

Narrative 2: assuring quality of process

The second narrative identified was less prominent than the first, with fewer publications addressing the assurance of quality in terms of the research process (n = 13). Among these, several explicitly stated the need to consider how to assure quality through the research process, rather than merely evaluating it at output stage [ 6 , 17 , 31 , 33 , 34 , 37 , 38 , 43 ]. The other papers addressed aspects of good qualitative research, or of the good researcher, that could be considered process- rather than output-oriented, without explicitly defining them as quality assurance methods [ 23 , 25 , 35 , 41 , 46 ]. These included process-based methods such as recommending the use of field diaries for on-going self-reflection [ 25 ], and researcher-centred attributes such as an 'underlying methodological awareness' [ 46 ].

Conceptualisations of quality within the literature contributing to this narrative appeared most commonly to reflect a fundamental, internal set of values or principles indicative of the qualitative approach, rather than theoretical constructs such as 'validity' more traditionally linked to the positivist paradigm. These were often presented as principles to be understood and upheld by the research teams throughout the research process, from designing a study, through data collection to analysis and interpretation [ 17 , 31 , 34 , 37 , 38 ]. Six common principles were identified across the narrative: reflexivity of the researcher's position, assumptions and practice; transparency of decisions made and assumptions held; comprehensiveness of approach to the research question; responsibility towards decision-making acknowledged by the researcher; upholding good ethical practice throughout the research; and a systematic approach to designing, conducting and analyzing a study.

Of the four papers in this narrative which explicitly presented an epistemological position, all represented an interpretive/constructionist approach to qualitative research. These principles reflected the prevailing argument in this narrative that unthinking application of techniques or rules of method does not guarantee quality, but rather an understanding of and engagement with the values unique to qualitative paradigms are crucial for conducting quality research [ 6 , 25 , 31 ].

Critique of output-based approach

Within this process-focused narrative emerged a strong theme of critique of the approach to evaluating quality of qualitative research by the research output [ 6 , 17 , 25 , 31 , 33 , 35 , 37 , 38 , 43 , 46 ]. The principal argument underpinning this theme was that judging quality of research by its output does not help assure or manage quality in the process that leads up to it, but rather, the discussion of what constitutes quality should be maintained throughout the research [ 43 , 46 ]. Furthermore, several papers explicitly criticised the use of set criteria or standards against which to determine the quality of qualitative research [ 6 , 34 , 37 , 46 ], arguing that checklists are inappropriate as they may fail to accommodate the subjectivity and creativity of qualitative inquiry. As such, many studies may appear lacking or of poor quality against such criteria [ 46 ].

A number of authors within this narrative argued that checklists can promote the 'uncritical' use of techniques considered indicative of quality research, such as triangulation. Meeting specific criteria may not be a true indication of the quality of the activities or decisions made in the research process [ 37 , 43 ] and methodological techniques become relied upon as "technical fixes" [ 6 ] which do not automatically lead to good research practice or findings. Authors argued that the promotion of such checklists may result in diminished researcher responsibility for their role in assuring quality throughout the research process [ 6 , 25 , 35 , 38 ], leading to a lack of methodological awareness, responsiveness and accountability [ 38 ].

Assuring quality of the research process

A number of activities were identified across this narrative to be used along the course of qualitative research to improve or assure its quality. They included the researcher conducting an audit or decision trail to document all decisions and interpretations made at each stage of the research [ 25 , 33 , 37 ]; on-going dynamic discussion of quality issues among the research team [ 46 ]; and developing reflexive field diaries in which researchers can explore and capture their own assumptions and biases [ 17 ]. Beyond these specific suggestions, however, were only broader, more conceptual recommendations without detailed guidance on exactly how they could be enacted. These included encouraging researchers to embrace their responsibility for decision making [ 38 ], understanding and applying a broad understanding of the rationale and assumptions behind qualitative research [ 6 ], and ensuring that the 'attitude' with which research is conducted, as well as the methods, are appropriate [ 37 ].

Although specific recommendations to assure quality were not present in all papers contributing to this narrative, there were some commonalities across each publication in the form of the principles or values that the authors identified as underpinning good quality qualitative research. Some of the publications made explicit reference to principles of good practice that should be appreciated and followed to help assure good quality qualitative research, including transparency, comprehensiveness, reflexivity, ethical practice and being systematic [ 6 , 25 , 35 , 37 ]. Across the other publications in this narrative, these principles emerged from definitions or constructs of quality [ 34 ], from recommendations of strategies to improve the research process [ 17 , 31 , 38 , 43 ], or through critiques of the output-focused approach to evaluating quality [ 33 ].

Minor narrative

Two papers did not contribute coherently to either of the two major narratives, but were similar in their approach towards addressing quality of qualitative research [ 28 , 29 ]. Both were methodological discussion papers which engaged with recent and ongoing debates around quality of qualitative research. The authors drew upon the plurality of views of quality within qualitative research, and linked it to the qualitative struggle to demonstrate credibility alongside quantitative research [ 29 ], and the contested nature of qualitative research itself [ 28 ].

The publications also shared critique of existing discourse around quality of qualitative research, but without presentation of alternative ways to assure it. Both papers critiqued the output-focused approach, conceptualising quality in terms of the demonstration of particular technical methods. However, neither paper offers a clear interpretation of the process of quality assurance: when and how it should be conducted, and what it should seek to achieve. One paper synthesised other literature and described abstract principles of qualitative research that indicate quality, but it was not clear whether these principles were intended as guidance for the research process or standards against which to evaluate the output. Similarly, the second paper argues that quality cannot be assured by predetermined techniques, but does not offer more constructive guidance. Perhaps it can be said that these two papers encapsulate the difficulties that have been faced within the qualitative research field with defining quality and articulating appropriate ways to assure that it reflects the principles of the qualitative approach, which itself is contested.

Synthesis of the two major narratives

The key features of the two major narratives emerging from the review, assuring quality by output and assuring quality by process, are captured in Table 2. The table details the perspectives held by each approach, the context in which the narratives are situated, how quality is conceptualised, and examples from the literature of recommended ways to assure quality.

The literature reviewed showed a lack of consensus between qualitative research approaches about how to assure the quality of research. This reflects past and ongoing debates among qualitative researchers about how to define quality, and even about the nature of qualitative research itself. The two main narratives that emerged from the reviewed literature reflected differing approaches to quality assurance and, underpinning these, differing conceptualisations of quality in qualitative research.

Among the literature that directly discusses quality assurance in qualitative research, the most dominant narrative detected was that of an output-oriented approach. Within this narrative, quality is conceptualised in relation to theoretical constructs such as validity or rigour, derived from the positivist paradigm, and is demonstrated by the inclusion of certain recommended methodological techniques. By contrast, the second, process-oriented narrative presented conceptualisations of quality that were linked to principles or values considered inherent to the qualitative approach, to be understood and enacted throughout the research process. A third, minor narrative offered critique of current and recent discourses on assuring quality of qualitative research but did not appear to offer alternative ways by which to conceptualise or conduct quality assurance.

Strengths of the output-oriented approach to assuring the quality of qualitative studies include its acceptability and credibility within the dominant positivist environment, where decision-making is based on 'objective' criteria of quality [11]. Checklists equip those unfamiliar with qualitative research with the means to assess its quality [6]. In this way, qualitative research can become more widely accessible, accepted and integrated into decision-making, as demonstrated by the increasing presence of qualitative studies in leading medical research journals [11, 12]. However, as argued by those contributing to the second narrative in this review, following checklists does not equate to understanding of, or commitment to, the theoretical underpinnings of qualitative paradigms or what constitutes quality within the approach. Privileging guidelines as a mechanism to demonstrate quality can mislead inexperienced qualitative researchers as to what constitutes good qualitative research. This runs the risk of reducing qualitative research to a limited set of methods requiring little theoretical expertise [52] and of diverting attention away from the analytic content unique to the qualitative approach [14]. Ultimately, a solely output-oriented approach risks skewing the values of qualitative research towards the demands of the positivist paradigm without retaining quality in the substance of the research process.

By contrast, strengths of the process-oriented approach include the ability of researchers to address the quality of their research in relation to the core principles or values of qualitative research (see Table 2). For example, previous assumptions that incorporating participant-observation methods over an extended period of time in 'the field' constituted 'good' anthropology, and hence an indicator of quality, have been challenged on the basis that fieldwork as a method should not be conducted uncritically [53], without acknowledging other important steps, including exploring variability and contradiction [54] and being explicit about methodological choices and the theoretical reasons behind them [55]. The core principles identified in this narrative also represent continuous, researcher-led activities rather than externally determined indicators, such as validity, or end-points. Reflexivity, for example, is an active, iterative process [56], described as 'an attitude of attending systematically to the context of knowledge construction... at every step of the research process' [23, p. 484]. As such, this approach emphasises the need to consider quality throughout the whole course of research, and locates the responsibility for enacting good qualitative research practice firmly with the researcher(s).

The question remains, however, of how researchers can demonstrate to others that core principles have guided their research process. The paucity of guidelines among those advocating a process-oriented approach suggests that such guidelines are either not possible or not considered desirable to disseminate. Guidelines, by their largely fixed nature, could be considered incompatible with flexible, pluralistic qualitative research. Awareness and understanding of the fundamental principles of qualitative research (such as the six identified in this review) could be considered sufficient to ensure that researchers conduct the whole research process to a high standard. Indeed, it could be argued that this type of approach has been promoted within qualitative research fields beyond the health sciences for several decades, since debates around how to do 'good' qualitative research emerged publicly [41, 43, 51]. However, the premises of this approach are challenged by increasing scrutiny over the accuracy and ethics of the generation of information through scientific activity [57, 58]. Previous critiques of a post-hoc evaluation approach to quality, in favour of procedural mechanisms to ensure good research [43], have not responded to the demand in some research contexts, particularly in global health, for externally demonstrable quality assurance procedures.

The authors propose, therefore, that some form of guidelines may be possible and desirable, although in a less structured format than those representing a more positivistic paradigm and based on researcher-led principles of good practice rather than externally-determined constructs of quality such as validity. However, first it is important to acknowledge some of the limitations of our search and interpretations.

Limitations

The number of papers included in the review was relatively low. The search was limited to publications explicitly focused on 'quality assurance', and the inclusion criteria may have excluded relevant literature that uses different terminologies, particularly as this concept has not commonly been used within qualitative methods literature. As has been demonstrated in the narratives identified, approaches to quality assurance are linked closely to conceptualisations of quality, about which there is a much larger body of literature than was reviewed for this paper. The possibility of these publications being missed, along with other hard-to-find and grey literature, has implications for the robustness of the narratives identified.

This limitation is perhaps most evident in the lack of literature in this review identified from the field of anthropology. Debates around concepts such as validity and what constitutes 'knowledge' from research have long been of interest to anthropologists [55], but the absence of these in the publications which met the inclusion criteria raises questions about the search strategy used. Although the search strategy was revised iteratively during the search process to capture variations of quality assurance, anthropological references did not emerge. The choice was made not to pursue the search further for practical and time-related reasons, but also because we felt that limiting the review to quality assurance as originally described would be useful for understanding the literature that a researcher would likely encounter when exploring quality assurance of qualitative research. The lack of a clear anthropological voice in this literature reflects the paucity of engagement with the theoretical basis of this discipline in the health sciences, unlike other social sciences such as sociology [52]. As such, anthropology's contributions to debates on qualitative research methods within health and medical research have been somewhat overlooked [59].

Hence, this review presents only a part of the discourse of assuring quality of qualitative research, but it does reflect the part that has dominated the fields of health and medical research. Although this review leaves some unanswered questions about defining and assuring quality across different qualitative disciplines, we believe it gives a valuable insight into the types of narratives a typical researcher would begin to engage with if coming from a global health research perspective.

Recommendations

The narratives emerging from this literature review indicate the challenges of approaching quality assurance from a perspective shaped by the positivist fields of evidence-based medicine, but also the lack of clear, structured guidance based on the intrinsic principles of qualitative research. We recommend that the strengths of both the output-oriented and process-oriented narratives be brought together to create guidance that reflects core principles of qualitative research but also responds to the expectations of the global health field for explicitly assured quality in research. The fundamental principles characterising qualitative research, such as the six presented in Table 2, offer the basis of an approach to assuring quality that is reflexive of, and appropriate to, the specific values of qualitative research.

The next step in developing guidance should focus on identifying practical, specific advice for researchers on how to engage with these principles and demonstrate their enactment at each stage of the research process, while being wary of promoting unthinking use of 'technical fixes' [6]. We recommend the development of a framework that helps researchers to identify their core principles, appropriate to their epistemological and methodological approach, and ways to demonstrate that these have been upheld throughout the research process. Current generic quality assurance activities, such as the use of standard operating procedures (SOPs) and monitoring visits, could be attuned to the principles of the qualitative research being undertaken through an approach that demonstrates quality without constraining the research or compromising core principles. The development of such a framework should be undertaken collaboratively between researchers and field teams undertaking qualitative research in practice. We propose that this framework be flexible enough to accommodate different qualitative methodologies without dictating essential activities for promoting quality. Unlike previous guidance, we propose that the framework should also respond to the differing demands of multi-disciplinary research teams and of external, positivist audiences for evidence of quality assurance procedures, as may be faced, for example, in the field of global health research.

This review has also highlighted the challenges of accessing a broad range of literature from across different social science disciplines (in particular anthropology) when conducting searches using standard approaches adopted in the health sciences. Further consideration should be given to how best to encourage wider search parameters, familiarisation with different sources of literature, and greater acceptance of non-traditional disciplinary perspectives within health and medical literature reviews.

Within the context of global health research, there is an increasing demand for the qualitative research field to move forward in developing and establishing coherent mechanisms for quality assurance of qualitative research. The findings of this review have helped to clarify ways in which quality assurance has been conceptualised, and indicate a promising direction for the next steps in this process. Yet the review also raises broader questions around how quality is conceptualised in relation to qualitative research, and how different qualitative disciplines and paradigms are represented in debates around the use of qualitative methods in health and medical research. We recommend the development of a flexible framework to help qualitative researchers define, apply and demonstrate principles of quality in their research.

Gilson L, Hanson K, Sheikh K, Agyepong IA, Ssengooba F, Bennett S: Building the field of health policy and systems research: social science matters. PLoS Med. 2011, 8: e1001079


Janes CR, Corbett KK: Anthropology and Global Health. Annual Review of Anthropology. 2009, 38: 167-183.


Pope C: Resisting Evidence: The Study of Evidence-Based Medicine as a Contemporary Social Movement. Health. 2003, 7: 267-282.


ICH: ICH Topic E6 (R1) Guideline for Good Clinical Practice. European Medicines Agency; 1996.

Good Clinical Practice: Frequently asked questions. [ http://www.mhra.gov.uk/Howweregulate/Medicines/Inspectionandstandards/GoodClinicalPractice/Frequentlyaskedquestions/index.htm#1 ]

Barbour RS: Checklists for improving rigour in qualitative research: a case of the tail wagging the dog?. British Medical Journal. 2001, 322: 1115-1117.


Dixon-Woods M, Shaw RL, Agarwal S, Smith JA: The problem of appraising qualitative research. Quality and Safety in Health Care. 2004, 13: 223-225.

ACT Consortium. [ http://www.actconsortium.org ]

Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O, Peacock R: Storylines of research in diffusion of innovation: a meta-narrative approach to systematic review. Social Science & Medicine. 2005, 61: 417-430.

Greenhalgh T, Potts H, Wong G, et al: Tensions and Paradoxes in Electronic Patient Record Research: A Systematic Literature Review Using the Meta-narrative Method. The Milbank Quarterly. 2009, 87: 729-788.

Stige B, Malterud K, Midtgarden T: Toward an Agenda for Evaluation of Qualitative Research. Qualitative Health Research. 2009, 19: 1504-1516.


Pope C, Mays N: Critical reflections on the rise of qualitative research. BMJ. 2009, 339: b3425

Dixon-Woods M, Fitzpatrick R, Roberts K: Including qualitative research in systematic reviews: opportunities and problems. Journal of Evaluation in Clinical Practice. 2001, 7: 125-133.


Eakin JM, Mykhalovskiy E: Reframing the evaluation of qualitative health research: reflections on a review of appraisal guidelines in the health sciences. Journal of Evaluation in Clinical Practice. 2003, 9: 187-194.

Plochg T, van Zwieten M: Guidelines for quality assurance in health and health care research: Qualitative Research. Amsterdam Centre for Health and Health Care Research; 2002.

Boulton M, Fitzpatrick R: 'Quality' in qualitative research. Critical Public Health. 1994, 5: 19-26.

Bradbury-Jones C: Enhancing rigour in qualitative health research: exploring subjectivity through Peshkin's I's. Journal of Advanced Nursing. 2007, 59: 290-298.

Devers K: How will we know "good" qualitative research when we see it? Beginning the dialogue in health services research. Health Services Research. 1999, 34: 1153-1188.


Green J, Britten N: Qualitative research and evidence based medicine. British Medical Journal. 1998, 316: 1230-1232.

Kitto SC, Chesters J, Grbich C: Quality in qualitative research. Medical Journal of Australia. 2008, 188: 243-246.


Koch T: Establishing rigour in qualitative research: the decision trail. Journal of Advanced Nursing. 1994, 19: 976-986.

Macdonald ME: Growing Quality in Qualitative Health Research. International Journal of Qualitative Methods. 2009, 8: 97-101.

Malterud K: Qualitative research: standards, challenges, and guidelines. The Lancet. 2001, 358: 483-488.


Mays N, Pope C: Assessing quality in qualitative research. British Medical Journal. 2000, 320: 50-52.

McBrien B: Evidence-based care: enhancing the rigour of a qualitative study. British Journal of Nursing. 2008, 17: 1286-1289.

Nelson AM: Addressing the threat of evidence-based practice to qualitative inquiry through increasing attention to quality: A discussion paper. International Journal of Nursing Studies. 2008, 45: 316-322.

Peck E, Secker J: Quality criteria for qualitative research: does context make a difference?. Qualitative Health Research. 1999, 9: 552-558.

Rolfe G: Validity, trustworthiness and rigour: quality and the idea of qualitative research. Journal of Advanced Nursing. 2006, 53: 304-310.

Ryan-Nicholls KD, Will CI: Rigour in qualitative research: mechanisms for control. Nurse Researcher. 2009, 16: 70-85.

Secker J, Wimbush E, Watson J, Milburn K: Qualitative methods in health promotion research: some criteria for quality. Health Education Journal. 1995, 54: 74-87.

Tobin GA, Begley CM: Methodological rigour within a qualitative framework. Journal of Advanced Nursing. 2004, 48: 388-396.

Whittemore R, Chase SK, Mandle CL: Validity in Qualitative Research. Qualitative Health Research. 2001, 11: 522-537.

Akkerman S, Admiraal W, Brekelmans M, et al: Auditing quality of research in social sciences. Quality and Quantity. 2008, 42 (2): 257-274.

Bergman MM, Coxon APM: The Quality in Qualitative Methods. Forum Qualitative Sozialforschung/Forum: Qualitative Social Research. 2005, 6:

Brown A: Qualitative method and compromise in applied social research. Qualitative Research. 2010, 10 (2): 229-248.

Dale A: Editorial: Quality in Social Research. International Journal of Social Research Methodology. 2006, 9: 79-82.

Flick U: Managing quality in qualitative research. 2007, London: Sage Publications


Koro-Ljungberg M: Validity, responsibility, and aporia. Qualitative inquiry. 2010, 16 (8): 603-610.

Lewis J: Redefining Qualitative Methods: Believability in the Fifth Moment. International Journal of Qualitative Methods. 2009, 8: 1-14.

Research Information Network: Quality assurance and assessment of quality research. Research Information Network; 2010.

Seale C: The Quality of Qualitative Research. 1999, London: SAGE Publications

Tracy SJ: Qualitative Quality: Eight "Big-Tent" Criteria for Excellent Qualitative Research. Qualitative inquiry. 2010, 16: 837-851.

Morse JM, Barrett M, Mayan M, Olson K, Spiers J: Verification Strategies for Establishing Reliability and Validity in Qualitative Research. International Journal of Qualitative Methods. 2002, 1: 1-19.

Johnson RB: Examining the validity structure of qualitative research. Education. 1997, 118: 282

Creswell JW, Miller DL: Determining Validity in Qualitative Inquiry. Theory Into Practice. 2000, 39: 124

Torrance H: Building confidence in qualitative research: engaging the demands of policy. Qualitative inquiry. 2008, 14 (4): 507-527.

Shenton AK: Strategies for ensuring trustworthiness in qualitative research projects. Education for Information. 2004, 22: 63-75.

Barker M: Assessing the 'Quality' in Qualitative Research. European Journal of Communication. 2003, 18: 315-335.

Forrest Keenan K, van Teijlingen E: The quality of qualitative research in family planning and reproductive health care. Journal of Family Planning and Reproductive Health Care. 2004, 30: 257-259.

Becker S, Bryman A, Sempik J: Defining 'Quality' in Social Policy Research: Views, Perceptions and a Framework for Discussion. Social Policy Association; 2006.

Lincoln YS, Guba EG: Naturalistic inquiry. 1985, Beverly Hills, CA: SAGE Publications

Lambert H, McKevitt C: Anthropology in health research: from qualitative methods to multidisciplinarity. British Medical Journal. 2002, 325: 210-213.

Gupta A, Ferguson J: Introduction-discipline and practice: "the field" as site, method, and location in anthropology. Anthropological locations: boundaries and grounds of a field science. Edited by: Gupta A, Ferguson J. 1997, Berkeley: University of California Press, 1-46.

Manderson L, Aaby P: An epidemic in the field? Rapid assessment procedures and health research. Social Science & Medicine. 1992, 35: 839-850.

Sanjek R: On ethnographic validity. Fieldnotes: the makings of anthropology. Edited by: Sanjek R. 1990, Ithaca, NY: Cornell University Press, 385-418.

Barry C, Britten N, Barber N, et al: Using reflexivity to optimize teamwork in qualitative research. Qualitative Health Research. 1999, 9: 26-44.

Murphy E, Dingwall R: Informed consent, anticipatory regulation and ethnographic practice. Social Science & Medicine. 2007, 65: 2223-2234.

Glickman SW, McHutchison JG, Peterson ED, Cairns CB, Harrington RA, Califf RM, Schulman KA: Ethical and Scientific Implications of the Globalization of Clinical Research. New England Journal of Medicine. 2009, 360: 816-823.

Savage J: Ethnography and health care. BMJ. 2000, 321: 1400-1402.

Denzin N, Lincoln YS: Introduction: the discipline and practice of qualitative research. The SAGE Handbook of Qualitative Research. Edited by: Denzin N, Lincoln YS. 2005, Thousand Oaks, CA: SAGE, 3


Acknowledgements and funding

The authors would like to acknowledge with gratitude the input and insights of Denise Allen in developing the discussion and recommendations of this paper, and in particular, offering an important anthropological voice. JR, JK, PM and CC have full salary support and NE and EA have partial salary support from the ACT Consortium, which is funded through a grant from the Bill & Melinda Gates Foundation to the London School of Hygiene and Tropical Medicine.

Author information

Authors and affiliations

Department of Global Health & Development, London School of Hygiene & Tropical Medicine, London, UK

Joanna Reynolds & Clare Chandler

Infectious Diseases Research Collaboration, Mulago Hospital Complex, Kampala, Uganda

James Kizito

Department of Sociology/Anthropology, University of Nigeria, Nsukka, Nigeria

Nkoli Ezumah

National Institute for Medical Research, Amani Centre, Muheza, Tanzania

Peter Mangesho

Division of Clinical Pharmacology, Department of Medicine, University of Cape Town, Cape Town, South Africa

Elizabeth Allen


Corresponding author

Correspondence to Joanna Reynolds.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

JR helped with the design of the review, searched for and reviewed the literature, and wrote the first draft of the manuscript. JK, NE, PM and EA contributed to the interpretation of the results and the writing of the manuscript. CC conceived of the review and helped with its design, the interpretation of results and the writing of the manuscript. All authors read and approved the final manuscript.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( http://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Reynolds, J., Kizito, J., Ezumah, N. et al. Quality assurance of qualitative research: a review of the discourse. Health Res Policy Sys 9 , 43 (2011). https://doi.org/10.1186/1478-4505-9-43


Received : 15 July 2011

Accepted : 19 December 2011

Published : 19 December 2011

DOI : https://doi.org/10.1186/1478-4505-9-43


  • Qualitative
  • global health
  • quality assurance
  • meta-narrative
  • literature review

Health Research Policy and Systems

ISSN: 1478-4505



Recently published documents on quality assurance


Quality Assurance Information System: The Case of the TEI of Athens

Systematic Assessment of Data Quality and Quality Assurance/Quality Control (QA/QC) of Current Research on Microplastics in Biosolids and Agricultural Soils

Sigma Metrics in Quality Control: An Innovative Tool

The clinical laboratory in today's world is a rapidly evolving field which faces constant pressure to produce quick and reliable results. The sigma metric is a tool which helps to reduce process variability, quantify the approximate number of analytical errors, and evaluate and guide better quality control (QC) practices. The aims were to analyze the sigma metrics of 16 biochemistry analytes using the ERBA XL 200 biochemistry analyzer, to interpret parameter performance, to compare analyzer performance with other Middle East studies, and to modify existing QC practices. This study was undertaken at a clinical laboratory over a period of 12 months, from January to December 2020, for the following analytes: albumin (ALB), alanine aminotransferase (SGPT), aspartate aminotransferase (SGOT), alkaline phosphatase (ALKP), bilirubin total (BIL T), bilirubin direct (BIL D), calcium (CAL), cholesterol (CHOL), creatinine (CREAT), gamma glutamyl transferase (GGT), glucose (GLUC), high density lipoprotein (HDL), triglyceride (TG), total protein (PROT), uric acid (UA) and urea. The coefficient of variation (CV%) and bias% were calculated from internal quality control (IQC) and external quality assurance scheme (EQAS) records respectively. Total allowable error (TEa) was obtained from Clinical Laboratory Improvement Amendments (CLIA) guidelines. Sigma metrics were calculated from CV%, bias% and TEa for the above parameters. It was found that 5 analytes in level 1 and 8 analytes in level 2 had greater than 6 sigma performance, indicating world class quality. Cholesterol, glucose (levels 1 and 2) and creatinine level 1 showed >4 sigma performance, i.e. acceptable performance. Urea (both levels) and GGT (level 1) showed <3 sigma and were therefore identified as the problem analytes. Sigma metrics help to assess analytical methodologies and can serve as an important self-assessment tool for quality assurance in the clinical laboratory. Sigma metric evaluation in this study helped to evaluate the quality of several analytes and to categorize them from high performing to problematic, indicating the utility of this tool. In conclusion, parameters showing less than 3 sigma need strict monitoring and modification of quality control procedures, with a change of method if necessary.
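The sigma metric described in this abstract is conventionally computed as sigma = (TEa − |bias|) / CV, with all three quantities expressed as percentages. A minimal sketch of that arithmetic in Python, assuming this standard formula; the analyte values below are invented for illustration and are not figures from the study:

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma metric = (total allowable error - |bias|) / CV, all in percent.

    TEa is taken from CLIA limits, bias% from EQAS records and CV% from
    IQC records, mirroring the inputs named in the abstract above.
    """
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical values for two analytes, for illustration only:
for name, tea, bias, cv in [("glucose", 10.0, 1.2, 2.0), ("urea", 9.0, 3.0, 2.5)]:
    sigma = sigma_metric(tea, bias, cv)
    band = ("world class" if sigma >= 6
            else "acceptable" if sigma >= 4
            else "problem analyte" if sigma < 3
            else "marginal")
    print(f"{name}: sigma = {sigma:.2f} ({band})")
```

On these hypothetical numbers, glucose scores 4.40 sigma (acceptable) while urea scores 2.40 (a problem analyte), matching the banding the abstract uses to separate well-performing from problematic parameters.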

Internal Quality Assurance System (Sistem Penjaminan Mutu Internal, SPMI)

Abstract: The purpose of this research is to examine the educational achievements of students through an internal quality assurance system, and to examine the system as a tool for achieving and maintaining school progress. The research takes a quantitative approach. Data were obtained through interviews, observations, and library study, and the results were analysed through data reduction, presentation of data and the drawing of conclusions. The findings speak to the importance of implementing SPMI in educational institutions. The study was conducted at SMAN 3 Wajo. The results show that: (1) SPMI, when carried out continuously, contributes to the acquisition of superior accreditation ratings; (2) an SPMI cycle that is carried out in its entirety guides the conduct of various tasks by school stakeholders; and (3) a quality culture can be created through the implementation of SPMI. Keywords: Internal Quality Assurance System; Quality of SMAN 3 Wajo School

Quality assurance for on‐table adaptive magnetic resonance guided radiation therapy: A software tool to complement secondary dose calculation and failure modes discovered in clinical routine

Editorial Comment: Factors Impacting US LI-RADS Visualization Scores—Optimizing Future Quality Assurance and Standards

The Association of Laryngeal Position on Videolaryngoscopy and Time Taken to Intubate Using Spatial Point Pattern Analysis of Prospectively Collected Quality Assurance Data

The Impact of Policy Changes, Dedicated Funding and Implementation Support on Early Intervention Programs for Psychosis

Introduction: Early intervention services for psychosis (EIS) are associated with improved clinical and economic outcomes. In Quebec, clinicians led the development of EIS from the late 1980s until 2017, when the provincial government announced EIS-specific funding, implementation support and provincial standards. This provides an interesting context in which to understand the impacts of policy commitments on EIS. Our primary objective was to describe the implementation of EIS three years after this increased political involvement.

Methods: This cross-sectional descriptive study was conducted in 2020 through a 161-question online survey, modeled after our team's earlier surveys, on the following themes: program characteristics, accessibility, program operations, clinical services, training/supervision, and quality assurance. Descriptive statistics were performed. When relevant, we compared data on programs founded before and after 2017.

Results: Twenty-eight of 33 existing EIS completed the survey. Between 2016 and 2020, the proportion of Quebec's population having access to EIS rose from 46% to 88%; >1,300 yearly admissions were reported by surveyed EIS, surpassing the government's epidemiological estimates. Most programs set accessibility targets, adopted inclusive intake criteria and an open referral policy, and engaged in education of referral sources. A wide range of biopsychosocial interventions and assertive outreach were offered by interdisciplinary teams. Administrative/organisational components were less widely implemented, such as clinical/administrative data collection, respecting recommended patient-to-case-manager ratios, and quality assurance.

Conclusion: Increased governmental implementation support, including dedicated funding, led to widespread implementation of good-quality, accessible EIS. Though some differences were found between programs founded before and after 2017, there was no overall discernible impact of year of implementation. Persisting challenges in collecting data may impede monitoring, data-informed decision-making, and quality improvement. Maintaining fidelity and meeting provincial standards may prove challenging as programs mature and adapt to their catchment areas' specificities and as caseloads increase. Governmental incidence estimates may need recalculation in light of recent epidemiological data.

Current Status of Quality Assurance Scheme in Selected Undergraduate Medical Colleges of Bangladesh

This descriptive cross-sectional study was carried out to determine the current status of the Quality Assurance Scheme in undergraduate medical colleges of Bangladesh. The study covered eight medical colleges (four government and four non-government) over the period from July 2015 to June 2016, using one open-question interview schedule for college authorities and another for heads of departments. The study revealed that 87.5% of colleges had a Quality Assurance Scheme (QAS), 75% of college authorities held regular meetings of an academic coordination committee, 50% of colleges had an active Medical Education Unit, and 87.5% of college authorities reported publication of a journal at their college. The researchers also interviewed 53 heads of department, with open questions about the distribution and collection of personal review forms, their submission with recommendations to the academic coordinator, and annual review meetings for faculty development. The interviews revealed a total absence of these practices, which are directed in the national guidelines and tools for the Quality Assurance Scheme (QAS) for medical colleges of Bangladesh. Bangladesh Journal of Medical Education Vol. 13(1), January 2022: 33-39.

An Application of a Cadastral Fabric System in Improving Positional Accuracy of Cadastral Databases in Malaysia

Abstract. A cadastral fabric is perceived as a feasible solution to improve the speed, efficiency and quality of cadastral measurement data, to implement Positional Accuracy Improvement (PAI), and to support the Coordinated Cadastral System (CCS) and Dynamic Coordinated Cadastral System (DCCS) in Malaysia. In light of this, this study proposes a system to upgrade the positional accuracy of the existing cadastral system through the use of a cadastral fabric system. A comprehensive investigation of the capability of the proposed system was carried out. Four evaluation aspects were incorporated in the study to investigate the feasibility and capability of the software, viz. performance of geodetic least squares adjustment, quality assurance techniques, supporting functions, and user friendliness. The study utilises secondary data obtained from the Department of Surveying and Mapping Malaysia (DSMM); the test area, coded Block B21701, is located in Selangor, Malaysia. Results show that least squares adjustment for the entire network is completed in a timely manner. Various quality assurance techniques are implementable, namely error ellipses, magnitudes of correction vectors and adjustment trajectory, as well as inspection of adjusted online bearings. In addition, the system supports coordinate versioning and coordinates in various datums or projections. Last but not least, the user friendliness of the system is identified through the software interface, interaction and automation functions. It is concluded that the proposed system is highly feasible and capable of creating a cadastral fabric to improve the positional accuracy of the existing cadastral system used in Malaysia.
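The "geodetic least squares adjustment" and "magnitude of correction vectors" evaluated in this abstract follow the standard parametric adjustment model l = Ax + v. A minimal NumPy sketch of that computation, using an invented three-observation levelling-style network; the matrices and values are illustrative, not DSMM data:

```python
import numpy as np

# Parametric least squares adjustment: l = A @ x + v, with weight matrix P.
A = np.array([[1.0,  0.0],    # obs 1 observes unknown height h1
              [0.0,  1.0],    # obs 2 observes unknown height h2
              [1.0, -1.0]])   # obs 3 observes the difference h1 - h2
l = np.array([10.02, 4.99, 5.06])   # observed values (m), invented
P = np.diag([1.0, 1.0, 2.0])        # weights, e.g. inverse variances

N = A.T @ P @ A                          # normal matrix
x_hat = np.linalg.solve(N, A.T @ P @ l)  # adjusted unknowns
v = A @ x_hat - l                        # correction (residual) vector

dof = A.shape[0] - A.shape[1]            # redundancy of the network
sigma0_sq = (v @ P @ v) / dof            # a posteriori variance factor
Q_x = sigma0_sq * np.linalg.inv(N)       # covariance of the unknowns; for a 2D
                                         # point, its eigenvalues/eigenvectors
                                         # give the error-ellipse axes used as
                                         # a QA check alongside |v|
print("adjusted unknowns:", x_hat)
print("correction magnitudes:", np.abs(v))
```

Inspecting the magnitudes of v and the error ellipses derived from Q_x is the kind of post-adjustment quality assurance the abstract lists: unusually large corrections flag suspect cadastral measurements.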


About the authors (2014)

Nancy Vyhmeister (EdD, Andrews University) has fifty years of experience in teaching future pastors and professors not only in the United States but throughout the world. She continues to have a global ministry in her retirement years. She has authored several books, both in Spanish and English, including a Greek grammar book for Spanish-speaking students. She was editor of Women in Ministry: Biblical and Historical Perspectives. She currently resides with her husband in Loma Linda, California.

Terry Robertson (MA, Andrews University, MLS Indiana University) is Seminary Librarian at Andrews University and teaches the master's level research course at the seminary.


The dual dimension of scientific research experience acquisition and its development: a 40-year analysis of Chinese Humanities and Social Sciences Journals


  • Kun Chen 1,2,
  • Xia-xia Gao 3,
  • Yi-di Huang 4,5,
  • Wen-tao Xu 6 &
  • Guo-liang Yang ORCID: orcid.org/0000-0001-9781-4446 4,5,7

Scientific experience is crucial for producing high-quality research, and the mode of acquisition can significantly affect the rate at which it accumulates. We present a framework for scientific experience acquisition that outlines the dual dimensions of experience accumulation: self-accumulation and accumulation under senior expert guidance. To validate the framework, we conducted a case study using 2,957,700 papers from all 568 Chinese humanities and social science journals, taking into account the limitations of the international journal system. Our findings reveal that self-accumulation has been gradually declining, decreasing from 57.67% in 1980 to 4.55% in 2020. Conversely, accumulation under senior expert guidance has been steadily increasing, rising from 5.7% in 1980 to 28.69% in 2020. Furthermore, the proportion of the two approaches varies by discipline. Social sciences such as Psychology, Economics, and Management, which rely more on large teams and collaborative research, have a higher proportion of accumulation under senior expert guidance than humanities disciplines like Art, History, and Philosophy, which depend more on individual research. Finally, this research also offers a distinctive exploration of the question posed by the US National Science and Technology Council (2008): how and why do communities of innovation form and evolve?


The term “senior expert” in this context refers to individuals who can share their experiences with newcomers entering academia, not just a student's supervisor.

The Chinese version of Social Science Citation Index.

The data is sourced from the National Name Report, published by China's Ministry of Public Security. This report includes a count of common surnames and the most frequently used first names across different generations. By combining common surnames with the most popular first names, a list of common names is generated.

Adams, J. (2012). The rise of research networks. Nature, 490 (7420), 335–336.


Amjad, T., Ding, Y., Xu, J., Zhang, C., Daud, A., Tang, J., & Song, M. (2017). Standing on the shoulders of giants. Journal of Informetrics, 11 (1), 307–323.

Baker, V. L., & Pifer, M. J. (2011). The role of relationships in the transition from doctoral student to independent scholar. Studies in Continuing Education, 33 (1), 5–17.

Cao, C., Li, N., Li, X., & Liu, L. (2013). Reforming China’s S&T system. Science, 341 (6145), 460–462.

Clauset, A., Arbesman, S., & Larremore, D. B. (2015). Systematic inequality and hierarchy in faculty hiring networks. Science Advances, 1 (1), e1400005.

Daud, A., Abbasi, R., & Muhammad, F. (2013). Finding rising stars in social networks. In W. Meng, L. Feng, S. Bressan, W. Winiwarter, & W. Song (Eds.), Database systems for advanced applications: 18th international conference, DASFAA 2013. Proceedings, part I 18 (pp. 13–24). Springer.


Daud, A., Ahmad, M., Malik, M. S. I., & Che, D. (2015). Using machine learning techniques for rising star prediction in co-author network. Scientometrics, 102 , 1687–1711.

Golestan, S., Ramezani, M., Guerrero, J. M., Freijedo, F. D., & Monfared, M. (2014). Moving average filter based phase-locked loops: Performance analysis and design guidelines. IEEE Transactions on Power Electronics, 29 (6), 2750–2763.

Hackett, E. J., Amsterdamska, O., Lynch, M., & Wajcman, J. (2008). The handbook of science and technology studies (3rd ed.). MIT Press.


Hicks, D., Wouters, P., Waltman, L., De Rijcke, S., & Rafols, I. (2015). Bibliometrics: The Leiden Manifesto for research metrics. Nature, 520 (7548), 429–431.

Hilborn, R., & Liermann, M. (1998). Standing on the shoulders of giants: Learning from experience in fisheries. Reviews in Fish Biology and Fisheries, 8 , 273–283.

Kalaitzidakis, P., Mamuneas, T. P., & Stengos, T. (2003). Rankings of academic journals and institutions in economics. Journal of the European Economic Association , 1 (6), 1346–1366.

Larivière, V., Ni, C., Gingras, Y., Cronin, B., & Sugimoto, C. R. (2013). Bibliometrics: Global gender disparities in science. Nature, 504 (7479), 211–213.

Li, W., Aste, T., Caccioli, F., & Livan, G. (2018). Reciprocity and success in academic careers. http://arxiv.org/abs/1808.03781

Li, W., Aste, T., Caccioli, F., & Livan, G. (2019). Early coauthorship with top scientists predicts success in academic careers. Nature Communications, 10 (1), 5170.

Liénard, J. F., Achakulvisut, T., Acuna, D. E., & David, S. V. (2018). Intellectual synthesis in mentorship determines success in academic careers. Nature Communications, 9 (1), 4840.

Malmgren, R. D., Ottino, J. M., & Nunes Amaral, L. A. (2010). The role of mentorship in protégé performance. Nature, 465 (7298), 622–626.

Merton, R. K. (1968). The Matthew effect in science: The reward and communication systems of science are considered. Science, 159 (3810), 56–63.

National Science and Technology Council. (2008). The science of science policy: A federal research roadmap . National Science and Technology Council.

Oromaner, M. (1977). Professional age and the reception of sociological publications: A test of the Zuckerman-Merton hypothesis. Social Studies of Science, 7 (3), 381–388.

Petersen, A. M., Jung, W. S., Yang, J. S., & Stanley, H. E. (2011). Quantitative and empirical demonstration of the Matthew effect in a study of career longevity. Proceedings of the National Academy of Sciences of the United States of America, 108 (1), 18–23.

Petersen, A. M., Riccaboni, M., Stanley, H. E., & Pammolli, F. (2012). Persistence and uncertainty in the academic career. Proceedings of the National Academy of Sciences of the United States of America, 109 (14), 5213–5218.

Pfeiffer, M., Fischer, M. R., & Bauer, D. (2016). Publication activities of German junior researchers in academic medicine: Which factors impact impact factors? BMC Medical Education, 16 (1), 1–10.

Price, D. J. D. S. (1965). Networks of scientific papers: The pattern of bibliographic references indicates the nature of the scientific research front. Science, 149 (3683), 510–515.

Qi, M., Zeng, A., Li, M., Fan, Y., & Di, Z. (2017). Standing on the shoulders of giants: the effect of outstanding scientists on young collaborators’ careers. Scientometrics, 111 , 1839–1850.

Reyes, P., Reviriego, P., Maestro, J. A., & Ruano, O. (2007). New protection techniques against SEUs for moving average filters in a radiation environment. IEEE Transactions on Nuclear Science, 54 (4), 957–964.

Sekara, V., Deville, P., Ahnert, S. E., Barabási, A. L., Sinatra, R., & Lehmann, S. (2018). The chaperone effect in scientific publishing. Proceedings of the National Academy of Sciences of the United States of America, 115 (50), 12603–12607.

Sitkin, S. B. (1992). Learning through failure: The strategy of small losses. Research in Organizational Behavior, 14 , 231–266.

Srivastava, S. R., Meena, Y. K., & Singh, G. (2022). Forecasting on Covid-19 infection waves using a rough set filter driven moving average models. Applied Soft Computing, 131 , 109750.

Strumpfer, D. J. W. (2005). Standing on the shoulders of giants: Notes on early positive psychology (Psychofortology). South African Journal of Psychology, 35 (1), 21–45.

Sun, Y., & Cao, C. (2021). Planning for science: China’s “grand experiment” and global implications. Humanities and Social Sciences Communications, 8 (1), 1–9.

Sweitzer, V. (2009). Towards a theory of doctoral student professional identity development: A developmental networks approach. The Journal of Higher Education, 80 (1), 1–33.

Taheri, B., & Sedighizadeh, M. (2021). A moving window average method for internal fault detection of power transformers. Cleaner Engineering and Technology, 4 , 100195.

Taleb, N. N. (2014). Antifragile: Things that gain from disorder (Vol. 3). Random House Trade Paperbacks.

Torvik, V. I., & Smalheiser, N. R. (2009). Author name disambiguation in MEDLINE. ACM Transactions on Knowledge Discovery from Data (TKDD), 3 (3), 1–29.

Torvik, V. I., Weeber, M., Swanson, D. R., & Smalheiser, N. R. (2005). A probabilistic similarity metric for Medline records: A model for author name disambiguation. Journal of the American Society for Information Science and Technology, 56 (2), 140–158.

Ubfal, D., & Maffioli, A. (2011). The impact of funding on research collaboration: Evidence from a developing country. Research Policy, 40 (9), 1269–1279.

Wang, H., & Zhang, T. (2019). Co-authorship analyses of management journals: Contrast with science and humanities disciplines. Journal of Documentation and Data, 1 (3), 81–95. (In Chinese).

Wang, Y., Jones, B. F., & Wang, D. (2019). Early-career setback and future career impact. Nature Communications, 10 (1), 4331.

Way, S. F., Morgan, A. C., Larremore, D. B., & Clauset, A. (2019). Productivity, prominence, and the effects of academic environment. Proceedings of the National Academy of Sciences of the United States of America, 116 (22), 10729–10733.

Zhang, B., & Al Hasan, M. (2017, November). Name disambiguation in anonymized graphs using network embedding. In Proceedings of the 2017 ACM on conference on information and knowledge management (pp. 1239–1248).

Zuckerman, H. (1967). Nobel laureates in science: Patterns of productivity, collaboration, and authorship. American Sociological Review, 32 (3), 391–403.


Acknowledgements

This work was financially supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (Grant No. 2023D01C28), National Natural Science Foundation of China (Grant No. 72071196), the Ph.D. Scientific research Start-up Project of Xinjiang University (Grant No. BS202104), the Tianchi Doctoral Project of Xinjiang (Grant No. TCBS202050) and the Xinjiang High-level Talents Tianchi Program (Grant No. TCBR202104).

Author information

Authors and affiliations

School of Politics and Public Administration, Xinjiang University, Urumqi, 830046, China

Collaborative Innovation Center for National Security Research, Xinjiang University, Urumqi, 830046, China

School of Marxism, Xinjiang University, Urumqi, 830046, China

Xia-xia Gao

Institutes of Science and Development, Chinese Academy of Sciences, Beijing, 100190, China

Yi-di Huang & Guo-liang Yang

University of Chinese Academy of Sciences, Beijing, 100049, China

CECEP Talroad Technology Company, Urumqi, 830046, China

School of Economics, Beijing Technology and Business University, Beijing, 100048, China

Guo-liang Yang


Corresponding author

Correspondence to Guo-liang Yang.

Ethics declarations

Conflict of interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Informed consent

1) This article has not been published in whole or in part elsewhere; 2) the manuscript is not currently being considered for publication in another journal; 3) all authors have been personally and actively involved in substantive work leading to the manuscript, and will hold themselves jointly and individually responsible for its content.

Research involving human participants and/or animals

This article does not include any research involving humans or animals.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article

Chen, K., Gao, Xx., Huang, Yd. et al. The dual dimension of scientific research experience acquisition and its development: a 40-year analysis of Chinese Humanities and Social Sciences Journals. Scientometrics (2024). https://doi.org/10.1007/s11192-024-05002-6


Received : 21 September 2023

Accepted : 18 March 2024

Published : 18 May 2024

DOI : https://doi.org/10.1007/s11192-024-05002-6


  • Experience acquisition
  • Self-accumulation
  • Authorship order
  • Chinese Humanities and Social Sciences

Budget 2024-25

Cost of living help and a future made in Australia

Strengthening Medicare and the care economy

Building a better health system that improves outcomes


High‑quality health services through Medicare

Boosting access to essential health services

Building a better healthcare system

The Government is investing $2.8 billion to continue its commitment to strengthen Medicare. This includes the $1.2 billion package to address pressures facing the health system, which provides:

  • $882.2 million to support older Australians avoid hospital admission, be discharged from hospital earlier and improve their transition out of hospital to other appropriate care.
  • $227 million to deliver a further 29 Medicare Urgent Care Clinics and boost support for regional and remote clinics. This will increase the total number of clinics across Australia to 87. Since commencing last year, existing clinics have already provided almost 400,000 bulk‑billed visits.
  • $90 million to address health workforce shortages by making it simpler and quicker for international health practitioners to work in Australia.


Rohan's daughter Zoya has been off school with a runny nose and a cough. By 6pm, she is lethargic and has a fever.

Rohan is concerned because his regular GP is now closed. Instead of waiting for hours at the emergency department, he takes Zoya to a Medicare Urgent Care Clinic, without having to make an appointment. 

During the bulk billed visit, Zoya is diagnosed with an infection by the doctor and prescribed appropriate medication. Rohan and Zoya leave within an hour of arrival. Zoya makes a full recovery.

Improving health outcomes

Almost half of Australians live with a chronic condition. This Budget will provide $141.1 million for research and services for people living with chronic conditions, including bowel and skin cancer, diabetes and dementia.

To improve health outcomes, the Government is providing:

  • Support for Australians to enjoy healthier, more active lives by investing $132.7 million in sport participation and performance programs.
  • $825.7 million to ensure Australians can continue to access testing for and vaccinations against COVID‑19. The Government is also ensuring continued access to oral antiviral medicines on the Pharmaceutical Benefits Scheme.
  • $41.6 million over two years to continue funding for alcohol and other drug treatment and support services, including the Good Sports alcohol management program for community sporting clubs.

The Government is allocating an additional $411.6 million (for a total $1.6 billion over 13 years) through the Medical Research Future Fund to continue existing research and introduce two new research missions for low‑survival cancers and reducing health inequities.

Improving access to medicines

The Government is investing $3.4 billion for new and amended listings to the Pharmaceutical Benefits Scheme, which means eligible patients can save on treatment costs.

By expanding the Closing the Gap Pharmaceutical Benefits Scheme Co‑payment Program, eligible First Nations patients will have free or cheaper access to all Pharmaceutical Benefits Scheme medicines.

Australians will benefit from $141.1 million to support and expand the National Immunisation Program.

Mental health support

The Government’s $888.1 million mental health package over eight years will help people get the care they need, while relieving pressure on the Better Access initiative and making it easier to access services.

A free, low‑intensity digital service will be established to address the gap for people with mild mental health concerns. From 1 January 2026, Australians will be able to access the service without a referral and receive timely, high‑quality mental health support. Once fully established, 150,000 people are expected to make use of this service each year.

The Government is improving access to free mental health services through a network of walk‑in Medicare Mental Health Centres, built on the established Head to Health network. The upgraded national network of 61 Medicare Mental Health Centres will open by 30 June 2026. They will provide clinical services for adults with moderate‑to‑severe mental health needs.

For Australians with complex mental health needs, funding will be provided for Primary Health Networks to partner with GPs to deliver multidisciplinary, wraparound support services and care coordination.

Improving the aged care system

Providing quality care

The Budget provides $2.2 billion to deliver aged care reforms and continue implementing recommendations from the Royal Commission into Aged Care Quality and Safety.

The new Aged Care Act will put the rights and needs of older people at the centre of the aged care system. The new Act will provide the framework for fundamental change within the aged care sector.

More Home Care Packages

The Government is investing $531.4 million to release an additional 24,100 Home Care Packages in 2024–25. This will help reduce average wait times and enable people to age at home if they prefer to do so.

Improving aged care regulation

Funding of $110.9 million over four years will support an increase in the Aged Care Quality and Safety Commission’s regulatory capabilities.

The Government is investing $1.2 billion in critical digital systems to support the introduction of the new Aged Care Act and contemporary IT systems.

The My Aged Care Contact Centre will receive $37 million to reduce call‑waiting times for people seeking information and access to aged care.

Higher wages for aged care workers

The Government has committed to fund the Fair Work Commission decision to increase the award wage for direct and indirect aged care workers once the final determination is made. This will build on the $11.3 billion already allocated to support the interim 15 per cent wage increase for aged care workers.

The Government is providing $87.2 million for workforce initiatives to attract nurses and other workers into aged care.

Reforming the disability sector

Better and more sustainable services

Getting the National Disability Insurance Scheme (NDIS) back on track

A further $468.7 million is being provided to support people with disability and get the NDIS back on track. This includes:

  • $214 million over two years to fight fraud and to co‑design NDIS reforms with people with disability, announced earlier this year
  • $160.7 million to upgrade the NDIS Quality and Safeguards Commission’s information technology
  • $45.5 million to establish a NDIS Evidence Advisory Committee
  • $20 million to start consultation and design on reforms to help NDIS participants and people with disability navigate services.

This builds on $732.9 million provided in the 2023–24 Budget.

In December 2023, National Cabinet agreed to work together to improve the experience of participants and restore the original intent of the Scheme to support people with permanent and significant disability, within a broader ecosystem of supports. This builds on an earlier decision by National Cabinet to ensure Scheme sustainability and achieve an 8 per cent growth target by 1 July 2026, with further moderation as the NDIS matures.

Improving employment for people with disability

A $227.6 million investment will support a new specialised disability employment program to replace the existing Disability Employment Services program by 1 July 2025. This includes a modern digital platform for providers and participants. These reforms will support more people with disability into sustainable work, through a program with greater flexibility, increased individual supports, and better service quality. Eligibility will be expanded to include volunteers outside the income support system and those with less than eight hours per week work capacity.

Delivering essential services

Investing in reliability and security

Strengthening resourcing for Services Australia

The Government is delivering safer and more efficient government services for all Australians.

A $1.8 billion provision will support delivery of customer and payment services. This includes funding for frontline and service delivery staff to manage claims, respond to natural disasters and improve the cyber security environment. The Government is providing $314.1 million over two years to strengthen safety and security at Services Australia centres.

The Government is investing $580.3 million over four years and $139.6 million per year ongoing to sustain the myGov platform and identify potential enhancements. A further $50 million will improve usability, safety and security of the myGov platform and ensure Services Australia can support people to protect their information and privacy.

Strengthening the Australian Taxation Office (ATO) against fraud

There will be $187.4 million to better protect taxpayer data and Commonwealth revenue against fraudulent attacks on the tax and superannuation systems. Funding will upgrade the ATO’s information and communications technologies and increase fraud prevention capabilities to manage increasing risk, prevent revenue loss, and support victims of fraud and cyber crime.

Looking after our veterans

Veterans’ claims processing is prioritised with an additional $186 million for staffing resources and $8.4 million to improve case management and protect against cyber risk. The Government will provide $222 million to harmonise veterans’ compensation and rehabilitation legislation.

A further $48.4 million will be available for Veterans’ Home Care and Community Nursing programs and $10.2 million to provide access to funded medical treatment for ill and injured veterans while their claims for liability are processed.

Figure caption: Results are from logistic regression models controlling for age, Hispanic or Latina/x ethnicity, marital status, parity, tobacco use, prenatal visit utilization, stillbirth, and placental abruption. Other race includes Alaska Native, American Indian, Chinese, Filipino, Guam/Chamorro Hawaiian, Indian, Japanese, Korean, Other Asian/Pacific Islander, Samoan, and Vietnamese. In the sample, 4100 patients had a history of substance use, and 33 760 had no history of substance use; 4636 had a urine toxicology test, and 2199 had any positive test result at labor and delivery. Error bars indicate 95% CIs.


Jarlenski M, Shroff J, Terplan M, Roberts SCM, Brown-Podgorski B, Krans EE. Association of Race With Urine Toxicology Testing Among Pregnant Patients During Labor and Delivery. JAMA Health Forum. 2023;4(4):e230441. doi:10.1001/jamahealthforum.2023.0441


Association of Race With Urine Toxicology Testing Among Pregnant Patients During Labor and Delivery

  • 1 Department of Health Policy and Management, University of Pittsburgh School of Public Health, Pittsburgh, Pennsylvania
  • 2 Friends Research Institute, Baltimore, Maryland
  • 3 Department of Obstetrics, Gynecology, and Reproductive Sciences, University of California, San Francisco
  • 4 Department of Obstetrics, Gynecology & Reproductive Sciences, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania
  • 5 Magee-Womens Research Institute, Pittsburgh, Pennsylvania

An estimated 16% of pregnant persons in the US use alcohol (10%) or an illicit substance (6%, including cannabis).1 Urine toxicology testing (UTT) is often performed at the time of labor and delivery for pregnant patients to evaluate substance use.2,3 We sought to elucidate associations between race and receipt of UTT and a positive test result among pregnant patients admitted to the hospital for delivery.

This cohort study followed the STROBE reporting guideline. Data were extracted from electronic medical records (EMRs) of patients with a live or stillbirth delivery between March 2018 and June 2021 in a large health care system in Pennsylvania. The study was approved by the University of Pittsburgh institutional review board. Informed consent was waived because the research constituted minimal risk. All patients presenting for delivery were verbally screened for substance use using questions adapted from the National Institute on Drug Abuse Quick Screen.4 Policy specified UTT would be performed for those with a positive screen result, history of substance use in the year prior to delivery, few prenatal visits, or abruption or stillbirth without a clear medical explanation.
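
The stated testing policy amounts to a disjunction of four triggers. As an illustration only (not the health system's actual implementation), here is a minimal Python sketch of that decision rule; the parameter names and the threshold for "few prenatal visits" are assumptions:

    def utt_indicated(positive_screen: bool,
                      substance_use_past_year: bool,
                      prenatal_visits: int,
                      unexplained_abruption_or_stillbirth: bool,
                      few_visits_threshold: int = 3) -> bool:
        """True if the stated policy would trigger urine toxicology testing.

        The letter does not quantify "few prenatal visits"; the threshold
        of 3 is a placeholder assumption.
        """
        return (positive_screen
                or substance_use_past_year
                or prenatal_visits < few_visits_threshold
                or unexplained_abruption_or_stillbirth)

    # Example: no positive screen, no substance use history, 8 prenatal
    # visits, no unexplained abruption/stillbirth -> no testing indicated.
    assert utt_indicated(False, False, 8, False) is False

Encoding a policy as an explicit predicate like this makes it straightforward to audit which trigger fired for any given patient.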

We studied 2 binary outcomes: the receipt of UTT (point-of-care presumptive testing) and a positive test result at delivery. The primary variable of interest, patient race, was conceptualized as a social construct that could manifest in biased or discriminatory delivery of health care. Self-reported race was categorized as Black, White, and other (Alaska Native, American Indian, Chinese, Filipino, Guam/Chamorro Hawaiian, Indian, Japanese, Korean, Other Asian/Pacific Islander, Samoan, and Vietnamese). Substance use history was defined as a diagnosis of alcohol, cannabis, opioid, or stimulant use or use disorder recorded in the EMR from 1 year before delivery through delivery. A positive UTT result was defined as at least 1 positive result on a test component, including amphetamines, barbiturates, benzodiazepines, buprenorphine, cocaine, cannabis, methadone, opiates, or phencyclidine. We used multivariable logistic regression models including race and substance use history, adjusting for age, Hispanic or Latina/x ethnicity, marital status, parity, tobacco use, prenatal visit utilization, stillbirth, and placental abruption. We derived mean predicted probabilities of outcomes by race and substance use history.5 Analyses were conducted using Stata, version 17.
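
To make the modelling step concrete, the following is a minimal re-implementation sketch in Python with statsmodels (the authors used Stata 17). The data file and column names are assumptions, and the letter does not state whether an interaction term or stratified fits were used; here, mean predicted probabilities are derived by marginal standardization, i.e. counterfactually assigning each race/history combination to all rows, predicting, and averaging:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical extract: one row per delivery, binary 0/1 outcome "utt".
    df = pd.read_csv("deliveries.csv")

    # Race x substance-history interaction plus the adjusters named above.
    formula = ("utt ~ C(race) * substance_hx + age + hispanic + married"
               " + parity + tobacco + prenatal_visits + stillbirth + abruption")
    fit = smf.logit(formula, data=df).fit()

    # Mean predicted probability of UTT for each race x history combination:
    # assign the combination to every row, predict, and average.
    for race in ["Black", "White", "Other"]:
        for hx in (0, 1):
            p = fit.predict(df.assign(race=race, substance_hx=hx)).mean()
            print(f"{race}, history={hx}: {p:.3f}")

Confidence intervals around such standardized probabilities are usually obtained with the delta method or by bootstrapping the entire fit-and-predict procedure.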

Among 37 860 patients (100% female; mean [SD] age, 29.8 [5.5] years), 16% were Black, 76% were White, and 8% were other race (Table). Overall, 11% had a history of substance use; opioid use was more common among White patients (40% of all substance use), whereas cannabis use was most common among Black patients (86% of all substance use). The mean predicted probability of having a UTT at delivery was highest among Black patients compared with White patients and other racial groups, regardless of history of substance use (Figure). For Black patients without a history of substance use, the mean predicted probability of receiving a UTT at delivery was 6.9% (95% CI, 6.4%-7.4%) vs 4.7% (95% CI, 4.4%-4.9%) among White patients. Among Black patients with a history of substance use, the mean predicted probability of receiving a UTT at delivery was 76.4% (95% CI, 74.8%-78.0%) vs 68.7% (95% CI, 67.3%-70.1%) among White patients. In contrast, among those with a history of substance use, the mean predicted probability of having a positive test result was 66.7% (95% CI, 64.8%-68.7%) among White patients and 58.3% (95% CI, 55.5%-61.1%) among Black patients.

In this cohort study, Black patients, regardless of history of substance use, had a greater probability of receiving a UTT at delivery compared with White patients and other racial groups. However, Black patients did not have a higher probability of a positive test result than other racial groups. Limitations of the study include an insufficient sample size to investigate other racial and ethnic minoritized groups, such as Alaska Native and American Indian patients, and data drawn from a single geographic area that may not generalize nationally. To address racial biases, health care systems should examine drug testing practices and adhere to evidence-based practices.

Accepted for Publication: February 4, 2023.

Published: April 14, 2023. doi:10.1001/jamahealthforum.2023.0441

Open Access: This is an open access article distributed under the terms of the CC-BY License. © 2023 Jarlenski M et al. JAMA Health Forum.

Corresponding Author: Marian Jarlenski, PhD, MPH, University of Pittsburgh School of Public Health, 130 DeSoto St, A619, Pittsburgh, PA 15261 ([email protected]).

Author Contributions: Dr Jarlenski and Mr Shroff had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Jarlenski, Terplan, Krans.

Acquisition, analysis, or interpretation of data: All authors.

Drafting of the manuscript: Jarlenski, Krans.

Critical revision of the manuscript for important intellectual content: All authors.

Statistical analysis: Shroff, Terplan, Brown-Podgorski, Krans.

Obtained funding: Jarlenski, Krans.

Administrative, technical, or material support: Krans.

Supervision: Jarlenski, Krans.

Conflict of Interest Disclosures: Dr Roberts reported receiving grants from the Foundation for Opioid Response Efforts and the University of California, San Francisco (UCSF) Bixby Center for Global Reproductive Health and National Center of Excellence in Women's Health outside the submitted work. Dr Krans reported receiving grants from the National Institutes of Health, Merck, and Gilead outside the submitted work. No other disclosures were reported.

Funding/Support: This work was supported by grant R01DA049759 from the National Institute on Drug Abuse (Dr Jarlenski and Krans).

Role of the Funder/Sponsor: The National Institute on Drug Abuse had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Data Sharing Statement: See Supplement.

NextSeq 2000 platform launched

The Next Generation Sequencing (UFS-NGS) Unit at the University of the Free State (UFS) – in a bid to advance genomics initiatives, training, and services – recently acquired new state-of-the-art technology with a wide range of advantages, from educational enhancement to cutting-edge research with applications in food security and healthcare.

The NextSeq 2000 platform boasts a higher capacity of up to 350 and 550 gigabytes of data on the P3 and the recently released P4 flow cell, respectively, with a unique chemistry coupled with a variety of built-in applications. Its flexibility will allow the university to cater for both state-funded research projects and private collaborations, positioning the institution as a hub for cutting-edge genomics research and innovation.
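
To put those flow-cell figures in perspective, raw output translates into average sequencing depth roughly as total bases sequenced divided by target genome size. A back-of-envelope sketch in Python, assuming the quoted figures correspond to bases of sequence and using approximate genome sizes (idealized: it ignores duplicates, quality filtering, and multiplexing across samples):

    # Mean coverage ~= total bases sequenced / genome length.
    def mean_coverage(output_gigabases: float, genome_megabases: float) -> float:
        return (output_gigabases * 1e9) / (genome_megabases * 1e6)

    # Assumed sizes: human genome ~3,100 Mb; typical bacterial genome ~5 Mb.
    for flow_cell, gb in [("P3", 350.0), ("P4", 550.0)]:
        print(f"{flow_cell}: human ~{mean_coverage(gb, 3100):.0f}x, "
              f"bacterium ~{mean_coverage(gb, 5):,.0f}x")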

Prof Martin Nyaga, Head of the UFS-NGS Unit and Associate Professor in the Division of Virology, says acquiring such a next-level genomic instrument is a game changer and a step forward in the unit's agenda to advance genomic research initiatives, produce quality research data, and realise its vision of becoming a reputable genomic hub in Africa.

The NextSeq 2000 platform was launched by Prof Vasu Reddy, Vice-Chancellor: Research and Internationalisation, Prof Gert van Zyl, Dean: Faculty of Health Sciences, and Dr Glen Taylor, Senior Director: Directorate Research Development (DRD), on 8 April 2024.

Hands-on experience for researchers and students 

“Over the years, the UFS-NGS Unit has generated several thousand whole-genome/targeted sequences and metagenomic data sets of various pathogens, including rotaviruses and coronaviruses, among others, using the Illumina MiSeq. The NextSeq 2000 will not only be a research tool, but also an educational asset.

“This will serve to provide students and researchers with hands-on experience in genomics and molecular biology, equipping them with practical skills necessary for careers in various scientific and medical, biomedical, and agricultural disciplines that are relevant both nationally and globally. It will further enable collaborative genomic research initiatives among researchers at the UFS and globally, especially in Africa,” says Prof Nyaga.

The UFS-NGS Unit team, which previously operated on a MiSeq™ sequencing platform, has already completed highly successful hands-on training with SEPARATIONS, the Illumina channel partner in South Africa, led by Drs Justin Mills and Natasha Kitchin. Several successful runs have since been sequenced using the P3 flow cells, generating more than 350 gigabytes of metagenomics data through this robust platform.

Critical for research quality and diagnostic precision

Acquiring this instrument, explains Prof Nyaga, was a sensible decision for the university, particularly given the existing familiarity with Illumina technology through the MiSeq platform.

“Its versatility caters to diverse research applications and varied research needs, including but not limited to agricultural and biomedical sciences, vaccine efficacy studies, human cancer research, and studies of transmissible diseases. Its high-throughput sequencing capabilities enable researchers to delve deep into the genetic makeup of tumour samples and identify critical mutations and markers linked to cancer development.”

Prof Nyaga says this knowledge is vital for tailoring personalised treatments and gaining insights into the underlying mechanisms of diseases, ultimately enhancing patient outcomes and, where applied in the agricultural sciences, strengthening food security.

Moreover, the NextSeq 2000's data accuracy ensures the reliability of genetic analyses, which is critical to maintaining research quality and diagnostic precision, and its scalability future-proofs the institution's capacity as its research endeavours grow. It empowers both the faculties of Health Sciences, and Natural and Agricultural Sciences, as well as the broader research network, to conduct groundbreaking studies and publish high-impact research papers.

This, in turn, will attract top-tier talent and multiple funding opportunities. The NextSeq 2000's ability to sequence rare and underrepresented clinical samples from the Free State and beyond will enable research to address health disparities, enhance diagnostics for region-specific diseases, and contribute to a more equitable health-care system. This aligns seamlessly with Vision 130 to champion global health initiatives and foster societal well-being.

Additionally, routine diagnostics stand to benefit significantly from the NextSeq 2000's capabilities. Its high-throughput sequencing and enhanced accuracy can streamline diagnostic processes, reducing patient waiting times and improving diagnostic precision. The result is better patient care and treatment outcomes.

• The UFS-NGS Unit organises annual data and bioinformatics workshops to enhance usage of genomics data produced by the sequencing platforms available in the UFS-NGS Unit. Researchers applying these techniques are encouraged to apply for the 2024 bioinformatics workshop and genomics clinical forum.
