
'Qualitative' and 'quantitative' methods and approaches across subject fields: implications for research values, assumptions, and practices

  • Open access
  • Published: 30 September 2023
  • Volume 58, pages 2357–2387 (2024)


  • Nick Pilcher (ORCID: orcid.org/0000-0002-5093-9345) &
  • Martin Cortazzi


There is considerable literature showing the complexity, connectivity and blurring of 'qualitative' and 'quantitative' methods in research. Yet these concepts are often represented in a binary way as independent dichotomous categories. This is evident in many key textbooks which are used in research methods courses to guide students and newer researchers in their research training. This paper analyses such textbook representations of 'qualitative' and 'quantitative' in 25 key resources published in English (supported by an outline survey of 23 textbooks written in German, Spanish and French). We then compare these with the perceptions, gathered through semi-structured interviews, of university researchers (n = 31) who work in a wide range of arts and science disciplines. The analysis of what the textbooks say compared to what the participants report they do in their practice shows some common features, as might be assumed, but there are significant contrasts and contradictions. The differences tend to align with some other recent literature to underline the complexity and connectivity associated with the terms. We suggest ways in which future research methods courses and newer researchers could question and positively deconstruct such binary representations in order to free up directions for research in practice, so that investigations can use quantitative or qualitative approaches, or both, in more nuanced practices that are appropriate to the specific field and given context of investigations.


1 Introduction: qualitative and quantitative methods, presentations, and practices

Teaching in research methods courses for undergraduates, postgraduates and newer researchers is commonly supported or guided through textbooks with explanations of 'qualitative' and 'quantitative' methods and cases of how these methods are employed. Student dissertations and theses commonly include methodology chapters closely aligned with these textbook representations. Almost without exception, the dissertations and theses we supervise and examine internationally have methodology chapters, and frequently these consider rationales and methods associated with positivist or interpretivist paradigms. Within such positivist or interpretivist frameworks, research approaches are amplified with elaborations of the rationale, the methods, and reasons for their choice over likely alternatives. In an apparent convention, related data are assigned as quantitative or qualitative in nature, with associated labelling as 'numerical' or 'textual'. The different types of data yield different values and interpretive directions, and are clustered conceptually with particular research traditions, approaches, and fields or disciplines. Frequently, these clusters are oriented around 'quantitative' and 'qualitative' conceptualizations.

This paper seeks to show how 'qualitative' and 'quantitative', whether stereotyped or more nuanced, as binary divisions presented in textbooks and published resources describing research methods, may not always accord with the perceptions and day-to-day practices of university researchers. Such common binary representations of quantitative and qualitative and their associated concepts may hide complexities, some of which are outlined below. Any binary divide between 'qualitative' and 'quantitative' therefore needs to be treated with caution, with awareness of its complexity and of disparities with some researchers' practices.

To date, as far as the present authors are aware, no study has first identified a range of binary representations of ‘quantitative’ and ‘qualitative’ methods and approaches in a literature review study of the many research methods textbooks and sources which guide students and then, secondly, undertaken an interview study with a range of established participant researchers in widely divergent fields to seek their understandings of ‘quantitative’ and ‘qualitative’ in their own fields. The findings related here complement and extend understanding of the complexities and convergences of the concepts in different disciplines. Arguably, this paper demonstrates how students and novice researchers should not be constrained in their studies by binary representations of the terms ‘quantitative’ and ‘qualitative’. They should feel free to use either (or neither) or both in strategic combinations, as appropriate to their fields.

1.1 Presentations

Characteristically, presentations in research methods textbooks distinguish positivist and interpretivist approaches or paradigms (e.g. Guba and Lincoln 1994 ; Howe 1988 ; Denzin and Lincoln 2011 ) or ‘two cultures’ (Goertz and Mahoney 2012 ) with associated debates or ‘wars’ (e.g. Creswell 1995 ; Morse 1991 ). Quantitative data are shown as ‘numbers’ gathered through experiments (Moore 2006 ) or mathematical models (Denzin and Lincoln 1998 ), whereas qualitative data are usually words or texts (Punch 2005 ; Goertz and Mahoney 2012 ), characteristically gathered through interviews or life stories (Denzin and Lincoln 2011 ). Regarding analysis, some sources claim that establishing objective causal relationships is key in quantitative analysis (e.g. Goertz and Mahoney 2012 ) whereas qualitative analysis uses more discursive and interpretative procedures.

Thus, much literature presents research in terms of two generally distinct methods—quantitative and qualitative—which many students are taught in research methods courses. The binary divide may seem to be legitimated in the titles of many academic journals. This division prevails as designated strands of separated research methods in courses which apparently handle both (cf. Onwuegbuzie and Leech 2005 ). Consequently, students may follow this seemingly stereotyped binary view or feel uncomfortable deviating from it. Arguably, PhD candidates need to demonstrate understanding of such concepts and procedures in a viva—or risk failure (cf. Trafford and Leshem 2002 ). The Cambridge Dictionary defines ‘quality’ as “how good or bad something is”, while ‘quantity' is “the amount or number of something, especially that can be measured” (Cambridge 2022 ). But definitions of ‘qualitative' can be elusive: regarding “a precise definition of qualitative research, and specifically… its distinctive feature of being “qualitative”, the literature is meager” (Aspers and Corte 2019 , p.139). Some observe a “paradox… that researchers act as if they know what it is, but they cannot formulate a definition” and that “there is no consensus about specific qualitative methods nor… data” (Aspers and Corte 2019 , p.40). In general, ‘qualitative research’ is an iterative process to discover more about a phenomenon (ibid.). Elsewhere, 'qualitative’ is defined negatively: "It is research that does not use numbers” (Seale 1999b , p.119). But this oversimplifies and hides possible disciplinary variation. For example, when investigating criminal action, numeric information (quantity) always follows an interpretation (De Gregorio 2014 ), and consequently this is a quantity of a quality (cf. Uher 2022 ).

Indeed, many authorities note the presence of elements of one in the other. For example, in analysis specifically, what are considered to be quantitative elements, such as statistics, are used in qualitative analysis (Miles and Huberman 1994 ). More generically, “a qualitative dimension is present in quantitative work as well” (Aspers and Corte 2019 , p.139). In ‘mixed methods’ research (cf. Tashakkori et al. 1998 ; Johnson et al. 2007 ; Teddlie and Tashakkori 2011 ) many researchers ‘mix’ the two approaches (Seale 1999a ; Mason 2006 ; Dawson 2019 ), either using multiple methods concurrently, or doing so sequentially. Mixed methods research logically depends on prior understandings of quantitative and qualitative concepts but this is not always obvious (e.g. De Gregorio 2014 ); for instance Heyvaert et al. ( 2013 ) define mixed methods as combining quantitative and qualitative items, but these key terms are left undefined. Some commentators characterize such mixing as a skin, not a sweater to be changed every day (Marsh and Furlong 2002 , cited in Grix 2004 ). In some disciplines, these terms are often blurred, interchanged or conjoined. In sociology, for instance, “any quality can be quantified. Any quantity is a quality of a social context, quantity versus quality is therefore not a separation” (Hanson 2008 , p.102) and characterizing quantitative as ‘objective’ and qualitative as ‘subjective’ is held to be false when seeking triangulation (Hanson 2008 ). Additionally, approaches to measuring and generating quantitative numerical information can differ in social sciences compared to physics (Uher 2022 ). Indeed, quantity may consist of a ‘multitude’ for divisible aspects and a ‘magnitude’ for indivisible aspects (Uher 2022 ). Notably, “the terms ‘measurement’ and ‘quantification’ have different meanings and are therefore prone to jingle-jangle fallacies” (Uher 2022 ) where individuals use the same words to denote different understandings (cf. Bakhtin 1986 ). Similarly, the words ‘unit’ and ‘scale’ carry multiple meanings in different sciences, and the key principles of numerical traceability and data generation traceability arguably need to be applied more to social sciences and psychology (Uher 2022 ). The interdependence of the terms means any quantity is grounded in a quality of something, even if the inverse does not always apply (Uher 2022 ).

1.2 Practices

The present paper compares representations found in research methods textbooks with the reported practices of established researchers given in semi-structured interviews. The differences revealed between what the literature review of methods texts showed and what the interview study showed both underline and extend this complexity, with implications for how such methodologies are approached and taught. The interview study data (analysed below) show that many participant researchers in disciplines commonly located within an ostensibly ‘positivist’ scientific tradition (e.g. chemistry) are, in fact, using qualitative methods as scientific procedures (contra Tashakkori et al. 1998 ; Guba and Lincoln 1994 ; Howe 1988 ; Lincoln and Guba 1985 ; Teddlie and Tashakkori 2011 ; Creswell 1995 ; Morse 1991 ). These interview study data also show that many participant researchers use what they describe as qualitative approaches to provide initial measurements (geotechnics; chemistry) of phenomena before later using quantitative procedures to measure the quantity of a quality (cf. Uher 2022 ). Some participant researchers also say they use quantitative procedures to reveal data which they subsequently interpret and understand using qualitative approaches (biology; dendrology), through their creative imaginations or experience (contra e.g. Hammersley 2013 ). Participant researchers in ostensibly ‘positivist’ areas describe themselves as doubting ‘facts’ measured by machines programmed by humans (thus showing they feel researchers are not outside the world looking in (contra e.g. Punch 2005 )) or doubting the certainty of quantitative data over time (contra e.g. Punch 2005 ). Critically, the interview study data show that these participant researchers often engage in debate over what a ‘number’ is and the extent to which ‘numbers’ can be considered ‘quantitative’.
For example, the data show how a mathematician considers that many individuals do not know what they mean by the word ‘quantitative’, and an engineer interprets any numbers involving human judgements as ‘qualitative’. Further, both a chemist and a geotechnician routinely define and use ‘qualitative’ methods and analysis to arrive at numerical values (contra e.g. Davies and Hughes 2014 ; Denzin and Lincoln 2011 ).

Such data refute many textbook and key source representations of quantitative and qualitative as being binary and separately ringfenced entities as shown in the literature review study below (contra e.g. Punch 2005 ; Goertz and Mahoney 2012 ). Nevertheless, they resonate with much recent and current literature in the field (e.g. Uher 2022 ; De Gregorio 2014 ). They also arguably extend the complexities of the terms and approaches. In some disciplines, these participant researchers only do a particular type of research and never need anything other than clear ‘quantitative’ definitions (Mathematics), and some only ever conduct research involving text and never numbers (Literature). Moreover, some participant researchers consider certain aspects lie outside the ‘qualitative’ or ‘quantitative’ (the theoretical in German Literature), or do research which they maintain does not contain ‘knowledge’ (Fine-Art Sculpture), while others outline how they feel they do foundational conceptual research which they believe comes at a stage before any quantity or quality can be assessed (Philosophy). Indeed, of the 31 participant researchers we spoke to, nine of them considered the terms ‘quantitative’ and ‘qualitative’ to be of little relevance for their subject.

1.3 Outline of the two studies

This paper reports and discusses findings from a constructivist grounded approach interview study that interviewed experienced participant researchers (N = 31) in various disciplines (see Table 1 below) about their understandings of ‘qualitative’ and ‘quantitative’ in their subject areas. Findings from this interview study were compared with findings from a research methods literature review study that revealed many disparities with received and often binary presentations of the concepts in much key literature that informs student research methods courses. In the next section we outline the review criteria, the method of analysis, and our findings. The findings are grouped according to how the sources reviewed consider ‘quantitative’ and ‘qualitative’ approaches in relation to: positivism and constructivism; the nature of research questions; research methods; analysis; issues of reliability, validity and generalizability; and the value and worth of the different approaches. Following this, we outline the approach, method, and procedure adopted for the interviews with research participants; sampling and saturation; and analysis; alongside details of the participant researchers. Subsequently, Theme 2 focuses on contrasts of the interview data with ‘binary’ textbook and key source representations. Theme 3 focuses on what the interview data show about participant researcher perceptions of the value of ‘quantitative’ and ‘qualitative’ methods and approaches. This section outlines where, how, and sometimes why, participant researchers considered ‘quantitative’ and ‘qualitative’ methods and approaches to be (or not to be) useful to them. These interview study findings show a surprising range of understandings, usage, and often perceived irrelevance of the terms. In the Discussion section, these findings form the focus of comparison with the literature as well as a consideration of possible implications for approaching and teaching research methods.
In the conclusion we summarise the implications for research methods courses and for researchers in different disciplines and interdisciplinary contexts, discuss limitations, and suggest future research. Besides adding to the debate on how ‘quantitative’ and ‘qualitative’ are conceptualized and how they are related, the paper appeals to those delivering research methods courses and to novice researchers to consider the concepts as highly complex and overlapping, to loosen constraints, and to elaborate nuances of the commonplace binary representations of the terms.

2 Literature review study: some key textbooks and sources for teaching research methods

2.1 Review criteria

To identify how concepts are presented in key materials we undertook a literature review study by consulting research methods course reading lists, library search engines, the physical shelves of institutional libraries, and Google Scholar. We wanted to encompass textbooks and some key texts which are recommended to UG, PG Masters and PhD students, for example ‘textbooks’ like ‘Doing Your Research Project: A Guide for first-time researchers’ (Bell and Waters 2014 ) and ‘Introduction to Research Methods: A Practical Guide for Anyone Undertaking a Research project (5th Edition)’ (Dawson 2019 ). Such sources were frequently mentioned on reading lists and are freely available in many institutional libraries. We consulted seminal thinkers who have published widely on research methods, such as Denzin and Lincoln, or Creswell, but we also considered texts which are likely less known, such as ‘A tale of two cultures’ (Goertz and Mahoney 2012 ), and key articles such as ‘Five misunderstandings about case-study research’ (Flyvbjerg 2006 ). Students can freely find such sources, and are easily directed to them by supervisors. Although a more comprehensively robust search is possible, we nevertheless followed procedures and standard criteria for literature reviews (Atkinson et al. 2015 ).

2.2 Method of analysis

We assembled a total of 25 sources to look for a number of key tenets. We examined the sources for occurrence of the following: whether quantitative was described as positivist and qualitative was described as constructivist; whether quantitative was said to be science-based and qualitative was more reflective and non-science based; whether the research questions were presented as predetermined in quantitative methods and initially less focused in qualitative methods; whether quantitative methods were structured and qualitative methods were discussed as less structured; whether quantitative analysis focused on cause-effect type relationships and qualitative analysis was more exploratory; whether reliability, validity and generalizability were achieved through large numbers in quantitative research and through in-depth study in qualitative research; whether for particular subjects such as the sciences quantitative approaches were perceived to be of value (and qualitative was implied to have less value) and whether the converse was the case for other subjects such as history and anthropology; and whether mixed methods were considered possible or not possible. The 25 sources are detailed in Appendix 1 . As a confirmatory but less detailed exercise, and also detailed in Appendix 1 , we checked a further 23 research methods textbooks in German, Spanish and French, authored in those languages (rather than translations from English).

2.3 Findings

Overall, regarding what quantitative and qualitative approaches, methods and analysis are, we found many key, often binary, representations in this literature review. We outline these below.

2.4 Positivism and constructivism

Firstly, 20 of the sources we reviewed stated that quantitative is considered positivist, and qualitative constructivist (e.g. Tashakkori et al. 1998 ; Guba and Lincoln 1994 ; Howe 1988 ; Lincoln and Guba 1985 ; Teddlie and Tashakkori 2011 ; Creswell 1995 ; Morse 1991 ). Even if not everyone doing quantitative research (e.g. in sociology) considers themselves a positivist (Marsh 1979 ), it is generally held that quantitative research is positivist. Here, 12 of the sources noted that quantitative is considered ‘scientific’, situating observers outside the world looking in, e.g. through gathering numerical data (Punch 2005 ; Davies and Hughes 2014 ) whereas qualitative “locates the observer in the world” (Denzin and Lincoln 2011 , p.3). Quantitative researchers “collect facts and study the relationship of one set of facts to another”, whereas qualitative researchers “doubt whether social ‘facts’ exist and question whether a ‘scientific’ approach can be used when dealing with human beings” (Bell and Waters 2014 , p.9).

2.5 The nature of research questions

Secondly, regarding research questions, “qualitative research… typically has… questions and methods… more general at the start, and… more focused as the study progresses” (Punch 2005 , p.28). In contrast, quantitative research uses “numerical data and typically… structured and predetermined research questions, conceptual frameworks and designs” (Punch 2005 , p.28). Of the sources we reviewed, 16 made such assertions. This understanding relates to the type, and nature, of data, which is in turn anchored to particular worldviews. Punch ( 2005 , pp. 3–4) writes of how “in teaching about research, I find it useful to approach the qualitative-quantitative distinction primarily through… the nature of the data. Later, the distinction can be broadened to include… ways of conceptualising the reality being studied, and methods.” Here, the nature of data influences approach: numbers are for quantitative, and not-numbers (commonly words) for qualitative. Similarly, for Miles et al. ( 2018 ) “the nature of qualitative data” centres “primarily on data in the form of words, that is, language in the form of extended text” (Miles et al. 2018 , no page). These understandings in turn relate to methods used.

Commonly, specific types of methods are said to be related to the type of approach adopted, and 18 of the sources we reviewed presented quantitative methods as being structured, and qualitative methods as less structured. For example, Davies and Hughes ( 2014 , p.23) claim “there are two principal options open to you: 1… quantitative research methods, using the traditions of science. 2… qualitative research, employing a more reflective or exploratory approach.” Here, quantitative methods are “questionnaires or structured interviews” whereas qualitative methods are those “such as interviews or focus groups” (Dawson 2019 , no page given). Quantitative methods are more scientific, involve controlling a set of variables, and may involve experiments, although “qualitative researchers are agreed in their opposition to this definition of scientific research, or at least its application to social inquiry” (Hammersley 2013 , p. ix). As Punch ( 2005 , p.208) notes, “the experiment was seen as the basis for establishing cause-effect relationships between variables, and its outcome (and control) variables had to be measured.”

2.6 Analysis

Such understandings often relate to analysis, and 16 of the sources we reviewed presented quantitative analysis as being statistical and number-related, and qualitative analysis as being text-based. With quantitative methods, “the data is subjected to statistical analysis, using techniques… likely to produce quantified, and, if possible, generalizable conclusions” (Bell and Waters 2014 , p.281). Qualitative research, however, “calls for advanced skills in data management and text-driven creativity during the analysis and write-up” (Davies and Hughes 2014 ). Again, the data’s nature is key, and whilst qualitative analysis may condense data, it does not seek numbers. Indeed, “by data condensation, we do not necessarily mean quantification”, however, “occasionally, it may be helpful to convert the data into magnitudes… but this is not always necessary” (Miles et al. 2018 , npg). Qualitative analysis may involve stages such as assigning codes, subsequently sorting and sifting them, isolating patterns, then gradually refining any assertions made and comparing them to other literature (Miles et al. 2018 ). This could involve condensing, displaying, then drawing conclusions from the data (Miles et al. 2018 ). In this respect, some sources consider qualitative and quantitative analysis broadly similar in overall goals, yet different because quantitative analyses use “well-defined, familiar methods; are guided by canons; and are usually more sequential than iterative or cyclical” (Miles et al. 2018 , npg). In contrast, “qualitative researchers are… more fluid and… humanistic” in meaning making (Miles et al. 2018 , npg). Here, both approaches seek causation and may attempt to reveal ‘cause and effect’, but qualitative approaches often seek multiple and interacting influences and effects, and are less rigid (Miles et al. 2018 ). In quantitative inquiry, the search for causation relates to “causal mechanisms (i.e. how did X cause Y)” whereas in “the human sciences, this distinction relates to causal effects (i.e. whether X causes Y)” (Teddlie and Tashakkori 2011 , p.286). Similarly, “scientific research in any area… seeks to trace out cause-effect relationships” (Punch 2005 , p.78). In contrast, qualitative research seeks interpretative understandings of human behaviour, “not ‘caused’ in any mechanical way, but… continually constructed and reconstructed” (Punch 2005 , p.126).

2.7 Issues of reliability, validity and generalizability

Regarding reliability, validity and generalizability, 19 of the sources we reviewed presented ideas along the lines that quantitative research is understood to seek large numbers, so quantitative researchers “use techniques… likely to produce quantified and, if possible, generalizable conclusions” (Bell and Waters 2014 , p.9). This means quantitative “research researches many more people” (Dawson 2019 , npg). Given quantitative researchers aim “to discover answers to questions through the application of scientific procedures”, it is anticipated these procedures will “increase the likelihood that the information… will be reliable and unbiased” (Davies and Hughes 2014 , p.9). Conversely, qualitative researchers are considered “more concerned to understand individuals’ perceptions of the world” (Bell and Waters 2014 , p.281) and consequently aim for in-depth data with smaller numbers, “as it is attitudes, behaviour and experiences that are important” (Dawson 2019 , npg). Consequently, generalizability of data is not key, as qualitative research has its “emphasis on a specific case, a focused and bounded phenomenon embedded in its context” (Miles et al. 2018 , npg). Yet, such research is considered generalizable in theoretical insight if not actual data (Flyvbjerg 2006 ).

2.8 The value and worth of the different approaches

Regarding ‘value’ and ‘worth’, many see these as related to appropriacy for the question being researched. Thus, if questions call for more quantitative approaches, then these are of value, and if for more qualitative approaches, then those are; 6 of the sources we reviewed presented these views (e.g. Bell and Waters 2014 ; Punch 2005 ; Dawson 2019 ). This resonates with disciplinary orientations, where particular approaches are valued more in specific disciplines. History and Anthropology are seen as more qualitative, whereas Economics and Epidemiology may be more quantitative (Kumar 1996 ). Qualitative approaches are valuable for studying human behaviour and revealing in-depth pictures of peoples’ lived experience (e.g. Denzin and Lincoln 2011 ; Miles et al. 2018 ). Many consider there to be no real inherent superiority of one approach over another, and “asking whether quantitative or qualitative research is superior to the other is not a useful question” (Goertz and Mahoney 2012 , p.2).

Nevertheless, some give higher pragmatic value to quantitative research for studying individuals and people; neoliberal governments consistently value quantitative over qualitative research (Barone 2007 ; Bloch 2004 ; St Pierre 2004 ). Concomitantly, data produced by qualitative research is criticised by quantitative proponents “because of their problematic generalizability” (Bloor and Wood 2006 , p.179). However, other studies find quantitative researchers see qualitative methods and approaches positively (Pilcher and Cortazzi 2016 ). Some even question the qualitative/quantitative divide, and suggest “a more subtle and realistic set of distinctions that capture variation in research practice better” (Hammersley 2013 , p.99).

The above literature review of key texts is hardly exhaustive, but it shows a general outline of the binary divisions and categorizations that exist in many sources students and newer researchers encounter. Thus, despite the complex and blurred picture outlined in the introduction above, many key texts that students consult and that inform research methods courses present a binary understanding: quantitative research is positivist, focused on determining cause and effect, numerical or magnitude focused, uses experiments, and is grounded in an understanding that the world can be observed from the outside in. Conversely, qualitative research tends to be constructivist, focused on determining why events occur, is word- or text-based (even if these elements are sometimes measured by their magnitude in a number or numerical format), and is grounded in an understanding that the researcher is part of the world. The sciences and areas such as economics are said to tend towards the quantitative, and areas such as history and anthropology towards the qualitative.

We also note that our literature review focused on English-language textbooks, although we also examined outline details, descriptions, and contents lists of texts in German, Spanish and French. We find that these broadly confirm the perception of a division between quantitative and qualitative research, and we detail a number of them in Appendix 1. These examples are all research methods handbooks and student guides intended for undergraduates and postgraduates in the social sciences and humanities; many are interdisciplinary, but some are devoted specifically to psychology, health care, education, politics, and management. Among the textbooks and handbooks examined in other languages, more recent books pay attention to online research and uses of the internet and social media, and sometimes to big data and software for data analysis.

In these sources in languages other than English we find a massive predominance of two approaches (quantitative/qualitative) or three (adding mixed methods). These are invariably introduced and examined, with related theories, examples and cases, in exactly that order: quantitative; qualitative; mixed. Here there is perhaps the unexamined implication that this is the historical order of research method development and also of acceptability of use (depending on research purposes). Notably, Molina Marin (2020), writing with an orientation to Latin America, makes the point that most European writing about research methods is in English or German, while there are far fewer publications in Spanish and few with Latin American contextual relevance, which may limit epistemological perspectives. This point is evident in French and Spanish publications (much less so in German), where bibliographic details seem dominated by English-language publications (or translations from them). We now turn to outline our interview study.

5 Interview study

5.1 Approach and choice of method

We approached our interview study from a constructivist standpoint of exploring and investigating different subject specialists’ understandings of ‘quantitative’ and ‘qualitative’. Critically, we were guided by the key constructivist tenet that knowledge is not independent of the subjects seeking it (Olssen 1996), nor of the subjects using it. Extending from this, we considered interviews more appropriate than narratives or focus groups. Given the exploratory nature of our study, we considered interviews most suited because we wanted a free dialogue (cf. Bakhtin 1981) about how the terms are understood in their subject contexts, as opposed to their neutral dictionary definitions (Bakhtin 1986), rather than to focus on a specific point with many individuals. Specifically, we used ‘semi’-structured interviews. ‘Semi’ can mean ‘half in quantity or value’ but also ‘to some extent: partly: incompletely’ (e.g. Merriam Webster 2022). Our interviews, following our constructivist and exploratory approach, aligned with the latter definition (see Appendix 2 for the interview study schedule). This loose ‘semi’ structure was deliberately designed to (and did) lead to interviews directed by the participants, who themselves often specifically asked what was meant by the questions. This created a highly technical dialogue (Buber 1947) focused on the subject.

5.2 Sampling and saturation

Our sampling combined purposive and snowball sampling (Sharma 2017; Levitt et al. 2018). Initially, participants were purposively identified by subject, given the project sought to understand different subject perspectives on ‘qualitative’ and ‘quantitative.’ Later, a combined purposive and snowball sampling technique was used, whereby participants interviewed were asked if they knew others teaching particular subjects. Participant eligibility was prioritized according to subject, although participants generally also had extensive research experience (see Table 1). For most, English was their first language; where it was not, participants were proficient in English. English was chosen as the language of interview as it was most familiar to both participants and interviewer (Cortazzi et al. 2011).

Regarding saturation, some argue it occurs within 12 interviews (Guest et al. 2006), others within 17 (Francis et al. 2010). Arguably, however, saturation cannot be determined in advance of analysis and is “inescapably situated and subjective” (Braun and Clarke 2021, p.201). This critical role of subjectivity and context guided how we approached saturation, whereby it was “operationalized in a way consistent with the research question(s) and the theoretical position and analytic framework adopted” (Saunders et al. 2018, p.1893). We recognise that more could always be found, but we are satisfied that 31 participants provided sufficient data for our investigation. Indeed, our original intention was to recruit 20 participants, feeling this would provide sufficient saturation (Francis et al. 2010; Guest et al. 2006). However, when we reached 20, and as we had already started analysis (cf. Braun and Clarke 2021) while transcribing the interviews ourselves (Bird 2005), we wanted to explore understandings of ‘qualitative’ and ‘quantitative’ in further subject fields. As Table 1 shows, ‘English Literature’, ‘Philosophy’, and ‘Sculpture’ were only explored after interview 20. These additional subject fields added significantly (see below) to our data.

5.3 Analysis and participant researcher details

Our analysis followed Braun and Clarke’s (2006) thematic analysis. Given the study’s exploratory constructivist nature, we combined ‘top down’ deductive analysis for anticipated themes with ‘bottom up’ inductive analysis for any unexpected themes. The latter was similar to a constructivist grounded theory analysis (Charmaz 2010), whereby the transcripts were explored through close repeated reading so that themes could emerge from the bottom up. We deliberately did not use CAQDAS software such as NVivo, as we wanted to read the scripts manually in one lengthy Word document. We recognise that such software could also allow us to do this, but we were familiar with the approach we used, have found it effective over a number of years, and thus continued to use it here. We counted instances of themes through cross-checking after reading and discussing the transcripts, thereby heightening reliability and validity (Golafshani 2003). All interviews were undertaken with informed consent, and participants were assured all representation was anonymous (Christians 2011). The study was approved by relevant ethics committees. Table 1 above shows the subject area, years of experience, and first language of the participant researchers. We also bracket after each subject area whether we consider it ‘Science’, ‘Arts’, ‘Arts/Science’ or ‘Science/Arts’. This is of course subjective and in many ways contestable, but we categorised these subjects according to how we feel the methodology sources from the literature review study would categorize them.

5.4 Presentation of the interview study data compared with data from the literature review study

We present our interview study data in the three broad areas that emerged through analysis. Our approach to thematic analysis was to deductively code the interview transcripts manually under the three broad areas of: where data aligns with textbook and key source ‘binary’ representations; where the data contrasts with such representations; and where the data relates to interviewee perceptions of the value of ‘qualitative’ and ‘quantitative’. The latter relates to whether participant researchers expressed views that suggested they considered each approach to be useful, valuable, or not. We also read through the transcripts inductively with a view to being open to emerging and unanticipated themes. For each data citation, we note the subject field to show the range of subject areas. We later discuss these data in terms of their implications for research values, assumptions and practices and for their use when teaching about different methods. We provide illustrative citations and numbers of participant researchers who commented in relation to the key points below, but first provide an overview in Table 2 .

5.4.1 Theme 1: Alignments with ‘binary’ textbook and key source representations

The data often aligned with textbook representations. Seven participant researchers explicitly stated, or alluded to, the representation that ‘quantitative’ is positivist and seeks objectivity whereas ‘qualitative’ is more constructivist and subjective. For example: “the main distinction… is that qualitative is associated with subjectivity and quantitative being objective.” This was because “traditionally quantitative methods they’ve been associated with the positivist scientific model of research whereas qualitative methods are rooted in the constructivist and interpretivist model” (Psychology). Similarly, “quantitative methods… I see that as more… logical to a scientific mode of generating knowledge so… largely depends on numbers to establish causal relations… qualitative, I want to more broadly summarize that as anything other than numbers” (Communication Studies). One Statistics researcher had “always associated quantitative research more with statistics and numbers… you measure something… I think qualitative… you make a statement… without saying to what extent so… so you run fast but it’s not clear how fast you actually run…. that doesn’t tell you much because it doesn’t tell you how fast.” One Mathematics participant researcher said mathematics was “super quantitative… more beyond quantitative in the sense that not only is there a measurement of size in everything but everything is defined in… really careful terms… in how that quantity kind of interacts with other quantities that are defined so in that sense it’s kind of beyond quantitative.” Further, this applied at pre-data and data integration stages. Conversely, ‘qualitative’ “would be more a kind of verbalistic form of reasoning or… logic.”

Another representation, noted by four participant researchers, was that ‘quantitative’ has structured predetermined questions whereas ‘qualitative’ has initially general questions that become more focused as research proceeds. For example, in Tourism, “with qualitative research I would go with open ended questions whereas with quantitative research I would go with closed questions.” This was because ‘qualitative’ was more exploratory: “quantitative methods… I would use when the parameters… are well understood, qualitative research is when I’m dealing with topics where I’m not entirely sure about… the answers.” As one Psychology participant researcher commented: “the main assumption in quantitative… is one single answer… whereas qualitative approaches embrace… multiplicity.”

Nineteen participant researchers considered ‘quantitative’ to involve numbers whereas ‘qualitative’ was anything except numbers. For example, “quantitative research… you’re generating numbers and the analysis is involving numbers… qualitative is… usually… text-based looking for something else… not condensing it down to numbers” (Psychology). Similarly, ‘quantitative’ was “largely… numeric… the arrangement of larger scale patterns” whereas, “in design field, the idea of qualitative…is about the measure… people put against something… not [a] numerical measure” (Design). One participant researcher elaborated about Biology and Ecology, noting that “quantitative it’s a number it’s an amount of something… associated with a numerical dimension… whereas… qualitative data and… observations… in biology…. you’re looking at electron micrographs… you may want to describe those things… purely in… QUALitative terms… and you can do the same in… Ecology” (Human Computer Interaction). One participant researcher also commented on the magnitude of ‘quantitative’ data often involving more than numbers, or having a complex involvement with numbers: “I was thinking… quantitative… just involves numbers…. but it’s not… if… NVivo… counts the occurrence of a word… it’s done in a very structured way…. to the point that you can even… then do statistical analysis” (Logistics).

Regarding mixed methods, data aligned with the textbook representations that there are two distinct ‘camps’ but also that these could be crossed. Six participants felt that opposing camps and paradigms existed. For example, in Nursing: “it does feel quite divided in Nursing I think you’re either a qualitative or a quantitative researcher there’s two different schools… yeah some people in our school would be very anti-qualitative.” Similarly, in Music one participant researcher felt “it is very split and you’ll find… some people position themselves in one or the other of those camps and are reluctant to consider the other side.” In Psychology, “yes… they’re quite… territorial and passionately defensive about the rightness of their own approaches so there’s this… narrative that these two paradigms… of positivistic and interpretivist type… cannot be crossed… you need to belong to one camp.” Also, in Communication Studies, “I do think they are kind of mutually exclusive although I accept… they can be combined… but I don’t think they, they fundamentally… speak to each other.” One Linguistics participant researcher felt some Linguists were highly qualitative and never used numbers, but “then you have… the corpus analysts who quantify everything and always under the headline ‘Corpus linguistics finally gets to the point… where we get rid of researcher bias; it objectifies the analysis’ because you have big numbers and you have statistical values and therefore… it’s led by the data not by the researcher.” This participant researcher found such striving for objectivity a “very strange thing” as any choice was based on previously argued ideas, which themselves could not be objective: “because all the decisions that you need to put into which software am I using, which algorithm am I using, which text do I put in…. this is all driven by ideas.”

Nevertheless, three participant researchers felt the approaches were not diametrically opposed. For example, the Psychology participant researcher cited immediately above felt people’s views could change: “some people although highly defensive over time… may soften their view as mixed method approaches become more prominent.” More flexibly, a Historian commented “I don’t feel very concerned by the division between qualitative and quantitative; I think they’re just two that are separate sometimes complementary approaches to study history.” In Translation and Interpreting, one participant researcher said methods could be quantitative but have qualitative analysis, saying one project had: “an excellent use of quantitative tools… followed by not a qualitative method but qualitative analysis of what that implied.” Thus, much of the data did align with the binary representations of the key textbooks reviewed above, and also with the representation that approaches could be combined.

5.4.2 Theme 2: Contrasts with ‘binary’ textbook and key source representations

One recurrent contrast with common textbook representations was where both qualitative and quantitative were used in some sciences; nine participant researchers felt this. For example, in Geotechnics, when ascertaining soil behaviour: “the first check, the Qualitative check is to look whether those [the traditional and new paths of soil direction] bear resemblance, [be] coz if that doesn’t have that shape how can I expect there to be a quantitative comparison or… fit.” Both qualitative and quantitative approaches combined helped “rule out coincidence” and using both represented “a check which moves through qualitative… to quantitative.” Quantitative was a “capital Q for want of a better expression” and consisted of ‘bigger numbers’, which constituted “the quantitative or calculated strength.” However, this ‘capital Q’ quantitative data aimed to quantify a qualitatively measured numerically estimated phenomenon. So both were numerical. Nevertheless, over the long-term, even the quantitative became less certain because: “when you introduce that time element… you create… circumstances in which you need to be careful with the way you define the strength… different people have come up with different values… so the quantitative match has to be done with an element of uncertainty.”

Similarly, in Chemistry, both qualitative and quantitative methods and analysis were used, where “the qualitative is the first one, and after you have the other ones [I—Right to kind of verify] if… if you need that.” Both were used because, “we need to know what is there and how much of each component is there… and a knowledge of what is there is a qualitative one, how much of each one is a quantitative one.” Moreover, “they are analysed sometimes by the same technique” which could be quantitative or qualitative: “[I—and chromatography, again… would that be qualitative or quantitative or both?] Both, both… the quantitative is the area of the peak, the qualitative is the position in which this characteristic appears.” Here, both were key, depending on the research goal: “we… use them according to what we need… sometimes it’s enough to detect [qualitative] … other times you need to know how much [quantitative]”.

For Biology also, both were key: “quantitative is the facts and… qualitative is the theory you’re trying to make fit to the facts you can’t do it the other way around… the quantitative data… just doesn’t tell you anything without the qualitative imagination of what does it mean?” Inversely, in an area commonly understood as quantitative, Statistics, the qualitative was an initial, hypothetical stage requiring later quantitative testing. For example: “very often the hypothesis is a qualitative hypothesis” and then, “you would test it by putting in all sorts of data and then the test result would give you a p-value… and the p-value of course is quantitative because that’s a number.”

In Engineering, both helped research sound frequencies: “we need to measure the spectrum of the different frequencies… created… all those things were quantifiable, but then we need to get participants to listen and tell us… which one do you prefer?… this is a qualitative answer.” Mathematical Biology also used both: qualitative for change in nature of a state, and quantitative for the magnitude of that change. Here: “quantitative changes the numerical value of the steady state but it doesn’t change its stability… but qualitative change is when you… change the parameters and you either change its stability or you change whether it exists or not… and that point over which you cross to change it from being stable to unstable is called a bifurcation point… that’s where I use quantitative and qualitative the most in my research.”

The idea of ‘quantitative’ involving large data sets was expressed; however, the ‘qualitative’ could help represent these. In Computing Mathematics one participant researcher commented that: “quantitative… I do almost 90% of the time…. calculating metrics… and using significance testing to determine whether the numbers mean anything.” Yet, this participant researcher also used qualitative representations for simplified visual representation of large number sets: “I think for me QUALitative work is almost always about visualizing things in a way that tries to illustrate the trends… so I’m not actually calculating numbers but I’m just saying if I somehow present it in in this way.” Concomitantly, ‘quantitative’ could be smaller scale. For example, in Architecture: “my expectation is it wouldn’t be valid until you have a certain quantity of response but that said [I] have had students use… quantitative analysis on a small sample.” Similarly, in History: “you could have a quantitative study of a small data set or a small… number of statistics I really think it’s determined by the questions… you’re asking.”

Interestingly, two participant researchers questioned their colleagues’ understandings of ‘quantitative’ and of ‘numbers’. For example, one Mathematician considered some researchers did not know what ‘quantitative’ meant, because “when they say quantitative… I think what they mean is the same as qualitative except it’s got numbers in it somewhere.” For example, “I’m talking to a guy who does research in pain and, so I do know now what he means by quantitative research, and what he means is that he doesn’t know what he means [both laugh] and he wants me to define what it means… I think he means he wants some form of modelling with data and… he’s not quite sure how to go about doing that.” For this Mathematician, engineers would “mean that purposefully when they talk about quantitative modelling” whereas “generically you know when politicians [consider these things] quantitative just means there’s a number in it somewhere.”

Three participant researchers felt that when ‘quantitative’ involved human elements or decisions, subjectivity was inevitable. One Logistics participant researcher felt someone doing materials research was “doing these highly quantitative analyses still there is a degree of subjectivity because… this involves human assessment… they’re using different photometric equipment… taking photos… what is the angle.” Another researcher in Sciences similarly noted, “I don’t know why people believe in machines so much because they’re built by humans and there’s so many errors.” An Engineer commented: “To me, just the involvement of humans… gives it a qualitative element no matter what.” For this researcher, with people’s ‘quantitative’ reaction times and memory recall, “I would call that again qualitative you know… yes we did quantify the reaction time… the correct number of answers, but… it’s a person… I could get somebody else now doing it and not get exactly the same answer, so that uncertainty of human participants to me make it a qualitative approach.” For this participant researcher, anything involving human participants was ‘qualitative’: “I would say anything that is measurable, but by measurable I mean physically measurable… or predictable through numbers is quantitative [and] anything that involves a judgment, therefore human participants… is qualitative.”

‘Qualitative’ was often highly subject-specific. For example, in Film Studies and Media—English, ‘qualitative’ was: “about… the qualities of particular texts…. I’ve read a lot about silence as a texture and a technique in cinema… so silence is a quality, and also what are the qualities of that silence.” One Sciences researcher felt ‘qualitative’ involved experience applied to interpreting data: “Qualitative I would define as using your own experience to see if the data makes sense… and… something that… cannot be measured so far by machine… like the shape of a tree.” One Historian also highlighted the importance of subject sub-branches, saying, “I’d situate myself in history but I guess you’d probably get a different response depending on… whether that historian saw themselves as a cultural historian or as a social and economic historian or… an intellectual historian.”

A fluidity regarding ‘quantitative’ and ‘qualitative’ was characterized. One Human Computer Interaction participant researcher commented, “I think sometimes people can use both terms quite loosely without really sort of thinking about [them].” Comparatively, one Psychology participant researcher commented that “even within the Qual[itative] people they disagree about how to do things [laughs] … so you have people talking about doing IPA [Interpretative Phenomenological Analysis] and they’re doing… and presenting it in completely different ways.” Another Psychologist felt using ‘quantitative’ and ‘qualitative’ as an ‘either/or’ binary division erroneously suggested all questions were answerable, whereas: “no method… can… answer this question… and this is something… many people I don’t think are getting is that those different methodologies come with huge limitations… and as a researcher you need… to appreciate… how far your work can go.” One Communication Studies participant researcher even perceived the terms were becoming less used in all disciplines, and that, “we’re certainly in a phase where even these labels now are becoming so arbitrary almost… that they’re not, not carrying a lot of meaning.” However, the terms were considered very context dependent: “I think I’d be very hesitant about… pigeonholing any particular method I’d want to look very closely at the specific context in which that particular method or methodology is being used.” Further, some concepts were considered challenging to align with textbook representations. One German Literature participant researcher, reflecting on how the ‘theoretical’ worked, concluded, “… the theoretical… I’m not sure whether… that is actually within the terms quantitative or qualitative or whether that’s a term… on a different level altogether.” Indeed, many participant researchers (nine in total across many subject areas, e.g. Design, Film and Media, Philosophy, Mathematical Biology) confirmed they were fully aware of the commonplace representations, but felt they did not apply to their own research, only using them to communicate with particular audiences (see below).

5.4.3 Theme 3: Perceptions on the value of ‘Quantitative’ and ‘Qualitative’ methods and approaches

As the data above show, many participant researchers valued both ‘quantitative’ and ‘qualitative’, including many scientists (in Geotechnics, Biology, Chemistry, Engineering). Many considered the specific research question key. For example: “I certainly don’t think quantitative bad, qualitative good: it’s horses for courses, yeah” (Tourism). Participant researchers in History and Music Education felt similarly, the latter commenting how “I do feel it’s about using the right tools which is why I wouldn’t want to… enter into this kind of vitriolic negative mud-slinging thing that does happen within the fields because I think people… get too entrenched in one or the other and forget about the fact that these are just various ways to approach inquiry.” Similarly, one Psychologist observed, “I’m always slightly irritated [laughs] when I hear people you know say ‘Oh I’m only doing… qualitative research’ or ‘I’m only doing quantitative research’… I think it’s the research question that should drive the methodological choices.” This participant researcher had “seen good quality in both quantitative and qualitative research.”

Five participant researchers considered quantitative approaches to be of little value if they were applied inappropriately. For example, a Translation and Interpreting participant researcher felt quantitative data-generating eye tracking technology was useful “for marketing,… product placement,… [or] surgeons.” However, for Translation and Interpreting, “I don’t think… it is a method that would yield results… you could find better in a more nuanced manner through other methods, interviews or focus groups, or even ethnographic observation.” One Chemist questioned the value of quantitative methods when the sample was too small. For example, when students were asked for their feedback on classes, and one student in 16 evaluated the classes badly, “4% it was one person [laughs] in 16, one person, but I received that evaluation and I think this is not correct… because sometimes…. I think that one person probably he or she didn’t like me… well, it’s life, so I think these aspects… may happen also but it’s with the precision of the system… the capacity of the system to detect and to measure.” Meaningfulness was held to be key: “When we do the analysis the sample has meaning”. Similarly, a Theoretical Physicist felt quantitative approaches were unsuited to education: “in the context of education… we all produce data all the time… we grade students… we assess creativity… people will say… ‘you measure somebody's IQ using this made-up test and you get this kind of statis[tic]..’ and then you realize that all of those things are just bogus… or at least… doesn't measure anything of any real serious significance.” Comparatively, one participant researcher in Design felt ‘quantitative’ risked leading “to stereotypes”; for example, when modern search engines use quantitative data to direct people to particular choices, “There’s potential there to constrain kind of broader behaviours and thinking… and therefore it can become a programmer in its own right.” One Mathematical Biologist commented how statistics can be misused, and how a popular Maths book related “How statistics are a light shone on a particular story from a particular angle to paint a picture that people want you to see but… it’s almost never the whole picture, it’s a half-truth, if you like, at best.”

Seven participant researchers considered that their disciplines valued quantitative over qualitative. This could be non-judgmental, and perhaps inherent in major areas of a discipline, as in Theoretical Physics, where precision is crucial, although this was said not to be ‘disparaging’: “theoretical physics… or physics in general… we… tend to think of ourselves as being very, very quantitative and very precise, and we think of qualitative, I guess… as being a bit vague, right?… which is not disparaging, because sometimes… we have to be a bit vague… and we're working things out.” In Psychology, however, despite “a call to advocate for more qualitative methods”, there, “definitely… is a bias toward quantitative… measures in psychology; all the high impact factor journals advocate for quantitative measures.” In Nursing, quantitative was also deemed paramount, with “the randomized control trial seen as being… you know the apex and… some researchers in our school would absolutely say it’s the only reliable thing… would be very anti-qualitative.”

Yet, four participant researchers were positively oriented towards anything qualitative. For example, one Tourism researcher felt that, “in an uncertain world, such as the one we’re living in today, qualitative research is the way forward.” Also, an Architect highlighted that in one of their studies, “I think the most important finding of my questionnaires was in the subjective comments.” One Music education participant researcher personally favoured qualitative approaches but regretted how their field was biased toward quantitative data, saying they had been informed: “ ‘what journals really care about is that p-value…’ and I remember… thinking… that’s a whole area of humanity… you’re failing to acknowledge.”

Nevertheless, side-stepping this debate, nine researchers considered the terms of little value, and simply irrelevant for their own research. One Film and Media—English participant researcher commented: “I have to say… these are terms I’m obviously familiar with, but… not terms… I… tend to really use in my own research… to describe what I do … mainly because everything that I do is qualitative.” As an English Literature participant researcher noted in email correspondence: “they are not terms we use in literary research, probably because most of what we do is interpretation of texts and substantiating arguments through examples. I have really only encountered these terms in the context of teaching and have never used them myself.” In the interview, this participant researcher commented that “I can imagine… they would be terms… quite common in the sciences and mathematics, but not Social Sciences and Arts.” A German Literature participant researcher felt similarly, commenting that in “German Literature… the term quantitative hadn’t even entered my vocabulary all the way through the PhD [laughs] … because… you could argue the methods in literary research are always qualitative.”

Complementing such perspectives, in Theoretical Physics ‘qualitative’ and ‘quantitative’ was: “not something that ever comes up… I don’t think I read a paper ever that will say we do qualitative research in any way, but I never… or hardly ever handle any data… I just have a bunch of principles that are sort of either taken to be true or are… a model… we’re exploring.” In Mathematics, ‘quantitative’ was simply never used as all mathematics research was quantitative: “I never use the word in the company of my colleagues, never, it’s a non-vocabulary word, for the simple reason that when everything is so well defined why do you need a generic term when you’ve got very specific reference points in the language that you’re using?”.

One Philosopher felt the terms did not fit conceptual analysis in philosophy, given that the object of consideration was uncertain: “I guess… I thought it didn’t fit conceptual analysis… you need to know what you’re dealing with in order to then do the quantitative or qualitative whereas in philosophy it feels like… you don’t quite know what you’re dealing with you’re trying to work out… what are rights?… What is knowledge? What is love?… and then look at its qualities.” For this researcher, Philosophy was tentatively pre-quantitative or pre-qualitative, because philosophy “feels like it’s before then.” The terms were not considered valuable for Philosophy or for the humanities generally: “in philosophy we wouldn’t use the term qualitative or quantitative research… you just use the tools… you need… to develop your argument and so you don’t see the distinction… I would say in the humanities that’s relatively similar.” Further, a Fine Art—Sculpture participant researcher said: “they’re not words I would use… partly because… I’m engaged with… through… research and… teaching… what I’d call practice research… and… my background’s in fine art, predominantly in making sculpture and that doesn’t contain knowledge.” Here, the participant researcher related how they may consider a student’s work hideous but if the student had learned a lot through creating the work, they should be rewarded. This participant researcher spoke of a famous sound artist, concluding, “if you asked him about qualitative and quantitative… it just wouldn’t come into his thing at all…. He doesn’t need to say well there were a thousand visitors plus you know it’s just ‘bang’… he wouldn’t think about those things… not as an artist.”

Six participant researchers said they only ever used the terms for particular audiences. For example, for ‘quantitative’ in Film and Media: “the only time is when it’s been related to public engagement that we’ve ever sort of produced anything that is more along quantitative lines,” and that “it was not complex data we were giving them.” In Fine-Art Sculpture, too, the terms were solely used with a funder, for example, to measure attendance at an exhibition for impact, but “that’s not the type of research that I’m involved with necessarily.” One Logistics participant researcher commented that “it really depends on the audience how you define qualitative or quantitative.” For this researcher, if communicating with “statisticians econometricians or a bunch of people who are number crunchers” then “they will be very precise on what quantitative is and what qualitative is” and would only recognise mathematical techniques as quantitative. Indeed, “they wouldn’t even recognize Excel as quantitative because it’s not that hard.” In contrast, for social scientists, Excel would be quantitative, as would “anything to do with numbers… I suppose you know a questionnaire where you have to analyse responses would be probably classed as quantitative.”

Conversely, a Mathematical Biology participant researcher commented they had been doing far more public outreach work, “using quantitative data so numbers… even with things that might often be treated in a qualitative way… so stuff which… is often treated I think qualitatively we try to quantify… I think partly because it’s easier to make those comparisons when you quantify something.” One researcher in Communication Studies said they advised a student that “it depends on your research objectives; if you are focusing on individual experiences… I think naturally that’s going towards qualitative, but if you’re … doing this research oriented to a leader of … [a] big number of people… for informing policy… then you need some sort of insights that can be standardized… so it’s a choice.”

Another Communication participant researcher felt that political shifts in the 1990s and 2000s meant a ‘third way’ now dominated, with a move towards hybridity and a breakdown of ‘qualitative’ and ‘quantitative’, everything now tied to neoliberalism. Therefore, since “the late 90s and early noughties I’ve seen this kind of hybridity in research methods almost as being in parallel with the third way there seems to be… no longer opposition between left and right everything… just happens to buy into neoliberalism so likewise… with research methods… there’s a breakdown of qual and quant.” Comparatively, a Historian felt underpinning power structures informed approaches, commenting that “the problem is not the terminology it’s the way in which power is working in the society in which we live in that’s the root problem it seems to me and what’s valued and what’s not.” A Philosopher felt numbers appealed to management even when qualitative data were more suitable: “I think management partly… are always more willing to listen to numbers… finding the right number can persuade people of things that actually… you think really a better persuasion would do something more qualitative and in context.” One Fine Art participant researcher felt ‘quantitative’ and ‘qualitative’ only became important when they focused on processes related to the Research Excellence Framework, but not for their research as such: “I guess we are using qualitative and quantitative things in the sense of moving ourselves through the process as academics but that’s not what I’d call research.”

6 Discussion: implications for teaching research methods

Research Methods teaching for undergraduate, postgraduate and newer researchers is commonly guided by textbook and seminal text understandings of what constitutes ‘qualitative’ and ‘quantitative’. Often, the two are treated in parallel, or interlinked, and used in combination or sequentially in research. But the relations between them are complex. The above analysis of the interview study with established participant researchers underlines and often extends this complexity, with implications for how such methodologies are approached and taught. Many of these participant researchers in disciplines commonly located within an ostensibly ‘positivist’ scientific tradition are, in fact, using qualitative methods as scientific procedures. They do so to provide initial measurements of phenomena before later using quantitative procedures to measure the quantity of a quality. They also use quantitative procedures to reveal data which they subsequently interpret and understand through qualitative approaches, drawing on their creative imaginations or experience. Participant researchers in ostensibly positivist disciplines describe themselves as doubting ‘facts’ measured by machines programmed by humans or doubting the certainty of quantitative data over time. Critically, these participant researchers engage in debate over what a ‘number’ is and the extent to which ‘numbers’ can be considered ‘quantitative’. One mathematician spoke of how many individuals do not know what they mean by the word ‘quantitative’, and an engineer interpreted any numbers involving human judgements as ‘qualitative’. Both a chemist and a geotechnician routinely define and use ‘qualitative’ methods and analysis to arrive at numerical values.

Although this analysis of participant researchers’ reported practices refutes many textbook and key research methods source representations of quantitative and qualitative as binary and separately ringfenced entities (contra e.g. Punch 2005; Goertz and Mahoney 2012), it resonates with much recent and current literature in the field (e.g. Uher 2022; De Gregorio 2014). In some disciplines, participant researchers only do a particular type of research and never need anything other than clear ‘quantitative’ definitions (Mathematics); others only ever conduct research involving text and never numbers (Literature). Further, other participant researchers considered that certain aspects lie outside the ‘qualitative’ or ‘quantitative’ (the ‘theoretical’ in German Literature), or they did research which they maintain does not contain ‘knowledge’ (Fine-Art Sculpture), while others do foundational ‘conceptual’ research which they claim comes at a stage before any quantity or quality can be assessed (Philosophy). Nine researchers considered the terms of little relevance at all to their subject areas.

This leads to two subsequent questions. Firstly, do the apparently emerging tensions and contradictions between commonplace textbook and key source presentations and on-the-ground participant researcher practices matter? Secondly, what kind of discourse might reframe the more conventional one?

Regarding whether these tensions and contradictions matter: in one practical way, perhaps not, since participant researchers in all these areas continue to be productive in their current research practices. Nevertheless, the foundations of the binary quantitative–qualitative divide are discourse expressions common to research methods courses. These expressions frame how the two terms are understood as the guide for novices doing research. This guiding discourse is evident in specifically designated chapters in research handbooks, in session titles in university research methods modules, and in glossary entries explaining research terms. The literature review study detailed above illustrates this. ‘Quantitative’ means numbers; ‘qualitative’ means words. ‘Quantitative’ connotes positivist, objective, scientific; ‘qualitative’ implies constructivist, subjective, non-science-based. Arguably, accepting this commonplace research methods understanding gives an apparent solidity which can sometimes be a false basis masking the complexities or inadequacies involved. Such masking can, in turn, allow certain agencies or individuals to claim their policies and practices are based on ‘objective’ numerical data when they are merely framing as ‘quantitative’ something that, as a Mathematician participant researcher observed above, simply has a number in it somewhere. Conventionally, limitations are mentioned in research studies, but often they seem ritualized remarks referring to insufficient numbers, restricted types of participants, or a constrained focus on a particular area. Rarely do research studies (let alone handbooks and guides for postgraduates) question a taken-for-granted understanding, such as whether the very idea of using numbers with human participants may mean the numbers are not objective. Ironically, it is occasionally within the field of Qualitative Inquiry itself that some of these issues are raised.
Concurrently, while the quantitative is promoted as ‘scientific’ and ‘objective evidence’, we find that some scientists often question the terms, or consciously set them aside in their practices.

Concerning what could replace the commonplace terms and reframe the research discourse environment: arguably, any discussion of ‘quantitative’/‘qualitative’ should be preceded by key questions of how the terms are understood by researchers. Hammersley (2013) has suggested the value of a more nuanced approach. As the Communication Studies participant researcher here commented, the two terms seem to be breaking down somewhat. Nevertheless, alongside the data and arguments here, we see some value in considering things as being ‘quantitative’ or ‘qualitative’, and other value in viewing them as separate. The terms can still be simply outlined, but not just as methodological listings of characteristics. As a critical point, outlines of methods should include insider practitioner views—illustrations of how the terms are used and understood by practising researchers in different disciplines (as in Table 2 above). This simple suggestion has benefits. When outlining approaches as qualitative or quantitative, we suggest space is devoted to how this is understood in different disciplines, together with the opportunity to question the issues raised by these understandings. This would help to position understandings of qualitative and quantitative within specific disciplinary contexts, especially in inter-disciplinary fields, and, implicitly, it encourages reflection on the objectivity and subjectivity evoked by the terms. Such discussion can be included in research methods texts and in research methods courses, dissertations and frameworks for viva examinations (Cortazzi and Jin 2021). Here, rather than starting with concrete definitions such as ‘Quantitative means X’, the terms should be outlined using subject-contextualised phrases such as ‘In the field of X, quantitative is understood to mean Y’.
In this way, quantitative and qualitative methods and approaches can be seen, understood and contextualised within their subject areas, rather than prescriptively outlined in a generic or common form. Furthermore, if a field has no use for such terms, this too can be stated, so that their use is not imposed unnecessarily. Discourse around the terms can be extended if they are seen in line with much current literature, and with the data above, which show their complexities and overlaps and go beyond the binary choices and representations of many textbooks.

7 Conclusion

This paper has presented and discussed data from an interview study with experienced participant researchers (n = 31) regarding their perceptions of ‘qualitative’ and ‘quantitative’ in their research areas. This interview study data was compared with findings from a literature review study of common textbooks and research methods publications (n = 25) that showed often binary and reified representations of the terms and related concepts. The interview study data show that many participant researcher understandings do in some ways align with the binary and commonplace representations of ‘qualitative’ and ‘quantitative’ presented in many research methods textbooks and sources from the literature review study. However, the interview study data more often illustrate how such representations are somewhat inaccurate regarding how research is undertaken in the different areas researched by the participant researchers. Rather, they corroborate much of the current literature that shows the blurring and complexity of the terms. Often, they extend this complexity. Sometimes they bypass it altogether, since many participant researchers consider the terms irrelevant to their research fields. For some researchers, the terms are simply valueless. We propose that future research methods courses could present and discuss the data above, perhaps using something akin to Table 2 as a starting point, so that students and novice researchers are able to loosen or break free of the chains of any stereotypical representations of such terms, or use them reflectively with awareness of discipline-specific usage. This could help them to advance their research, recognizing complex caveats related to the boundaries of what they do, what methods they use, and how to conduct research using both quantitative and qualitative approaches, as interpreted and used in their own fields. In multi- or inter-disciplinary research, such reflective awareness seems essential.
Future research could also study the impact of using the data here in research methods courses, so that such courses encompass both qualitative and quantitative methods (cf. Onwuegbuzie and Leech 2005) yet also question and contextualise such terms in specific subject areas, in order to free research from any constraints created by binary representations of the terms.

Whilst we interviewed 31 participant researchers to approach what seems a reasonable level of saturation, future research could clearly add to what we have found here by speaking to a wider range and larger number of researchers. The 25 research methods sources in English (supplemented by 23 sources in German, Spanish and French) examined here can also be expanded for a wider analysis of ‘quantitative’ and ‘qualitative’ in other languages for a more comprehensive European perspective. This strategy might ascertain likely asymmetries between the numerous English language texts (and their translations) and the relatively smaller numbers of texts written by national or local experts in other languages. As a world-wide consideration, given the relative paucity of published research guidance in many languages, this point is especially significant in relation to fitting research methods to local contexts and cultures without imposition. Translating and discussing the terms ‘qualitative’ and ‘quantitative’, in and beyond European languages, will need care to avoid binary stereotyped or formulaic expression and to maintain some of the insight, resonances and complexities shown here.

Aspers, P., Corte, U.: What is qualitative in qualitative research. Qual. Sociol. 42 (2), 139–160 (2019)

Atkinson, K.M., Koenka, A.C., Sanchez, C.E., Moshontz, H., Cooper, H.: Reporting standards for literature searches and report inclusion criteria: making research syntheses more transparent and easy to replicate. Res. Synth. Methods 6 (1), 87–95 (2015)

Autran, D., Bassel, G.W., Chae, E., Ezer, D., Ferjani, A., Fleck, C., Wolf, S.: What is quantitative plant biology? Quant. Plant Biol. (2021). https://doi.org/10.1017/qpb.2021.8

Bakhtin, M.: The dialogic imagination. University of Texas Press, Austin (1981)

Bakhtin, M.M.: Speech genres and other late essays (trans. V.W. McGee; eds. C. Emerson, M. Holquist). University of Texas Press, Austin (1986)

Barone, T.: A return to the gold standard? Questioning the future of narrative construction as educational research. Qual. Inq. 13 (4), 454–470 (2007)

Bell, J., Waters, S.: Doing your research project: a guide for first-time researchers, 6th edn. McGraw-Hill Education, London (2014)

Bird, C.M.: How I stopped dreading and learned to love transcription. Qual. Inq. 11 (2), 226–248 (2005)

Bloch, M.: A discourse that disciplines, governs, and regulates: the National Research Council report on scientific research in education. Qual. Inq. 10 (1), 96–110 (2004)

Bloor, M., Wood, F.: Keywords in qualitative methods: A vocabulary of research concepts. Sage, London (2006)

Braun, V., Clarke, V.: Using thematic analysis in psychology. Qual. Res. Psychol. 3 (2), 77–101 (2006). https://doi.org/10.1191/1478088706qp063oa

Braun, V., Clarke, V.: To saturate or not to saturate? Questioning data saturation as a useful concept for thematic analysis and sample-size rationales. Qualit. Res. Sport Exer. Health 13 (2), 201–216 (2021)

Cambridge Dictionary: English dictionary. (2022). Available at: https://dictionary.cambridge.org/dictionary/english/ Accessed January 2023

Chan, E.S., Okumus, F., Chan, W.: What hinders hotels’ adoption of environmental technologies: a quantitative study. Int. J. Hosp. Manag. 84 , 102324 (2020)

Charmaz, K.: Grounded theory. Objectivist and constructivist methods. In W. Luttrell (Ed.), Qualitative educational research: Readings in reflexive methodology and transformative practice (pp. 183–207). Routledge, New York. (2010)

Christians, C.G.: Ethics and politics in qualitative research. In: Denzin, N.K., Lincoln, Y.S. (eds.) The Sage handbook of qualitative research, pp. 61–80. Sage, Thousand Oaks, CA (2011)

Cortazzi, M., Pilcher, N., Jin, L.: Language choices and ‘blind shadows’: investigating interviews with Chinese participants. Qual. Res. 11 (5), 505–535 (2011)

Cortazzi, M., Jin, L.: The doctoral viva: questions for, with and to candidates (or supervisors). Int. J. Educat. Lit. Stud. 9 (4), 2–15 (2021)

Creswell, J.W.: Research design: Qualitative & quantitative approaches. Sage, Thousand Oaks, CA (1995)

Davies, M.B., Hughes, N.: Doing a successful research project: Using qualitative or quantitative methods. Macmillan International Higher Education, London (2014)

Dawson, C.: Introduction to Research Methods: A Practical Guide for Anyone Undertaking a Research Project, 5th edn. Robinson, London (2019)

De Gregorio, E.: Bridging “quality” and “quantity” in the study of criminal action. Qual. Quant. 48 (1), 197–215 (2014)

Denzin, N.K., Lincoln, Y.S. (eds.): The landscape of qualitative research: Theories and issues. Sage, Thousand Oaks (1998)

Denzin, N.K., Lincoln, Y.S. (eds.): The SAGE handbook of qualitative research, 4th edn. Sage, Thousand Oaks (2011)

Flyvbjerg, B.: Five misunderstandings about case-study research. Qual. Inq. 12 (2), 219–245 (2006)

Francis, J.J., Johnston, M., Robertson, C., Glidewell, L., Entwistle, V., Eccles, M.P., Grimshaw, J.M.: What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychol. Health 25 (10), 1229–1245 (2010)

Goertz, G., Mahoney, J.: A tale of two cultures. Princeton University Press, New Jersey (2012)

Golafshani, N.: Understanding reliability and validity in qualitative research. Qual. Rep. 8 (4), 597–607 (2003). http://www.nova.edu/ssss/QR/QR8-4/golafshani.pd

Grix, J.: The foundations of research. Palgrave Macmillan, New York (2004)

Guba, E.G., Lincoln, Y.S.: Competing paradigms in qualitative research. In: Denzin, N.K., Lincoln, Y.S. (eds.) Handbook of qualitative research, pp. 105–117. Sage, Thousand Oaks (1994)

Guest, G., Bunce, A., Johnson, L.: How many interviews are enough? An experiment with data saturation and variability. Field Methods 18 (1), 59–82 (2006)

Hammersley, M.: What is qualitative research? Bloomsbury Academic, London (2013)

Hanson, B.: Wither qualitative/quantitative? Grounds for methodological convergence. Qual. Quant. 42 , 97–111 (2008)

Heyvaert, M., Maes, B., Onghena, P.: Mixed methods research synthesis: definition, framework, and potential. Qual. Quant. 47 (2), 659–676 (2013)

Howe, K.R.: Against the quantitative-qualitative incompatibility thesis or dogmas die hard. Educ. Res. 17 (8), 10–16 (1988)

Johnson, R.B., Onwuegbuzie, A.J., Turner, L.A.: Toward a definition of mixed methods research. J. Mixed Methods Res. 1 (2), 112–133 (2007)

Kumar, R.: Research methodologies: a step-by-step guide for beginners. Sage, London (1996)

Levitt, H.M., Bamberg, M., Creswell, J.W., Frost, D.M., Josselson, R., Suarez-Orozco, C.: Journal article reporting standards for qualitative research in psychology: The APA publications and communications board task force report. Am. Psychol. 73 (1), 26–46 (2018)

Lincoln, Y.S., Guba, E.G.: Naturalistic inquiry. Sage, Thousand Oaks (1985)

Marsh, C.: Problems with surveys: method or epistemology? Sociology 13 (2), 293–305 (1979)

Marsh, D., Furlong, P.: A skin, not a sweater: ontology and epistemology in political science. Theory Methods Polit. Sci. 2 (1), 17–41 (2002)

Mason, J.: Mixing methods in a qualitatively driven way. Qual. Res. 6 (1), 9–25 (2006)

Miles, M.B., Huberman, A.M., Saldaña, J.: Qualitative data analysis: a methods sourcebook, 4th edn. Sage, Los Angeles (2018)

Miles, M.B., Huberman, A.M.: Qualitative data analysis: An expanded sourcebook. Sage, Thousand Oaks (1994)

Moore, N.: How to do research: a practical guide to designing and managing research projects, 3rd edn. Facet, London (2006)

Morse, J.M.: Approaches to qualitative-quantitative methodological triangulation. Nurs. Res. 40 (2), 120–123 (1991)

Olssen, M.: Radical constructivism and its failings: anti-realism and individualism. Br. J. Educ. Stud. 44 (3), 275–295 (1996)

Onwuegbuzie, A.J., Leech, N.L.: Taking the “Q” out of research: teaching research methodology courses without the divide between quantitative and qualitative paradigms. Qual. Quant. 39 (3), 267–295 (2005)

Pilcher, N., Cortazzi, M.: Dialogues: QUANT researchers on QUAL methods. Qual. Report 21 (3), 450–473 (2016)

Punch, K.: Introduction to social research quantitative and qualitative approaches. Sage, Thousand Oaks (2005)

Saunders, B., Sim, J., Kingstone, T., Baker, S., Waterfield, J., Bartlam, B., Jinks, C.: Saturation in qualitative research: exploring its conceptualization and operationalization. Qual. Quant. 52 (4), 1893–1907 (2018)

Seale, C.: Quality in qualitative research. Qual. Inq. 5 , 465–478 (1999a)

Seale, C.: The Quality of Qualitative Research. Sage, London (1999b)

Sharma, G.: Pros and cons of different sampling techniques. Int. J. Appl. Res. 3 (7), 749–752 (2017)

St Pierre, E.A.: Refusing alternatives: a science of contestation. Qual. Inq. 10 (1), 130–139 (2004)

Tashakkori, A., Teddlie, C., Teddlie, C.B.: Mixed methodology: Combining qualitative and quantitative approaches. Sage, Thousand Oaks (1998)

Teddlie, C., Tashakkori, A.: Mixed methods research: contemporary issues in an emerging field. In: Denzin, N.K., Lincoln, Y.S. (eds.) The Sage handbook of qualitative research, 4th edn., pp. 285–300. Sage, Thousand Oaks (2011)

Trafford, V., Leshem, S.: Starting at the end to undertake doctoral research: predictable questions as stepping stones. High. Educ. Rev. 35 (1), 31–49 (2002)

Uher, J.: Functions of units, scales and quantitative data: fundamental differences in numerical traceability between sciences. Qual. Quant. 56 (4), 2519–2548 (2022)

Merriam-Webster: Definition of ‘semi’. (2022). Available at https://www.merriam-webster.com/dictionary/semi

The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.

Author information

Authors and Affiliations

The Business School, Edinburgh Napier University, Edinburgh, UK

Nick Pilcher

University of Warwick, Coventry, UK

Martin Cortazzi

Contributions

All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Nick Pilcher and Martin Cortazzi. The first draft of the manuscript was written by NP along with MC and all authors commented on previous versions of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Nick Pilcher.

Ethics declarations

Conflict of interest.

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix 1: Literature review study

The table below details the binary representations and possibilities in the two left-hand columns; the right-hand column lists the numbers of the key sources that conveyed or adhered to these binary representations. The details of these sources and their respective numbers are listed below.

Table: Textbook and key source binary representations

Bell, J., & Waters, S. (2014). Doing your research project: A guide for first-time researchers (6th edn.). McGraw-Hill Education (UK).

Bloor, M., & Wood, F. (2006). Keywords in qualitative methods: A vocabulary of research concepts. London, UK: Sage Publications.

Bryman, A. (2008). Social research methods. Oxford, UK: Oxford University Press. [with caveats for many but still using the divide as ‘useful’]

Bryman, A., & Cramer, D. (2009). Quantitative data analysis with SPSS 14, 15 and 16: A guide for social scientists. London, UK: Routledge.

Ceglowski, D., Bacigalupa, C., & Peck, E. (2011). Aced out: Censorship of qualitative research in the age of "scientifically based research." Qualitative Inquiry, 17(8), 679–686.

Daly, K. J. (2007). Qualitative Methods for Family Studies and Human Development. London, UK: Sage.

Davies, M. B., & Hughes, N. (2014).  Doing a successful research project: Using qualitative or quantitative methods . Bloomsbury Publishing.

Dawson, C. (2019). Introduction to research methods: A practical guide for anyone undertaking a research project (5th edn.). Robinson.

Denzin, N. K., & Lincoln, Y. S. (Eds.). (1998). The landscape of qualitative research: Theories and issues. Thousand Oaks, CA: Sage Publications. [with caveat that original qual was positivist in root but not now]

Denzin, N. K., & Lincoln, Y. S. (2011). Introduction: The discipline and practice of qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (pp. 1–20). Thousand Oaks, CA: Sage.

Goertz, G., & Mahoney, J. (2012).  A tale of two cultures . Princeton University Press.

Grix, J. (2004). The foundations of research. New York, NY: Palgrave Macmillan.

Hammersley, M. (2007). The issue of quality in qualitative research. International Journal of Research & Method in Education, 30(3), 287–305.

Hammersley, M. (2013). What is qualitative research? London, UK: Bloomsbury Academic. [caveat that some qual do use causal analysis – and if you mix you abandon key assumptions associated with qualitative work]

Harman, W. W. (1996). The shortcomings of western science. Qualitative Inquiry, 2(1), 30–38.

Howe, K. R. (2011). Mixed methods, mixed causes? Qualitative Inquiry, 17(2), 166–171.

Mason, J. (2006). Mixing methods in a qualitatively driven way. Qualitative Research, 6(1), 9–25.

Miles, M. B., Huberman, A. M., & Saldaña, J. (2018).  Qualitative data analysis: A methods sourcebook . Sage publications.

Punch, K. (2005). Introduction to Social Research Quantitative and Qualitative Approaches. Sage.

Sandelowski, M. (1997). "To be of use": Enhancing the utility of qualitative research. Nursing Outlook, 45(3), 125–132 [caveat – does rebut many of the ideas but nevertheless outlines them as how the two are seen – e.g. of generalizability]

Seale, C. (1999). Quality in qualitative research. Qualitative Inquiry, 5, 465–478.

Silverman, D. (2016). Introducing qualitative research. Qualitative Research, 3(3), 14–25.

Tashakkori, A., Teddlie, C., & Teddlie, C. B. (1998). Mixed methodology: Combining qualitative and quantitative approaches (Vol. 46). Sage. [with the caveat that they talk about the differences as existing even though say they are not that wide]

Teddlie, C., & Tashakkori, A. (2011). Mixed methods research: Contemporary issues in an emerging field. In The Sage handbook of qualitative research, 4, 285–300.

Torrance, H. (2008). Building confidence in qualitative research: Engaging the demands of policy. Qualitative Inquiry, 14(4), 507–527.

1.1 Sources in languages other than English, and brief notes regarding their focus and content

Whilst not part of the literature review study, we also consulted the outline details, abstracts and contents lists of a number of sources in languages other than English. Brief notes follow each source. Each source, unless specifically noted, adhered to a similar binary treatment of quantitative and qualitative methods and approaches as the English language sources outlined above.

1.1.1 German

Blandz, M. (2021) Forschungsmethoden und Statistik für die Soziale Arbeit: Grundlagen und Anwendungen. 2nd edn. Stuttgart: Kohlhammer Verlag. [A multidisciplinary source that focuses mostly on quantitative and mixed methods; it follows the suggestion that a qualitative study can be a preliminary study for the main quantitative study]

Caspari, D., Klippel, F., Legutke, M. & Schram, K. (2022) Forschungsmethoden in der Fremdsprachendidaktik: Ein Handbuch. Tübingen: Narr Francke Attempto Verlag. [Focused on foreign language teaching; details quantitative, then qualitative, then mixed, all separately]

Döring, N. (2023) Forschungsmethoden und Evaluation in den Sozial- und Humanwissenschaften. 6th edn. Berlin: Springer. [Focused on the social sciences and humanities; as with the previous source it has separate chapters on quantitative and qualitative methods and a section on mixed methods, and contains some critical commentary]

Frankenberger, N. (Ed.) (2022) Grundlagen der Politikwissenschaft: Forschungsmethoden und Forschendes Lernen. Stuttgart: Kohlhammer Verlag. [Political science focused and based around distinctions between quantitative and qualitative approaches, each of which is elaborated with different methods; there is no obvious section on mixed methods]

Hussy, W., Schreier, M. & Echterhoff, G. (2013) Forschungsmethoden in Psychologie und Sozialwissenschaften für Bachelor. Berlin: Springer. [Focused on psychology and the social sciences at undergraduate level. It has separate parts on quantitative and on qualitative methods and then a chapter on mixed methods, identifying mixed methods as an emerging trend]

Niederberger, M. & Finne, E. (Eds.) (2021) Forschungsmethoden in der Gesundheitsförderung und Prävention. Berlin: Springer. [Focused on health and wellbeing; develops the roles of quantitative, qualitative and mixed methods (in combinations) in multidisciplinary, interdisciplinary and transdisciplinary research. Notes that much research is exclusively quantitative and that the social sciences are more qualitative or mixed. Argues that the quantitative versus qualitative divide has been surpassed by 'post-positivist' versus 'combined' thinking and that integrated approaches are now widely accepted]

1.1.2 Spanish

Campos-Arenas, A. (2014) Métodos mixtos de investigación. Bogotá: Magisterio Editorial. [Social science focused; devoted to mixed or combined approaches in Latin American contexts]

Hernández-Sampieri, R. & Mendoza Torres, C. P. (2018) Metodología de la investigación: Las rutas cuantitativa, cualitativa y mixta. Mexico City: McGraw-Hill. [Social science focused, with an introduction and conclusion built around 'three routes to research' that are thoroughly elaborated; quantitative is given eight chapters, qualitative three, and mixed just one]

León, O. G. & Montero García-Celay, I. (2020) Métodos de investigación en psicología y educación: Las tradiciones cuantitativas y cualitativas. 5th edn. Barcelona: McGraw-Hill España. [Psychology and education focused; based on relatively clear-cut distinctions between 'the two traditions' of quantitative and qualitative]

Molina Marín, G. (Ed.) (2020) Integración de métodos de investigación: Estrategias metodológicas y experiencias en salud pública. Bogotá: Universidad de Antioquia. [Public health focused; gives most attention to multi-method combinations and raises questions about the epistemological integrity of integrating different approaches]

Ortega-Sánchez, D. (Ed.) (2023) ¿Cómo investigar en didáctica de las ciencias sociales? Fundamentos metodológicos, técnicas e instrumentos de investigación. Barcelona: Octaedro. [Focused on education, research and the pedagogy of teaching the social sciences; covers quantitative, qualitative and mixed methods in Spanish contexts]

Páramo-Reales, D. (2020) Métodos de investigación cualitativa: Fundamentos y aplicaciones. Bogotá: Editorial Unimagdalena. [Social sciences; basic applications of qualitative approaches in Latin America]

Ponce, O. A. (2014) Investigación de métodos mixtos en educación. 2nd edn. San Juan: Publicaciones Puertorriqueñas. [Education and pedagogy; Puerto Rican context and entirely about mixed methods]

Vasilachis de Gialdino, I. (Ed.) (2009) Estrategias de investigación cualitativa. Barcelona: Editorial Gedisa. [Social sciences; much detail on research design; focuses exclusively on qualitative methods in Spanish contexts]

1.1.3 French

Bouchard, S. & Cyr, C. (Eds.) (2005) Recherche psychosociale: Pour harmoniser recherche et pratique. Québec: Presses de l'Université du Québec. [Focused on psychology and sociology. Despite its title about 'harmonizing' research it is mainly focused on quantitative approaches, with a small section on qualitative and nothing on mixed approaches]

Corbière, M. & Larivière, N. (2021) Méthodes qualitatives, quantitatives et mixtes dans la recherche en sciences humaines et de la santé. 2nd edn. Québec: Presses de l'Université du Québec. [Focused on the humanities and health care; highlights the division between quantitative, qualitative and mixed methods]

Devin, G. (Ed.) (2016) Méthodes de recherche en relations internationales. Paris: Presses de Sciences Po. [Focused on politics and international relations; almost wholly focused on quantitative methods, with only a little on qualitative]

Gavard-Perret, M.-L., Gotteland, D., Haon, C. & Jolibert, A. (2018) Méthodologie de la recherche en sciences de gestion: Réussir son mémoire ou sa thèse. Paris: Pearson. [Business and management focused and geared towards thesis research; draws clear distinctions between quantitative and qualitative approaches, with nothing on mixed methods]

Komu, S. C. S. (2020) Le recueil des méthodes en sciences sociales: Méthodologies en recherche. Manitoba: Sciences Script. [Social sciences focused; mostly quantitative methods with some attention to focus groups and participatory research]

Lepiller, O., Fournier, T., Bricas, N. & Figuié, M. (2011) Méthodes d'investigation de l'alimentation et des mangeurs. Versailles: Éditions Quae. [Focused on nutrition, health studies and diet; details quantitative and qualitative methods and has very little on mixed methods]

Millette, M., Millerand, F., Myles, D. & Latzko-Tóth, G. (2021) Méthodes de recherche en contexte numérique: Une orientation qualitative. Montréal: Presses de l'Université de Montréal. [Humanities focused; outlines quantitative and qualitative methods and, unusually, attends to qualitative investigation in digital contexts in Canada]

Moscarola, J. (2018) Faire parler les données: Méthodologies quantitatives et qualitatives. Caen: Éditions EMS. [A multidisciplinary focus on 'letting the data talk'; deals with quantitative methods, then qualitative methods, and also mixed methods]

Vallerand, R. J. (2000) Méthodes de recherche en psychologie. Québec: Gaëtan Morin. [Focused on psychology; emphasis on quantitative research, with a brief section on qualitative; Canadian contexts]

Appendix 2: Interview study schedule

2.1 Understandings of 'qualitative' and 'quantitative'

This research project is exploratory and delves into understandings of the specific terms 'quantitative' and 'qualitative' as they are perceived, used, and interpreted by researchers in very different fields. It is intended to shed light on the fields of quantitative and qualitative research. The idea for the research arises from a previous project in which the researcher interviewed quantitatively focused researchers and saw the terms 'qualitative' and 'quantitative' being used and interpreted very differently from how they are presented and understood in the research methods literature. It is expected that exploring these understandings further will add to the field by shedding light on the subtleties of how the terms are used and, in turn, help researchers make informed decisions about the optimum approaches and methods for their own research.

2.2 Interview questions

[Figure a: interview questions; not reproduced here]

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Pilcher, N., Cortazzi, M. 'Qualitative' and 'quantitative' methods and approaches across subject fields: implications for research values, assumptions, and practices. Qual Quant 58 , 2357–2387 (2024). https://doi.org/10.1007/s11135-023-01734-4


Accepted: 21 August 2023

Published: 30 September 2023

Issue Date: June 2024

DOI: https://doi.org/10.1007/s11135-023-01734-4


Keywords: Qualitative · Quantitative · Assumptions · Disciplines · Semi-structured interviews


Journal of the Advanced Practitioner in Oncology

Understanding and Evaluating Survey Research

Julie Ponto, PhD, APRN, AGCNS-BC, AOCNS®


Correspondence to: Julie Ponto, PhD, APRN, AGCNS-BC, AOCNS®, Winona State University, Graduate Programs in Nursing, 859 30th Avenue South East, Rochester, MN 55904. E-mail: [email protected]

Issue date 2015 Mar-Apr.

This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited and is for non-commercial purposes.

A variety of methodologic approaches exist for individuals interested in conducting research. Selection of a research approach depends on a number of factors, including the purpose of the research, the type of research questions to be answered, and the availability of resources. The purpose of this article is to describe survey research as one approach to the conduct of research so that the reader can critically evaluate the appropriateness of the conclusions from studies employing survey research.

SURVEY RESEARCH

Survey research is defined as "the collection of information from a sample of individuals through their responses to questions" (Check & Schutt, 2012, p. 160). This type of research allows for a variety of methods of recruiting participants, collecting data, and instrumentation. Survey research can use quantitative research strategies (e.g., using questionnaires with numerically rated items), qualitative research strategies (e.g., using open-ended questions), or both strategies (i.e., mixed methods). Because it is often used to describe and explore human behavior, survey research is frequently used in social and psychological research (Singleton & Straits, 2009).

Information has been obtained from individuals and groups through the use of survey research for decades. It can range from asking a few targeted questions of individuals on a street corner to obtain information related to behaviors and preferences, to a more rigorous study using multiple valid and reliable instruments. Common examples of less rigorous surveys include marketing or political surveys of consumer patterns and public opinion polls.

Survey research has historically included large population-based data collection. The primary purpose of this type of survey research was to obtain information describing characteristics of a large sample of individuals of interest relatively quickly. Large census surveys obtaining information reflecting demographic and personal characteristics and consumer feedback surveys are prime examples. These surveys were often provided through the mail and were intended to describe demographic characteristics of individuals or obtain opinions on which to base programs or products for a population or group.

More recently, survey research has developed into a rigorous approach to research, with scientifically tested strategies detailing who to include (representative sample), what and how to distribute (survey method), and when to initiate the survey and follow up with nonresponders (reducing nonresponse error), in order to ensure a high-quality research process and outcome. Currently, the term "survey" can reflect a range of research aims, sampling and recruitment strategies, data collection instruments, and methods of survey administration.

Given this range of options in the conduct of survey research, it is imperative for the consumer/reader of survey research to understand the potential for bias in survey research as well as the tested techniques for reducing bias, in order to draw appropriate conclusions about the information reported in this manner. Common types of error in research, along with the sources of error and strategies for reducing error as described throughout this article, are summarized in Table 1.

Table 1: Sources of Error in Survey Research and Strategies to Reduce Error [table not reproduced here]

The goal of sampling strategies in survey research is to obtain a sufficient sample that is representative of the population of interest. It is often not feasible to collect data from an entire population of interest (e.g., all individuals with lung cancer); therefore, a subset of the population or sample is used to estimate the population responses (e.g., individuals with lung cancer currently receiving treatment). A large random sample increases the likelihood that the responses from the sample will accurately reflect the entire population. In order to accurately draw conclusions about the population, the sample must include individuals with characteristics similar to the population.

It is therefore necessary to correctly identify the population of interest (e.g., individuals with lung cancer currently receiving treatment vs. all individuals with lung cancer). The sample will ideally include individuals who reflect the intended population in terms of all characteristics of the population (e.g., sex, socioeconomic characteristics, symptom experience) and contain a similar distribution of individuals with those characteristics. As discussed by Mady Stovall beginning on page 162, Fujimori et al. ( 2014 ), for example, were interested in the population of oncologists. The authors obtained a sample of oncologists from two hospitals in Japan. These participants may or may not have similar characteristics to all oncologists in Japan.

Participant recruitment strategies can affect the adequacy and representativeness of the sample obtained. Using diverse recruitment strategies can help improve the size of the sample and help ensure adequate coverage of the intended population. For example, if a survey researcher intends to obtain a sample of individuals with breast cancer representative of all individuals with breast cancer in the United States, the researcher would want to use recruitment strategies that would recruit both women and men, individuals from rural and urban settings, individuals receiving and not receiving active treatment, and so on. Because of the difficulty in obtaining samples representative of a large population, researchers may focus the population of interest to a subset of individuals (e.g., women with stage III or IV breast cancer). Large census surveys require extremely large samples to adequately represent the characteristics of the population because they are intended to represent the entire population.
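The sampling logic described above can be sketched in code. The following is a minimal illustration with entirely hypothetical data (the population, its size, and the `in_treatment` field are invented for the example); it shows why a sufficiently large simple random sample tends to reproduce the characteristics of the population it is drawn from:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical population: 10,000 individuals, ~40% currently receiving treatment
population = [{"id": i, "in_treatment": random.random() < 0.40}
              for i in range(10_000)]

# Simple random sample without replacement
sample = random.sample(population, k=500)

def treatment_rate(people):
    """Proportion of individuals with the characteristic of interest."""
    return sum(p["in_treatment"] for p in people) / len(people)

pop_rate = treatment_rate(population)
samp_rate = treatment_rate(sample)

# A large random sample's rate should sit close to the population rate
print(f"population: {pop_rate:.3f}  sample: {samp_rate:.3f}")
```

In practice, of course, researchers sample people rather than records, and must also contend with coverage and nonresponse error, which random selection alone cannot fix.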

DATA COLLECTION METHODS

Survey research may use a variety of data collection methods with the most common being questionnaires and interviews. Questionnaires may be self-administered or administered by a professional, may be administered individually or in a group, and typically include a series of items reflecting the research aims. Questionnaires may include demographic questions in addition to valid and reliable research instruments ( Costanzo, Stawski, Ryff, Coe, & Almeida, 2012 ; DuBenske et al., 2014 ; Ponto, Ellington, Mellon, & Beck, 2010 ). It is helpful to the reader when authors describe the contents of the survey questionnaire so that the reader can interpret and evaluate the potential for errors of validity (e.g., items or instruments that do not measure what they are intended to measure) and reliability (e.g., items or instruments that do not measure a construct consistently). Helpful examples of articles that describe the survey instruments exist in the literature ( Buerhaus et al., 2012 ).

Questionnaires may be in paper form and mailed to participants, delivered in an electronic format via email or an Internet-based program such as SurveyMonkey, or a combination of both, giving the participant the option to choose which method is preferred ( Ponto et al., 2010 ). Using a combination of methods of survey administration can help to ensure better sample coverage (i.e., all individuals in the population having a chance of inclusion in the sample) therefore reducing coverage error ( Dillman, Smyth, & Christian, 2014 ; Singleton & Straits, 2009 ). For example, if a researcher were to only use an Internet-delivered questionnaire, individuals without access to a computer would be excluded from participation. Self-administered mailed, group, or Internet-based questionnaires are relatively low cost and practical for a large sample ( Check & Schutt, 2012 ).

Dillman et al. ( 2014 ) have described and tested a tailored design method for survey research. Improving the visual appeal and graphics of surveys by using a font size appropriate for the respondents, ordering items logically without creating unintended response bias, and arranging items clearly on each page can increase the response rate to electronic questionnaires. Attending to these and other issues in electronic questionnaires can help reduce measurement error (i.e., lack of validity or reliability) and help ensure a better response rate.

Conducting interviews is another approach to data collection used in survey research. Interviews may be conducted by phone, computer, or in person and have the benefit of visually identifying the nonverbal response(s) of the interviewee and subsequently being able to clarify the intended question. An interviewer can use probing comments to obtain more information about a question or topic and can request clarification of an unclear response ( Singleton & Straits, 2009 ). Interviews can be costly and time intensive, and therefore are relatively impractical for large samples.

Some authors advocate for using mixed methods for survey research when no one method is adequate to address the planned research aims, to reduce the potential for measurement and non-response error, and to better tailor the study methods to the intended sample ( Dillman et al., 2014 ; Singleton & Straits, 2009 ). For example, a mixed methods survey research approach may begin with distributing a questionnaire and following up with telephone interviews to clarify unclear survey responses ( Singleton & Straits, 2009 ). Mixed methods might also be used when visual or auditory deficits preclude an individual from completing a questionnaire or participating in an interview.

FUJIMORI ET AL.: SURVEY RESEARCH

Fujimori et al. (2014) described the use of survey research in a study of the effect of communication skills training for oncologists on oncologist and patient outcomes (e.g., oncologists' performance and confidence and patients' distress, satisfaction, and trust). A sample of 30 oncologists from two hospitals was obtained, and although the authors provided a power analysis concluding that the number of oncologist participants was adequate to detect differences between baseline and follow-up scores, the conclusions of the study may not be generalizable to a broader population of oncologists. Oncologists were randomized to either an intervention group (i.e., communication skills training) or a control group (i.e., no training).
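The power analysis mentioned above can be illustrated with the textbook normal-approximation formula for the sample size needed in a paired (baseline vs. follow-up) comparison. The effect size, alpha, and power values below are illustrative assumptions only, not figures reported by Fujimori et al. (2014):

```python
import math
from statistics import NormalDist  # standard-library inverse-normal CDF

def paired_sample_size(effect_size: float,
                       alpha: float = 0.05,
                       power: float = 0.80) -> int:
    """Approximate n for a paired comparison via the normal approximation:
    n = ((z_{1-alpha/2} + z_{power}) / d) ** 2, rounded up."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = z.inv_cdf(power)           # value for the desired power
    return math.ceil(((z_alpha + z_beta) / effect_size) ** 2)

print(paired_sample_size(0.8))  # large effect: 13 pairs under this approximation
print(paired_sample_size(0.5))  # medium effect: 32 pairs
print(paired_sample_size(0.2))  # small effect: 197 pairs
```

Under this approximation a large standardized effect (d = 0.8) needs only about 13 paired observations, while a small effect (d = 0.2) needs nearly 200, which is why a sample of 30 oncologists can only be powered to detect moderate-to-large changes.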

Fujimori et al. ( 2014 ) chose a quantitative approach to collect data from oncologist and patient participants regarding the study outcome variables. Self-report numeric ratings were used to measure oncologist confidence and patient distress, satisfaction, and trust. Oncologist confidence was measured using two instruments each using 10-point Likert rating scales. The Hospital Anxiety and Depression Scale (HADS) was used to measure patient distress and has demonstrated validity and reliability in a number of populations including individuals with cancer ( Bjelland, Dahl, Haug, & Neckelmann, 2002 ). Patient satisfaction and trust were measured using 0 to 10 numeric rating scales. Numeric observer ratings were used to measure oncologist performance of communication skills based on a videotaped interaction with a standardized patient. Participants completed the same questionnaires at baseline and follow-up.

The authors clearly describe what data were collected from all participants. Providing additional information about the manner in which questionnaires were distributed (i.e., electronic, mail), the setting in which data were collected (e.g., home, clinic), and the design of the survey instruments (e.g., visual appeal, format, content, arrangement of items) would assist the reader in drawing conclusions about the potential for measurement and nonresponse error. The authors describe conducting a follow-up phone call or mail inquiry for nonresponders; using the Dillman et al. (2014) tailored design for survey research follow-up may have reduced nonresponse error.

CONCLUSIONS

Survey research is a useful and legitimate approach to research that has clear benefits in helping to describe and explore variables and constructs of interest. Survey research, like all research, has the potential for a variety of sources of error, but several strategies exist to reduce the potential for error. Advanced practitioners aware of the potential sources of error and strategies to improve survey research can better determine how and whether the conclusions from a survey research study apply to practice.

The author has no potential conflicts of interest to disclose.

  • 1. Bjelland I., Dahl A. A., Haug T. T., Neckelmann D. The validity of the Hospital Anxiety and Depression Scale: An updated literature review. Journal of Psychosomatic Research. 2002;52:69–77. doi: 10.1016/s0022-3999(01)00296-3.
  • 2. Buerhaus P. I., DesRoches C., Applebaum S., Hess R., Norman L. D., Donelan K. Are nurses ready for health care reform? A decade of survey research. Nursing Economics. 2012;30:318–330.
  • 3. Check J., Schutt R. K. Survey research. In: Check J., Schutt R. K., editors. Research Methods in Education. Thousand Oaks, CA: Sage Publications; 2012. pp. 159–185.
  • 4. Costanzo E. S., Stawski R. S., Ryff C. D., Coe C. L., Almeida D. M. Cancer survivors' responses to daily stressors: Implications for quality of life. Health Psychology. 2012;31:360–370. doi: 10.1037/a0027018.
  • 5. Dillman D. A., Smyth J. D., Christian L. M. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method. Hoboken, NJ: John Wiley & Sons; 2014.
  • 6. DuBenske L. L., Gustafson D. H., Namkoong K., Hawkins R. P., Atwood A. K., Brown R. L., Chih M.-Y., McTavish F., Carmack C. L., Buss M. K., Govindan R., Cleary J. F. CHESS improves cancer caregivers' burden and mood: Results of an eHealth RCT. Health Psychology. 2014;33:1261–1272. doi: 10.1037/a0034216.
  • 7. Fujimori M., Shirai Y., Asai M., Kubota K., Katsumata N., Uchitomi Y. Effect of communication skills training program for oncologists based on patient preferences for communication when receiving bad news: A randomized controlled trial. Journal of Clinical Oncology. 2014;32:2166–2172. doi: 10.1200/JCO.2013.51.2756.
  • 8. Ponto J. A., Ellington L., Mellon S., Beck S. L. Predictors of adjustment and growth in women with recurrent ovarian cancer. Oncology Nursing Forum. 2010;37:357–364. doi: 10.1188/10.ONF.357-364.
  • 9. Singleton R. A., Straits B. C. Approaches to Social Research. New York: Oxford University Press; 2009.

Methodology | Open access | Published: 11 October 2016

Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research

Stephen J. Gentles, Cathy Charles, David B. Nicholas, Jenny Ploeg & K. Ann McKibbon

Systematic Reviews, volume 5, Article number: 172 (2016)


Overviews of methods are potentially useful means to increase clarity and enhance collective understanding of specific methods topics that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness. This type of review represents a distinct literature synthesis method, although to date, its methodology remains relatively undeveloped despite several aspects that demand unique review procedures. The purpose of this paper is to initiate discussion about what a rigorous systematic approach to reviews of methods, referred to here as systematic methods overviews , might look like by providing tentative suggestions for approaching specific challenges likely to be encountered. The guidance offered here was derived from experience conducting a systematic methods overview on the topic of sampling in qualitative research.

The guidance is organized into several principles that highlight specific objectives for this type of review given the common challenges that must be overcome to achieve them. Optional strategies for achieving each principle are also proposed, along with discussion of how they were successfully implemented in the overview on sampling. We describe seven paired principles and strategies that address the following aspects: delimiting the initial set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology used to describe specific methods topics, and generating rigorous verifiable analytic interpretations. Since a broad aim in systematic methods overviews is to describe and interpret the relevant literature in qualitative terms, we suggest that iterative decision making at various stages of the review process, and a rigorous qualitative approach to analysis are necessary features of this review type.

Conclusions

We believe that the principles and strategies provided here will be useful to anyone choosing to undertake a systematic methods overview. This paper represents an initial effort to promote high quality critical evaluations of the literature regarding problematic methods topics, which have the potential to promote clearer, shared understandings, and accelerate advances in research methods. Further work is warranted to develop more definitive guidance.


While reviews of methods are not new, they represent a distinct review type whose methodology remains relatively under-addressed in the literature despite the clear implications for unique review procedures. One of the few examples to describe it is a chapter containing the reflections of two contributing authors in a book of 21 reviews on methodological topics compiled for the British National Health Service, Health Technology Assessment Program [1]. Notable is their observation of how the differences between methods reviews and conventional quantitative systematic reviews, specifically attributable to their varying content and purpose, have implications for defining what qualifies as systematic. While the authors describe general aspects of "systematicity" (including rigorous application of a methodical search, abstraction, and analysis), they also describe a high degree of variation within the category of methods reviews itself and so offer little in the way of concrete guidance. In this paper, we present tentative concrete guidance, in the form of a preliminary set of proposed principles and optional strategies, for a rigorous systematic approach to reviewing and evaluating the literature on quantitative or qualitative methods topics. For purposes of this article, we have used the term systematic methods overview to emphasize the notion of a systematic approach to such reviews.

The conventional focus of rigorous literature reviews (i.e., review types for which systematic methods have been codified, including the various approaches to quantitative systematic reviews [ 2 – 4 ], and the numerous forms of qualitative and mixed methods literature synthesis [ 5 – 10 ]) is to synthesize empirical research findings from multiple studies. By contrast, the focus of overviews of methods, including the systematic approach we advocate, is to synthesize guidance on methods topics. The literature consulted for such reviews may include the methods literature, methods-relevant sections of empirical research reports, or both. Thus, this paper adds to previous work published in this journal—namely, recent preliminary guidance for conducting reviews of theory [ 11 ]—that has extended the application of systematic review methods to novel review types that are concerned with subject matter other than empirical research findings.

Published examples of methods overviews illustrate the varying objectives they can have. One objective is to establish methodological standards for appraisal purposes. For example, reviews of existing quality appraisal standards have been used to propose universal standards for appraising the quality of primary qualitative research [12] or evaluating qualitative research reports [13]. A second objective is to survey the methods-relevant sections of empirical research reports to establish current practices in methods use and reporting, which Moher and colleagues [14] recommend as a means for establishing the needs to be addressed in reporting guidelines (see, for example [15, 16]). A third objective for a methods review is to offer clarity and enhance collective understanding regarding a specific methods topic that may be characterized by ambiguity, inconsistency, or a lack of comprehensiveness within the available methods literature. An example of this is an overview whose objective was to review the inconsistent definitions of intention-to-treat analysis (the methodologically preferred approach to analyzing randomized controlled trial data) that have been offered in the methods literature, and propose a solution for improving conceptual clarity [17]. Such reviews are warranted because students and researchers who must learn or apply research methods typically lack the time to systematically search, retrieve, review, and compare the available literature to develop a thorough and critical sense of the varied approaches regarding certain controversial or ambiguous methods topics.

While systematic methods overviews, as a review type, include both reviews of the methods literature and reviews of methods-relevant sections from empirical study reports, the guidance provided here is primarily applicable to reviews of the methods literature since it was derived from the experience of conducting such a review [ 18 ], described below. To our knowledge, there are no well-developed proposals on how to rigorously conduct such reviews. Such guidance would have the potential to improve the thoroughness and credibility of critical evaluations of the methods literature, which could increase their utility as a tool for generating understandings that advance research methods, both qualitative and quantitative. Our aim in this paper is thus to initiate discussion about what might constitute a rigorous approach to systematic methods overviews. While we hope to promote rigor in the conduct of systematic methods overviews wherever possible, we do not wish to suggest that all methods overviews need be conducted to the same standard. Rather, we believe that the level of rigor may need to be tailored pragmatically to the specific review objectives, which may not always justify the resource requirements of an intensive review process.

The example systematic methods overview on sampling in qualitative research

The principles and strategies we propose in this paper are derived from our experience conducting a systematic methods overview on the topic of sampling in qualitative research [ 18 ]. The main objective of that methods overview was to bring clarity to, and deepen understanding of, the prominent concepts related to sampling in qualitative research (purposeful sampling strategies, saturation, etc.). Specifically, we interpreted the available guidance, commenting on areas lacking clarity, consistency, or comprehensiveness (without proposing any recommendations on how to do sampling). This was achieved by a comparative and critical analysis of publications representing the most influential (i.e., highly cited) guidance across several methodological traditions in qualitative research.

The specific methods and procedures for the overview on sampling [ 18 ] from which our proposals are derived were developed by soliciting initial input from local experts in qualitative research and an expert health librarian (KAM), and through careful deliberation that continued throughout the review process. To summarize, in that review, we employed a transparent and rigorous approach to searching the methods literature, selected publications for inclusion according to a purposeful and iterative process, abstracted textual data using structured abstraction forms, and analyzed (synthesized) the data using a systematic multi-step approach featuring abstraction of text, summary of information in matrices, and analytic comparisons.

For this article, we reflected on both the problems and challenges encountered at different stages of the review and our means for selecting justifiable procedures to deal with them. Several principles were then derived by considering the generic nature of these problems, while the generalizable aspects of the procedures used to address them formed the basis of optional strategies. Further details of the specific methods and procedures used in the overview on qualitative sampling are provided below to illustrate both the types of objectives and challenges that reviewers will likely need to consider and our approach to implementing each of the principles and strategies.

Organization of the guidance into principles and strategies

For the purposes of this article, principles are general statements outlining what we propose are important aims or considerations within a particular review process, given the unique objectives or challenges to be overcome with this type of review. These statements follow the general format, “considering the objective or challenge of X, we propose Y to be an important aim or consideration.” Strategies are optional and flexible approaches for implementing the preceding principle. Thus, generic challenges give rise to principles, which in turn give rise to strategies.

We organize the principles and strategies below into three sections corresponding to processes characteristic of most systematic literature synthesis approaches: literature identification and selection; data abstraction from the publications selected for inclusion; and analysis, including critical appraisal and synthesis of the abstracted data. Within each section, we also describe the specific methodological decisions and procedures used in the overview on sampling in qualitative research [ 18 ] to illustrate how the principles and strategies for each review process were applied and implemented in a specific case. We expect this guidance and accompanying illustrations will be useful for anyone considering engaging in a methods overview, particularly those who may be familiar with conventional systematic review methods but may not yet appreciate some of the challenges specific to reviewing the methods literature.

Results and discussion

Literature identification and selection.

The identification and selection process includes search and retrieval of publications and the development and application of inclusion and exclusion criteria to select the publications that will be abstracted and analyzed in the final review. Literature identification and selection for overviews of the methods literature is challenging and potentially more resource-intensive than for most reviews of empirical research. This is true for several reasons that we describe below, alongside discussion of the potential solutions. Additionally, we suggest in this section how the selection procedures can be chosen to match the specific analytic approach used in methods overviews.

Delimiting a manageable set of publications

One aspect of methods overviews that can make identification and selection challenging is the fact that the universe of literature containing potentially relevant information regarding most methods-related topics is expansive and often unmanageably so. Reviewers are faced with two large categories of literature: the methods literature, where the possible publication types include journal articles, books, and book chapters; and the methods-relevant sections of empirical study reports, where the possible publication types include journal articles, monographs, books, theses, and conference proceedings. In our systematic overview of sampling in qualitative research, exhaustively searching (including retrieval and first-pass screening) all publication types across both categories of literature for information on a single methods-related topic was too burdensome to be feasible. The following proposed principle follows from the need to delimit a manageable set of literature for the review.

Principle #1:

Considering the broad universe of potentially relevant literature, we propose that an important objective early in the identification and selection stage is to delimit a manageable set of methods-relevant publications in accordance with the objectives of the methods overview.

Strategy #1:

To limit the set of methods-relevant publications that must be managed in the selection process, reviewers have the option to initially review only the methods literature, and exclude the methods-relevant sections of empirical study reports, provided this aligns with the review’s particular objectives.

We propose that reviewers are justified in choosing to select only the methods literature when the objective is to map out the range of recognized concepts relevant to a methods topic, to summarize the most authoritative or influential definitions or meanings for methods-related concepts, or to demonstrate a problematic lack of clarity regarding a widely established methods-related concept and potentially make recommendations for a preferred approach to the methods topic in question. For example, in the case of the methods overview on sampling [ 18 ], the primary aim was to define areas lacking in clarity for multiple widely established sampling-related topics. In the review on intention-to-treat in the context of missing outcome data [ 17 ], the authors identified a lack of clarity based on multiple inconsistent definitions in the literature and went on to recommend separating the issue of how to handle missing outcome data from the issue of whether an intention-to-treat analysis can be claimed.

In contrast to strategy #1, it may be appropriate to select the methods-relevant sections of empirical study reports when the objective is to illustrate how a methods concept is operationalized in research practice or reported by authors. For example, one could review all the publications in 2 years’ worth of issues of five high-impact field-related journals to answer questions about how researchers describe implementing a particular method or approach, or to quantify how consistently they define or report using it. Such reviews are often used to highlight gaps in the reporting practices regarding specific methods, which may be used to justify items to address in reporting guidelines (for example, [ 14 – 16 ]).

It is worth recognizing that other authors have advocated broader positions regarding the scope of literature to be considered in a review, expanding on our perspective. Suri [ 10 ] (who, like us, emphasizes how different sampling strategies are suitable for different literature synthesis objectives) has, for example, described a two-stage literature sampling procedure (pp. 96–97). First, reviewers use an initial approach to conduct a broad overview of the field—for reviews of methods topics, this would entail an initial review of the research methods literature. This is followed by a second, more focused stage in which practical examples are purposefully selected—for methods reviews, this would involve sampling the empirical literature to illustrate key themes and variations. While this approach is seductive in its capacity to generate more in-depth and interpretive analytic findings, some reviewers may consider it too resource-intensive to include the second step no matter how selective the purposeful sampling. In the overview on sampling where we stopped after the first stage [ 18 ], we discussed our selective focus on the methods literature as a limitation that left opportunities for further analysis of the literature. We explicitly recommended, for example, that theoretical sampling was a topic for which a future review of the methods sections of empirical reports was justified to answer specific questions identified in the primary review.

Ultimately, reviewers must make pragmatic decisions that balance resource considerations and informed predictions about the depth and complexity of the literature available on their topic against the stated objectives of their review. The remaining principles and strategies apply primarily to overviews that include the methods literature, although some aspects may be relevant to reviews that include empirical study reports.

Searching beyond standard bibliographic databases

An important reality affecting identification and selection in overviews of the methods literature is the increased likelihood that relevant publications will be located in sources other than journal articles (which is usually not the case for overviews of empirical research, where journal articles generally represent the primary publication type). In the overview on sampling [ 18 ], out of 41 full-text publications retrieved and reviewed, only 4 were journal articles, while 37 were books or book chapters. Since many books and book chapters were not available electronically, their full text had to be physically retrieved in hardcopy, and 11 publications were retrievable only through interlibrary loan or purchase request. The tasks associated with such retrieval are substantially more time-consuming than electronic retrieval. Since a substantial proportion of methods-related guidance may be located in publication types that are less comprehensively indexed in standard bibliographic databases, identification and retrieval become complicated processes.

Principle #2:

Considering that important sources of methods guidance can be located in non-journal publication types (e.g., books, book chapters) that tend to be poorly indexed in standard bibliographic databases, it is important to consider alternative search methods for identifying relevant publications to be further screened for inclusion.

Strategy #2:

To identify books, book chapters, and other non-journal publication types not thoroughly indexed in standard bibliographic databases, reviewers may choose to consult one or more of the following less standard sources: Google Scholar, publisher web sites, or expert opinion.

In the case of the overview on sampling in qualitative research [ 18 ], Google Scholar had two advantages over other standard bibliographic databases: it indexes and returns records of books and book chapters likely to contain guidance on qualitative research methods topics; and it has been validated as providing higher citation counts than ISI Web of Science (a producer of numerous bibliographic databases accessible through institutional subscription) for several non-biomedical disciplines including the social sciences where qualitative research methods are prominently used [ 19 – 21 ]. While we identified numerous useful publications by consulting experts, the author publication lists generated through Google Scholar searches were uniquely useful to identify more recent editions of methods books identified by experts.

Searching without relevant metadata

Determining what publications to select for inclusion in the overview on sampling [ 18 ] could only rarely be accomplished by reviewing the publication’s metadata. This was because for the many books and other non-journal type publications we identified as possibly relevant, the potential content of interest would be located in only a subsection of the publication. In this common scenario for reviews of the methods literature (as opposed to methods overviews that include empirical study reports), reviewers will often be unable to employ standard title, abstract, and keyword database searching or screening as a means for selecting publications.

Principle #3:

Considering that the presence of information about the topic of interest may not be indicated in the metadata for books and similar publication types, it is important to consider other means of identifying potentially useful publications for further screening.

Strategy #3:

One approach to identifying potentially useful books and similar publication types is to consider what classes of such publications (e.g., all methods manuals for a certain research approach) are likely to contain relevant content, then identify, retrieve, and review the full text of corresponding publications to determine whether they contain information on the topic of interest.

In the example of the overview on sampling in qualitative research [ 18 ], the topic of interest (sampling) was one of numerous topics covered in the general qualitative research methods manuals. Consequently, examples from this class of publications first had to be identified for retrieval according to non-keyword-dependent criteria. Thus, all methods manuals within the three research traditions reviewed (grounded theory, phenomenology, and case study) that might contain discussion of sampling were sought through Google Scholar and expert opinion, their full text obtained, and hand-searched for relevant content to determine eligibility. We used tables of contents and index sections of books to aid this hand searching.

Purposefully selecting literature on conceptual grounds

A final consideration in methods overviews relates to the type of analysis used to generate the review findings. Unlike quantitative systematic reviews where reviewers aim for accurate or unbiased quantitative estimates—something that requires identifying and selecting the literature exhaustively to obtain all relevant data available (i.e., a complete sample)—in methods overviews, reviewers must describe and interpret the relevant literature in qualitative terms to achieve review objectives. In other words, the aim in methods overviews is to seek coverage of the qualitative concepts relevant to the methods topic at hand. For example, in the overview of sampling in qualitative research [ 18 ], achieving review objectives entailed providing conceptual coverage of eight sampling-related topics that emerged as key domains. The following principle recognizes that literature sampling should therefore support generating qualitative conceptual data as the input to analysis.

Principle #4:

Since the analytic findings of a systematic methods overview are generated through qualitative description and interpretation of the literature on a specified topic, selection of the literature should be guided by a purposeful strategy designed to achieve adequate conceptual coverage (i.e., representing an appropriate degree of variation in relevant ideas) of the topic according to objectives of the review.

Strategy #4:

One strategy for choosing a purposeful approach that matches the review objectives is to consider whether those objectives imply exploring concepts either at a broad overview level, in which case combining maximum variation selection with a strategy that limits yield (e.g., critical case, politically important, or sampling for influence—described below) may be appropriate; or in depth, in which case purposeful approaches aimed at revealing innovative cases will likely be necessary.

In the methods overview on sampling, the implied scope was broad since we set out to review publications on sampling across three divergent qualitative research traditions—grounded theory, phenomenology, and case study—to facilitate making informative conceptual comparisons. Such an approach would be analogous to maximum variation sampling.

At the same time, the purpose of that review was to critically interrogate the clarity, consistency, and comprehensiveness of literature from these traditions that was “most likely to have widely influenced students’ and researchers’ ideas about sampling” (p. 1774) [ 18 ]. In other words, we explicitly set out to review and critique the most established and influential (and therefore dominant) literature, since this represents a common basis of knowledge among students and researchers seeking understanding or practical guidance on sampling in qualitative research. To achieve this objective, we purposefully sampled publications according to the criterion of influence, which we operationalized as how often an author or publication has been referenced in print or informal discourse. This second sampling approach also limited the literature we needed to consider within our broad scope review to a manageable amount.

To operationalize this strategy of sampling for influence, we sought to identify both the most influential authors within a qualitative research tradition (all of whose citations were subsequently screened) and the most influential publications on the topic of interest by non-influential authors. This involved a flexible approach that combined multiple indicators of influence to avoid the dilemma that any single indicator might provide inadequate coverage. These indicators included bibliometric data (h-index for author influence [ 22 ]; citation counts for publication influence), expert opinion, and cross-references in the literature (i.e., snowball sampling). As a final selection criterion, a publication was included only if it made an original contribution in terms of novel guidance regarding sampling or a related concept; thus, purely secondary sources were excluded. Publish or Perish software (Anne-Wil Harzing; available at http://www.harzing.com/resources/publish-or-perish ) was used to generate bibliometric data via the Google Scholar database. Figure 1 illustrates how identification and selection in the methods overview on sampling was a multi-faceted and iterative process. The authors selected as influential and the publications selected for inclusion or exclusion are listed in Additional file 1 (Matrices 1, 2a, 2b).

Literature identification and selection process used in the methods overview on sampling [ 18 ]
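For readers wishing to see the bibliometric component of sampling for influence in concrete terms, the h-index logic can be sketched in a few lines. This is a minimal illustration with invented author names and citation counts; as described above, the review combined such indicators with expert opinion and snowball sampling rather than relying on any single metric or fixed cutoff.

```python
# Sketch of the bibliometric component of "sampling for influence".
# Author names and citation counts are invented illustrations; the
# review combined bibliometric data with expert opinion and snowball
# sampling rather than applying any single metric or fixed cutoff.

def h_index(citations):
    """Largest h such that at least h publications have >= h citations."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical per-publication citation counts for candidate authors.
authors = {
    "Author A": [120, 85, 40, 12, 3],
    "Author B": [900, 15, 9, 2],          # one blockbuster, little else
    "Author C": [60, 55, 50, 48, 30, 25],  # consistently cited body of work
}

# Rank candidates by h-index as one indicator of author influence.
ranking = sorted(authors, key=lambda a: h_index(authors[a]), reverse=True)
print(ranking)  # ['Author C', 'Author A', 'Author B']
```

Note how the h-index favors a sustained body of cited work over a single highly cited publication, which is why publication-level influence was gauged separately (by citation counts) in the review.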

In summary, the strategies of seeking maximum variation and sampling for influence were employed in the sampling overview to meet the specific review objectives described. Reviewers will need to consider the full range of purposeful literature sampling approaches at their disposal in deciding what best matches the specific aims of their own reviews. Suri [ 10 ] has recently retooled Patton’s well-known typology of purposeful sampling strategies (originally intended for primary research) for application to literature synthesis, providing a useful resource in this respect.

Data abstraction

The purpose of data abstraction in rigorous literature reviews is to locate and record all data relevant to the topic of interest from the full text of included publications, making them available for subsequent analysis. Conventionally, a data abstraction form—consisting of numerous distinct conceptually defined fields to which corresponding information from the source publication is recorded—is developed and employed. There are several challenges, however, to the processes of developing the abstraction form and abstracting the data itself when conducting methods overviews, which we address here. Some of these problems and their solutions may be familiar to those who have conducted qualitative literature syntheses, which are similarly conceptual.

Iteratively defining conceptual information to abstract

In the overview on sampling [ 18 ], while we surveyed multiple sources beforehand to develop a list of concepts relevant for abstraction (e.g., purposeful sampling strategies, saturation, sample size), there was no way for us to anticipate some concepts prior to encountering them in the review process. Indeed, in many cases, reviewers are unable to determine the complete set of methods-related concepts that will be the focus of the final review a priori without having systematically reviewed the publications to be included. Thus, defining what information to abstract beforehand may not be feasible.

Principle #5:

Considering the potential impracticality of defining a complete set of relevant methods-related concepts from a body of literature one has not yet systematically read, selecting and defining fields for data abstraction must often be undertaken iteratively. Thus, concepts to be abstracted can be expected to grow and change as data abstraction proceeds.

Strategy #5:

Reviewers can develop an initial form or set of concepts for abstraction purposes according to standard methods (e.g., incorporating expert feedback, pilot testing) and remain attentive to the need to iteratively revise it as concepts are added or modified during the review. Reviewers should document revisions and return to re-abstract data from previously abstracted publications as the new data requirements are determined.

In the sampling overview [ 18 ], we developed and maintained the abstraction form in Microsoft Word. We derived the initial set of abstraction fields from our own knowledge of relevant sampling-related concepts, consultation with local experts, and review of a pilot sample of publications. Since the publications in this review included a large proportion of books, the abstraction process often began by flagging the broad sections within a publication containing topic-relevant information for detailed review to identify text to abstract. When reviewing flagged text, the reviewer occasionally encountered an unanticipated concept significant enough to warrant being added as a new field to the abstraction form. For example, a field was added to capture how authors described the timing of sampling decisions, that is, whether decisions were made before starting data collection (a priori) or during it (ongoing), or whether this was unclear. In these cases, we systematically documented the modification to the form and returned to previously abstracted publications to abstract any information that might be relevant to the new field.
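The bookkeeping implied by strategy #5 (documenting form revisions and returning to re-abstract) can be sketched as a simple data structure. This is a hypothetical illustration, since the actual review maintained its abstraction form in Microsoft Word; the field and publication names below are invented placeholders.

```python
# Hypothetical sketch of the abstraction-form bookkeeping implied by
# strategy #5. The actual review maintained its form in Microsoft Word;
# the fields and publication names below are invented placeholders.

class AbstractionForm:
    def __init__(self, fields):
        self.fields = list(fields)   # current abstraction fields
        self.revisions = []          # documented modifications to the form
        self.records = {}            # publication -> {field: abstracted text}

    def abstract(self, publication, data):
        """Record abstracted text for every currently defined field."""
        self.records[publication] = {f: data.get(f, "") for f in self.fields}

    def add_field(self, field, reason):
        """Add a field mid-review, document the revision, and return the
        previously abstracted publications that must be revisited."""
        self.fields.append(field)
        self.revisions.append({"field": field, "reason": reason})
        return [pub for pub, rec in self.records.items() if field not in rec]

form = AbstractionForm(["purposeful sampling strategies", "saturation"])
form.abstract("Publication A", {"saturation": "text on saturation"})
form.abstract("Publication B", {"saturation": "another passage"})

# An unanticipated concept emerges during abstraction, so a field is
# added and earlier publications are flagged for re-abstraction.
todo = form.add_field("timing of sampling decisions",
                      "a priori vs. ongoing timing emerged during review")
print(todo)  # ['Publication A', 'Publication B']
```

The returned list makes the re-abstraction obligation explicit, mirroring the documented, transparent revision process the strategy calls for.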

The logic of this strategy is analogous to the logic used in a form of research synthesis called best fit framework synthesis (BFFS) [ 23 – 25 ]. In that method, reviewers initially code evidence using an a priori framework they have selected. When evidence cannot be accommodated by the selected framework, reviewers then develop new themes or concepts from which they construct a new expanded framework. Both the strategy proposed and the BFFS approach to research synthesis are notable for their rigorous and transparent means to adapt a final set of concepts to the content under review.

Accounting for inconsistent terminology

An important complication affecting the abstraction process in methods overviews is that the language used by authors to describe methods-related concepts can easily vary across publications. For example, authors from different qualitative research traditions often use different terms for similar methods-related concepts. Furthermore, as we found in the sampling overview [ 18 ], there may be cases where no identifiable term, phrase, or label for a methods-related concept is used at all, and a description of it is given instead. This can make searching the text for relevant concepts based on keywords unreliable.

Principle #6:

Since accepted terms may not be used consistently to refer to methods concepts, it is necessary to rely on the definitions for concepts, rather than keywords, to identify relevant information in the publication to abstract.

Strategy #6:

An effective means to systematically identify relevant information is to develop and iteratively adjust written definitions for key concepts (corresponding to abstraction fields) that are consistent with, and as inclusive as possible of, the literature reviewed. Reviewers then seek information that matches these definitions (rather than keywords) when scanning a publication for relevant data to abstract.

In the abstraction process for the sampling overview [ 18 ], we noted several concepts of interest to the review for which abstraction by keyword was particularly problematic due to inconsistent terminology across publications: sampling, purposeful sampling, sampling strategy, and saturation (for examples, see Additional file 1, Matrices 3a, 3b, 4). We iteratively developed definitions for these concepts by abstracting text from publications that either provided an explicit definition or from which an implicit definition could be derived, recording this text in fields dedicated to the concept’s definition. Using a method of constant comparison, we used text from the definition fields to inform and modify a centrally maintained definition of the corresponding concept, optimizing its fit and inclusiveness with the literature reviewed. Table 1 shows, as an example, the final definition constructed in this way for one of the central concepts of the review, qualitative sampling.

We applied iteratively developed definitions when making decisions about what specific text to abstract for an existing field, which allowed us to abstract concept-relevant data even if no recognized keyword was used. For example, this was the case for the sampling-related concept saturation, where the relevant text available for abstraction in one publication [ 26 ]—“to continue to collect data until nothing new was being observed or recorded, no matter how long that takes”—was not accompanied by any term or label whatsoever.
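To make the contrast between keyword-based and definition-based identification concrete, the following sketch shows how even a crude word-overlap proxy for a written definition can flag the unlabeled saturation passage quoted above, which keyword matching misses. The scoring scheme and the definition wording are invented simplifications; the review relied on reviewer judgment against written definitions, not automated matching.

```python
import re

# Invented simplification contrasting keyword matching with
# definition-based identification (strategy #6). The review relied on
# reviewer judgment against written definitions, not automated scoring.

# Passage from [26], which uses no term such as "saturation" at all:
passage = ("to continue to collect data until nothing new was being "
           "observed or recorded, no matter how long that takes")

# An illustrative working definition of saturation (our wording):
definition = ("the point at which continuing data collection yields "
              "nothing new, no additional information or observations")

STOPWORDS = {"the", "a", "to", "at", "or", "no", "was", "is", "that", "which"}

def content_words(text):
    """Lowercased alphabetic tokens, minus trivial stopwords."""
    return set(re.findall(r"[a-z]+", text.lower())) - STOPWORDS

def overlap(text, definition):
    """Crude proxy for a definition match: count of shared content words."""
    return len(content_words(text) & content_words(definition))

keyword_hit = "saturation" in passage.lower()       # False: no label present
definition_hit = overlap(passage, definition) >= 3  # True: matches definition
print(keyword_hit, definition_hit)
```

Here the passage shares the content words "data", "nothing", and "new" with the definition, so it is flagged for human review even though the keyword search fails.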

This comparative analytic strategy (and our approach to analysis more broadly as described in strategy #7, below) is analogous to the process of reciprocal translation—a technique first introduced for meta-ethnography by Noblit and Hare [ 27 ] that has since been recognized as a common element in a variety of qualitative metasynthesis approaches [ 28 ]. Reciprocal translation, taken broadly, involves making sense of a study’s findings in terms of the findings of the other studies included in the review. In practice, it has been operationalized in different ways. Melendez-Torres and colleagues developed a typology from their review of the metasynthesis literature, describing four overlapping categories of specific operations undertaken in reciprocal translation: visual representation, key paper integration, data reduction and thematic extraction, and line-by-line coding [ 28 ]. The approaches suggested in both strategies #6 and #7, with their emphasis on constant comparison, appear to fall within the line-by-line coding category.

Generating credible and verifiable analytic interpretations

The analysis in a systematic methods overview must support its more general objective, which we suggested above is often to offer clarity and enhance collective understanding regarding a chosen methods topic. In our experience, this involves describing and interpreting the relevant literature in qualitative terms. Furthermore, any interpretative analysis required may entail reaching different levels of abstraction, depending on the more specific objectives of the review. For example, in the overview on sampling [ 18 ], we aimed to produce a comparative analysis of how multiple sampling-related topics were treated differently within and among different qualitative research traditions. To promote credibility of the review, however, one should not only seek a qualitative analytic approach that facilitates reaching varying levels of abstraction; that approach must also ensure that abstract interpretations are supported and justified by the source data rather than being solely the product of the analyst’s speculative thinking.

Principle #7:

Considering the qualitative nature of the analysis required in systematic methods overviews, it is important to select an analytic method whose interpretations can be verified as being consistent with the literature selected, regardless of the level of abstraction reached.

Strategy #7:

We suggest employing the constant comparative method of analysis [ 29 ] because it supports developing and verifying analytic links to the source data throughout progressively interpretive or abstract levels. In applying this approach, we advise rigorously documenting how supportive quotes or references to the original texts are carried forward in the successive steps of analysis to allow for easy verification.

The analytic approach used in the methods overview on sampling [ 18 ] comprised four explicit steps, progressing in level of abstraction—data abstraction, matrices, narrative summaries, and final analytic conclusions (Fig. 2). While we have positioned data abstraction as the second stage of the generic review process (prior to Analysis), above, we also considered it as an initial step of analysis in the sampling overview for several reasons. First, it involved a process of constant comparisons and iterative decision-making about the fields to add or define during development and modification of the abstraction form, through which we established the range of concepts to be addressed in the review. At the same time, abstraction involved continuous analytic decisions about what textual quotes (ranging in size from short phrases to numerous paragraphs) to record in the fields thus created. This constant comparative process was analogous to open coding in which textual data from publications was compared to conceptual fields (equivalent to codes) or to other instances of data previously abstracted when constructing definitions to optimize their fit with the overall literature as described in strategy #6. Finally, in the data abstraction step, we also recorded our first interpretive thoughts in dedicated fields, providing initial material for the more abstract analytic steps.

Fig. 2 Summary of progressive steps of analysis used in the methods overview on sampling [ 18 ]

In the second step of the analysis, we constructed topic-specific matrices, or tables, by copying relevant quotes from abstraction forms into the appropriate cells of matrices (for the complete set of analytic matrices developed in the sampling review, see Additional file 1 (matrices 3 to 10)). Each matrix ranged from one to five pages; row headings, nested three-deep, identified the methodological tradition, author, and publication, respectively; and column headings identified the concepts, which corresponded to abstraction fields. Matrices thus allowed us to make further comparisons across methodological traditions, and between authors within a tradition. In the third step of analysis, we recorded our comparative observations as narrative summaries, in which we used illustrative quotes more sparingly. In the final step, we developed analytic conclusions based on the narrative summaries about the sampling-related concepts within each methodological tradition for which clarity, consistency, or comprehensiveness of the available guidance appeared to be lacking. Higher levels of analysis thus built logically from the lower levels, enabling us to easily verify analytic conclusions by tracing the support for claims back to the original text of the publications reviewed.
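The matrix structure described above — rows keyed three-deep by tradition, author, and publication, with columns for each concept and cells holding one or more quotes — can be sketched as a simple pivot operation. This is a hypothetical illustration only: the record fields, traditions, and concept names below are invented, and the actual review was conducted with abstraction forms and word-processor tables, not code.

```python
from collections import defaultdict

# Each abstracted quote carries the three nested row headings
# (methodological tradition, author, publication) plus the concept
# (column heading) under which it was filed. All values are invented.
records = [
    {"tradition": "grounded theory", "author": "Glaser", "publication": "1978",
     "concept": "theoretical sampling", "quote": "…"},
    {"tradition": "grounded theory", "author": "Charmaz", "publication": "2006",
     "concept": "saturation", "quote": "…"},
    {"tradition": "phenomenology", "author": "van Manen", "publication": "1990",
     "concept": "sample size", "quote": "…"},
]

def build_matrix(records):
    """Pivot abstracted quotes into a topic-specific matrix:
    rows keyed (tradition, author, publication), columns keyed by concept.
    Several quotes may land in one cell, so each cell holds a list."""
    matrix = defaultdict(dict)
    for r in records:
        row = (r["tradition"], r["author"], r["publication"])
        matrix[row].setdefault(r["concept"], []).append(r["quote"])
    return matrix

matrix = build_matrix(records)
```

Because each cell retains the verbatim quote rather than a paraphrase, comparisons across traditions (across row groups) and between authors within a tradition (within a row group) remain traceable to the source publications, which is the verification property the constant comparative approach relies on.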

Integrative versus interpretive methods overviews

The analytic product of systematic methods overviews is comparable to qualitative evidence syntheses, since both involve describing and interpreting the relevant literature in qualitative terms. Most qualitative synthesis approaches strive to produce new conceptual understandings that vary in level of interpretation. Dixon-Woods and colleagues [ 30 ] elaborate on a useful distinction, originating from Noblit and Hare [ 27 ], between integrative and interpretive reviews. Integrative reviews focus on summarizing available primary data and involve using largely secure and well defined concepts to do so; definitions are used from an early stage to specify categories for abstraction (or coding) of data, which in turn supports their aggregation; they do not seek as their primary focus to develop or specify new concepts, although they may achieve some theoretical or interpretive functions. For interpretive reviews, meanwhile, the main focus is to develop new concepts and theories that integrate them, with the implication that the concepts developed become fully defined towards the end of the analysis. These two forms are not completely distinct, and “every integrative synthesis will include elements of interpretation, and every interpretive synthesis will include elements of aggregation of data” [ 30 ].

The example methods overview on sampling [ 18 ] could be classified as predominantly integrative because its primary goal was to aggregate influential authors’ ideas on sampling-related concepts; there were also, however, elements of interpretive synthesis since it aimed to develop new ideas about where clarity in guidance on certain sampling-related topics is lacking, and definitions for some concepts were flexible and not fixed until late in the review. We suggest that most systematic methods overviews will be classifiable as predominantly integrative (aggregative). Nevertheless, more highly interpretive methods overviews are also quite possible—for example, when the review objective is to provide a highly critical analysis for the purpose of generating new methodological guidance. In such cases, reviewers may need to sample more deeply (see strategy #4), specifically by selecting empirical research reports (i.e., to go beyond dominant or influential ideas in the methods literature) that are likely to feature innovations or instructive lessons in employing a given method.

In this paper, we have outlined tentative guidance in the form of seven principles and strategies on how to conduct systematic methods overviews, a review type in which methods-relevant literature is systematically analyzed with the aim of offering clarity and enhancing collective understanding regarding a specific methods topic. Our proposals include strategies for delimiting the set of publications to consider, searching beyond standard bibliographic databases, searching without the availability of relevant metadata, selecting publications on purposeful conceptual grounds, defining concepts and other information to abstract iteratively, accounting for inconsistent terminology, and generating credible and verifiable analytic interpretations. We hope the suggestions proposed will be useful to others undertaking reviews on methods topics in future.

As far as we are aware, this is the first published source of concrete guidance for conducting this type of review. It is important to note that our primary objective was to initiate methodological discussion by stimulating reflection on what rigorous methods for this type of review should look like, leaving the development of more complete guidance to future work. While derived from the experience of reviewing a single qualitative methods topic, we believe the principles and strategies provided are generalizable to overviews of both qualitative and quantitative methods topics alike. However, it is expected that additional challenges and insights for conducting such reviews have yet to be defined. Thus, we propose that next steps for developing more definitive guidance should involve an attempt to collect and integrate other reviewers’ perspectives and experiences in conducting systematic methods overviews on a broad range of qualitative and quantitative methods topics. Formalized guidance and standards would improve the quality of future methods overviews, something we believe has important implications for advancing qualitative and quantitative methodology. When undertaken to a high standard, rigorous critical evaluations of the available methods guidance have significant potential to make implicit controversies explicit, and improve the clarity and precision of our understandings of problematic qualitative or quantitative methods issues.

A review process central to most types of rigorous reviews of empirical studies, which we did not explicitly address in a separate review step above, is quality appraisal . The reason we have not treated this as a separate step stems from the different objectives of the primary publications included in overviews of the methods literature (i.e., providing methodological guidance) compared to the primary publications included in the other established review types (i.e., reporting findings from single empirical studies). This is not to say that appraising quality of the methods literature is not an important concern for systematic methods overviews. Rather, appraisal is much more integral to (and difficult to separate from) the analysis step, in which we advocate appraising clarity, consistency, and comprehensiveness—the quality appraisal criteria that we suggest are appropriate for the methods literature. As a second important difference regarding appraisal, we currently advocate appraising the aforementioned aspects at the level of the literature in aggregate rather than at the level of individual publications. One reason for this is that methods guidance from individual publications generally builds on previous literature, and thus we feel that ahistorical judgments about comprehensiveness of single publications lack relevance and utility. Additionally, while different methods authors may express themselves less clearly than others, their guidance can nonetheless be highly influential and useful, and should therefore not be downgraded or ignored based on considerations of clarity—which raises questions about the alternative uses that quality appraisals of individual publications might have. 
Finally, legitimate variability in the perspectives that methods authors wish to emphasize, and the levels of generality at which they write about methods, makes critiquing individual publications based on the criterion of clarity a complex and potentially problematic endeavor that is beyond the scope of this paper to address. By appraising the current state of the literature at a holistic level, reviewers stand to identify important gaps in understanding that represent valuable opportunities for further methodological development.

To summarize, the principles and strategies provided here may be useful to those seeking to undertake their own systematic methods overview. Additional work is needed, however, to establish guidance that is comprehensive by comparing the experiences from conducting a variety of methods overviews on a range of methods topics. Efforts that further advance standards for systematic methods overviews have the potential to promote high-quality critical evaluations that produce conceptually clear and unified understandings of problematic methods topics, thereby accelerating the advance of research methodology.

Hutton JL, Ashcroft R. What does “systematic” mean for reviews of methods? In: Black N, Brazier J, Fitzpatrick R, Reeves B, editors. Health services research methods: a guide to best practice. London: BMJ Publishing Group; 1998. p. 249–54.

Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0. The Cochrane Collaboration; 2011.

Centre for Reviews and Dissemination. Systematic reviews: CRD’s guidance for undertaking reviews in health care. York: Centre for Reviews and Dissemination; 2009.

Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JPA, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700.

Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol. 2009;9(1):59.

Kastner M, Tricco AC, Soobiah C, Lillie E, Perrier L, Horsley T, Welch V, Cogo E, Antony J, Straus SE. What is the most appropriate knowledge synthesis method to conduct a review? Protocol for a scoping review. BMC Med Res Methodol. 2012;12(1):114.

Booth A, Noyes J, Flemming K, Gerhardus A. Guidance on choosing qualitative evidence synthesis methods for use in health technology assessments of complex interventions. In: Integrate-HTA. 2016.

Booth A, Sutton A, Papaioannou D. Systematic approaches to successful literature review. 2nd ed. London: Sage; 2016.

Hannes K, Lockwood C. Synthesizing qualitative research: choosing the right approach. Chichester: Wiley-Blackwell; 2012.

Suri H. Towards methodologically inclusive research syntheses: expanding possibilities. New York: Routledge; 2014.

Campbell M, Egan M, Lorenc T, Bond L, Popham F, Fenton C, Benzeval M. Considering methodological options for reviews of theory: illustrated by a review of theories linking income and health. Syst Rev. 2014;3(1):1–11.

Cohen DJ, Crabtree BF. Evaluative criteria for qualitative research in health care: controversies and recommendations. Ann Fam Med. 2008;6(4):331–9.

Tong A, Sainsbury P, Craig J. Consolidated criteria for reporting qualitative research (COREQ): a 32-item checklist for interviews and focus groups. Int J Qual Health Care. 2007;19(6):349–57.

Moher D, Schulz KF, Simera I, Altman DG. Guidance for developers of health research reporting guidelines. PLoS Med. 2010;7(2):e1000217.

Moher D, Tetzlaff J, Tricco AC, Sampson M, Altman DG. Epidemiology and reporting characteristics of systematic reviews. PLoS Med. 2007;4(3):e78.

Chan AW, Altman DG. Epidemiology and reporting of randomised trials published in PubMed journals. Lancet. 2005;365(9465):1159–62.

Alshurafa M, Briel M, Akl EA, Haines T, Moayyedi P, Gentles SJ, Rios L, Tran C, Bhatnagar N, Lamontagne F, et al. Inconsistent definitions for intention-to-treat in relation to missing outcome data: systematic review of the methods literature. PLoS One. 2012;7(11):e49163.

Gentles SJ, Charles C, Ploeg J, McKibbon KA. Sampling in qualitative research: insights from an overview of the methods literature. Qual Rep. 2015;20(11):1772–89.

Harzing A-W, Alakangas S. Google Scholar, Scopus and the Web of Science: a longitudinal and cross-disciplinary comparison. Scientometrics. 2016;106(2):787–804.

Harzing A-WK, van der Wal R. Google Scholar as a new source for citation analysis. Ethics Sci Environ Polit. 2008;8(1):61–73.

Kousha K, Thelwall M. Google Scholar citations and Google Web/URL citations: a multi‐discipline exploratory analysis. J Assoc Inf Sci Technol. 2007;58(7):1055–65.

Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A. 2005;102(46):16569–72.

Booth A, Carroll C. How to build up the actionable knowledge base: the role of ‘best fit’ framework synthesis for studies of improvement in healthcare. BMJ Quality Safety. 2015;24(11):700–8.

Carroll C, Booth A, Leaviss J, Rick J. “Best fit” framework synthesis: refining the method. BMC Med Res Methodol. 2013;13(1):37.

Carroll C, Booth A, Cooper K. A worked example of “best fit” framework synthesis: a systematic review of views concerning the taking of some potential chemopreventive agents. BMC Med Res Methodol. 2011;11(1):29.

Cohen MZ, Kahn DL, Steeves DL. Hermeneutic phenomenological research: a practical guide for nurse researchers. Thousand Oaks: Sage; 2000.

Noblit GW, Hare RD. Meta-ethnography: synthesizing qualitative studies. Newbury Park: Sage; 1988.

Melendez-Torres GJ, Grant S, Bonell C. A systematic review and critical appraisal of qualitative metasynthetic practice in public health to develop a taxonomy of operations of reciprocal translation. Res Synthesis Methods. 2015;6(4):357–71.

Glaser BG, Strauss A. The discovery of grounded theory. Chicago: Aldine; 1967.

Dixon-Woods M, Agarwal S, Young B, Jones D, Sutton A. Integrative approaches to qualitative and quantitative evidence. London: Health Development Agency, UK National Health Service; 2004. p. 1–44.

Acknowledgements

Not applicable.

Funding

There was no funding for this work.

Availability of data and materials

The systematic methods overview used as a worked example in this article (Gentles SJ, Charles C, Ploeg J, McKibbon KA: Sampling in qualitative research: insights from an overview of the methods literature. The Qual Rep 2015, 20(11):1772-1789) is available from http://nsuworks.nova.edu/tqr/vol20/iss11/5 .

Authors’ contributions

SJG wrote the first draft of this article, with CC contributing to drafting. All authors contributed to revising the manuscript. All authors except CC (deceased) approved the final draft. SJG, CC, KAB, and JP were involved in developing methods for the systematic methods overview on sampling.

Competing interests

The authors declare that they have no competing interests.

Authors and affiliations

Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada

Stephen J. Gentles, Cathy Charles & K. Ann McKibbon

Faculty of Social Work, University of Calgary, Alberta, Canada

David B. Nicholas

School of Nursing, McMaster University, Hamilton, Ontario, Canada

Jenny Ploeg

CanChild Centre for Childhood Disability Research, McMaster University, 1400 Main Street West, IAHS 408, Hamilton, ON, L8S 1C7, Canada

Stephen J. Gentles

Corresponding author

Correspondence to Stephen J. Gentles.

Additional information

Cathy Charles is deceased

Additional file

Additional file 1: Analysis matrices. (DOC 330 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article.

Gentles, S.J., Charles, C., Nicholas, D.B. et al. Reviewing the research methods literature: principles and strategies illustrated by a systematic overview of sampling in qualitative research. Syst Rev 5, 172 (2016). https://doi.org/10.1186/s13643-016-0343-0

Received : 06 June 2016

Accepted : 14 September 2016

Published : 11 October 2016

DOI : https://doi.org/10.1186/s13643-016-0343-0


Keywords

  • Systematic review
  • Literature selection
  • Research methods
  • Research methodology
  • Overview of methods
  • Systematic methods overview
  • Review methods

Systematic Reviews

ISSN: 2046-4053


REVIEW article

The use of research methods in psychological research: a systematised review.

Salomé Elizabeth Scholtz

  • 1 Community Psychosocial Research (COMPRES), School of Psychosocial Health, North-West University, Potchefstroom, South Africa
  • 2 WorkWell Research Institute, North-West University, Potchefstroom, South Africa

Research methods play an imperative role in research quality as well as in educating young researchers; however, how they are applied is unclear, which can be detrimental to the field of psychology. This systematised review therefore aimed to determine which research methods are being used, how these methods are being used, and for what topics in the field. Our review of 999 articles from five journals over a period of 5 years indicated that psychology research is conducted on 10 topics, predominantly via quantitative research methods. Of these 10 topics, social psychology was the most popular. The remainder of the methodology used is described. It was also found that articles lacked rigour and transparency in the methodology used, which has implications for replicability. In conclusion, this article provides an overview of all reported methodologies used in a sample of psychology journals. It highlights the popularity and application of methods and designs throughout the article sample, as well as an unexpected lack of rigour with regard to most aspects of methodology. Possible sample bias should be considered when interpreting the results of this study. It is recommended that future research utilise the results of this study to determine the possible impact on the field of psychology as a science, and to investigate the use of research methods further. The results should prompt future research into the lack of rigour and its implications for replication, the use of certain methods above others, publication bias, and the choice of sampling method.

Introduction

Psychology is an ever-growing and popular field ( Gough and Lyons, 2016 ; Clay, 2017 ). Due to this growth, and the need for science-based research on which to base health decisions ( Perestelo-Pérez, 2013 ), the use of research methods in the broad field of psychology is an essential point of investigation ( Stangor, 2011 ; Aanstoos, 2014 ). Research methods are viewed as important tools used by researchers to collect data ( Nieuwenhuis, 2016 ) and include the following: quantitative, qualitative, mixed-method and multi-method approaches ( Maree, 2016 ). Additionally, researchers employ various types of literature reviews to address research questions ( Grant and Booth, 2009 ). According to the literature, which research method is used, and why, is a complex matter, as it depends on various factors that may include paradigm ( O'Neil and Koekemoer, 2016 ), research question ( Grix, 2002 ), or the skill and exposure of the researcher ( Nind et al., 2015 ). How these research methods are employed is also difficult to discern, as research methods are often depicted as having fixed boundaries that are continuously crossed in research ( Johnson et al., 2001 ; Sandelowski, 2011 ). Examples of this crossing include adding quantitative aspects to qualitative studies ( Sandelowski et al., 2009 ), or stating that a study used a mixed-method design without the study having any characteristics of this design ( Truscott et al., 2010 ).

The inappropriate use of research methods affects how students and researchers improve and utilise their research skills ( Scott Jones and Goldring, 2015 ), how theories are developed ( Ngulube, 2013 ), and the credibility of research results ( Levitt et al., 2017 ). This, in turn, can be detrimental to the field ( Nind et al., 2015 ), journal publication ( Ketchen et al., 2008 ; Ezeh et al., 2010 ), and attempts to address public social issues through psychological research ( Dweck, 2017 ). This is especially important given the now well-known replication crisis the field is facing ( Earp and Trafimow, 2015 ; Hengartner, 2018 ).

Due to this lack of clarity on method use, and the potential impact of inept use of research methods, the aim of this study was to explore the use of research methods in the field of psychology through a review of journal publications. Chaichanasakul et al. (2011) identify reviewing articles as an opportunity to examine the development, growth and progress of a research area and the overall quality of a journal. Studies such as Lee et al. (1999) and Bluhm et al. (2011), in their reviews of qualitative methods, have attempted to synthesise the use of research methods and indicated the growth of qualitative research in American and European journals. Research has also focused on the use of research methods in specific sub-disciplines of psychology; for example, in the field of Industrial and Organisational psychology, Coetzee and Van Zyl (2014) found that South African publications tend to consist of cross-sectional quantitative research methods, with longitudinal studies underrepresented. In a similar study, O'Neil and Koekemoer (2016) found that qualitative studies made up 21% of the articles published from 1995 to 2015. Mixed-methods research in health psychology has also reportedly been growing in popularity ( O'Cathain, 2009 ).

A broad overview of the use of research methods in the field of psychology as a whole is, however, not available in the literature. Our research therefore focused on answering which research methods are being used, how these methods are being used, and for what topics in practice (i.e., journal publications), in order to provide a general perspective of method use in psychology publications. We synthesised the collected data into the following format: research topic [areas of scientific discourse in a field or the current needs of a population ( Bittermann and Fischer, 2018 )], method [data-gathering tools ( Nieuwenhuis, 2016 )], sampling [elements chosen from a population to partake in research ( Ritchie et al., 2009 )], data collection [techniques and research strategy ( Maree, 2016 )], and data analysis [discovering information by examining bodies of data ( Ktepi, 2016 )]. A systematised review of recent articles (2013 to 2017) collected from five different journals in the field of psychological research was conducted.

Grant and Booth (2009) describe systematised reviews as the review of choice for post-graduate studies, which is employed using some elements of a systematic review and seldom more than one or two databases to catalogue studies after a comprehensive literature search. The aspects used in this systematised review that are similar to that of a systematic review were a full search within the chosen database and data produced in tabular form ( Grant and Booth, 2009 ).

Sample sizes and timelines vary in systematised reviews (see Lowe and Moore, 2014 ; Pericall and Taylor, 2014 ; Barr-Walker, 2017 ). With no clear parameters identified in the literature (see Grant and Booth, 2009 ), the sample size of this study was determined by the purpose of the sample ( Strydom, 2011 ), and by time and cost constraints ( Maree and Pietersen, 2016 ). Thus, a non-probability purposive sample ( Ritchie et al., 2009 ) of the top five psychology journals from 2013 to 2017 was included in this research study. According to Lee (2015), the American Psychological Association (APA) recommends using the most up-to-date sources for data collection, with consideration of the context of the research study. As this research study focused on the most recent trends in research methods used in the broad field of psychology, the identified time frame was deemed appropriate.

Psychology journals were only included if they formed part of the top five English journals in the miscellaneous psychology domain of the Scimago Journal and Country Rank ( Scimago Journal & Country Rank, 2017 ). The Scimago Journal and Country Rank provides a yearly updated list of publicly accessible journal and country-specific indicators derived from the Scopus ® database ( Scopus, 2017b ) by means of the Scimago Journal Rank (SJR) indicator, developed by Scimago from the Google PageRank™ algorithm ( Scimago Journal & Country Rank, 2017 ). Scopus is the largest global database of abstracts and citations from peer-reviewed journals ( Scopus, 2017a ). The Scimago Journal and Country Rank list was developed to allow researchers to assess scientific domains, compare country rankings, and compare and analyse journals ( Scimago Journal & Country Rank, 2017 ), which supported the aim of this research study. Additionally, the goals of the journals had to focus on topics in psychology in general, with no preference for specific research methods, and full-text access to articles had to be available.

The following top five journals in 2018 fell within the abovementioned inclusion criteria: (1) Australian Journal of Psychology, (2) British Journal of Psychology, (3) Europe's Journal of Psychology, (4) International Journal of Psychology, and (5) The Journal of Psychology: Interdisciplinary and Applied.

Journals were excluded from this systematised review if no full-text versions of their articles were available, if journals explicitly stated a publication preference for certain research methods, or if the journal only published articles in a specific discipline of psychological research (for example, industrial psychology, clinical psychology etc.).

The researchers followed a procedure (see Figure 1 ) adapted from that of Ferreira et al. (2016) for systematised reviews. Data collection and categorisation commenced on 4 December 2017 and continued until 30 June 2019. All the data was systematically collected and coded manually ( Grant and Booth, 2009 ) with an independent person acting as co-coder. Codes of interest included the research topic, method used, the design used, sampling method, and methodology (the method used for data collection and data analysis). These codes were derived from the wording in each article. Themes were created based on the derived codes and checked by the co-coder. Lastly, these themes were catalogued into a table as per the systematised review design.

Figure 1 . Systematised review procedure.

According to Johnston et al. (2019) , “literature screening, selection, and data extraction/analyses” (p. 7) are specifically tailored to the aim of a review. Therefore, the steps followed in a systematic review must be reported in a comprehensive and transparent manner. The chosen systematised design adhered to the rigour expected from systematic reviews with regard to a full search and data produced in tabular form ( Grant and Booth, 2009 ). The rigorous application of the systematic review is, therefore, discussed in relation to these two elements.

Firstly, to ensure a comprehensive search, this research study promoted review transparency by following a clear protocol, outlined according to each review stage before collecting data ( Johnston et al., 2019 ). This protocol was similar to that of Ferreira et al. (2016) and was approved by three research committees/stakeholders and the researchers ( Johnston et al., 2019 ). The eligibility criteria for article inclusion were based on the research question and clearly stated, and the process of inclusion was recorded on an electronic spreadsheet to create an evidence trail ( Bandara et al., 2015 ; Johnston et al., 2019 ). Microsoft Excel spreadsheets are a popular tool for review studies and can increase the rigour of the review process ( Bandara et al., 2015 ). Screening for appropriate articles for inclusion forms an integral part of a systematic review process ( Johnston et al., 2019 ). This step was applied to two aspects of this research study: the choice of eligible journals and the choice of articles to be included. Suitable journals were selected by the first author and reviewed by the second and third authors. Initially, all articles from the chosen journals were included. Then, by process of elimination, those irrelevant to the research aim (i.e., interview articles, discussions, etc.) were excluded.

To ensure rigorous data extraction, data was first extracted by one reviewer, and an independent person verified the results for completeness and accuracy ( Johnston et al., 2019 ). The research question served as a guide for efficient, organised data extraction ( Johnston et al., 2019 ). Data was categorised according to the codes of interest, along with article identifiers for audit trails, such as the authors, title and aims of articles. The categorised data was based on the aim of the review ( Johnston et al., 2019 ) and synthesised in tabular form under the methods used, how these methods were used, and for what topics in the field of psychology.
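The extraction step described above — one spreadsheet row per article, combining audit-trail identifiers with the codes of interest taken verbatim from the text — can be sketched as a small tabulation. This is a hypothetical illustration, not the authors' actual spreadsheet: the field names, example article, and CSV layout are all invented for demonstration.

```python
import csv
import io

# Codes of interest named in the text; the field names themselves are assumed.
CODES_OF_INTEREST = ["topic", "method", "design", "sampling",
                     "data_collection", "data_analysis"]
IDENTIFIERS = ["authors", "title", "aim"]  # audit-trail columns

def extract_row(article):
    """Build one evidence-trail row: identifiers plus codes of interest,
    carried over verbatim rather than inferred."""
    row = {k: article.get(k, "") for k in IDENTIFIERS}
    row.update({k: article.get(k, "") for k in CODES_OF_INTEREST})
    return row

# Invented example record standing in for one screened article.
articles = [{"authors": "Smith & Jones", "title": "An example study",
             "aim": "…", "topic": "social psychology",
             "method": "quantitative", "design": "cross-sectional",
             "sampling": "convenience", "data_collection": "questionnaire",
             "data_analysis": "regression"}]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=IDENTIFIERS + CODES_OF_INTEREST)
writer.writeheader()
for a in articles:
    writer.writerow(extract_row(a))
```

Keeping the identifiers in every row is what makes the spreadsheet an audit trail: a second coder can trace any categorisation back to the article it came from and verify it for completeness and accuracy.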

The initial search produced a total of 1,145 articles from the 5 journals identified. Inclusion and exclusion criteria resulted in a final sample of 999 articles ( Figure 2 ). Articles were co-coded into 84 codes, from which 10 themes were derived ( Table 1 ).

Figure 2 . Journal article frequency.

Table 1 . Codes used to form themes (research topics).

These 10 themes represent the topic section of our research question ( Figure 3 ). All these topics, except for the final one, psychological practice, were found to concur with the research areas in psychology as identified by Weiten (2010) . These research areas were chosen to represent the derived codes as they provided broad definitions that allowed for clear, concise categorisation of the vast amount of data. Article codes were categorised under particular themes/topics if they adhered to the research area definitions created by Weiten (2010) . It is important to note that these areas of research do not refer to specific disciplines in psychology, such as industrial psychology, but to broader fields that may encompass sub-interests of these disciplines.

Figure 3 . Topic frequency (international sample).

In the case of developmental psychology, researchers conduct research into human development from childhood to old age. Social psychology includes research on behaviour governed by social drivers. Researchers in the field of educational psychology study how people learn and the best ways to teach them. Health psychology aims to determine the effect of psychological factors on physiological health. Physiological psychology, on the other hand, looks at the influence of physiological aspects on behaviour. Experimental psychology, although not the only theme to use experimental research, focuses on the traditional core topics of psychology (for example, sensation). Cognitive psychology studies the higher mental processes. Psychometrics is concerned with measuring capacity or behaviour. Personality research aims to assess and describe consistency in human behaviour ( Weiten, 2010 ). The final theme of psychological practice refers to the experiences, techniques, and interventions employed by practitioners, researchers, and academia in the field of psychology.

Articles under these themes were further subdivided by methodology: method, sampling, design, data collection, and data analysis. The categorisation was based on information stated in the articles and not inferred by the researchers. Data were compiled into two sets of results presented in this article. The first set addresses the aim of this study from the perspective of the topics identified; the second set provides a broad overview from the perspective of the methodology employed. The second set of results is discussed in this article, while the first set is presented in table format. The discussion thus provides a broad overview of method use in psychology (across all themes), while the tables give readers in-depth insight into the methods used in the individual themes identified. We believe that presenting the data from both perspectives allows readers a broad understanding of the results. Due to the large amount of information that made up our results, we followed Cichocka and Jost (2014) in simplifying them. Please note that the numbers indicated in the tables for each methodology differ from the total number of articles, because some articles employed more than one method/sampling technique/design/data collection method/data analysis in their studies.

What follows are the results for what methods are used, how these methods are used, and which topics in psychology they are applied to . Percentages are reported to the second decimal in order to highlight small differences in the occurrence of methodology.
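As a minimal illustrative sketch (using invented article codes, not the review's data), the tallying described above, in which a single article may contribute more than one method code so that category totals exceed the article count, and percentages are reported to two decimals, could look like this; the article list, code names, and resulting counts are all hypothetical:

```python
from collections import Counter

# Hypothetical per-article method codes; an article may carry several codes,
# so the sum of counts can exceed the number of articles.
articles = [
    ["quantitative"],
    ["quantitative", "qualitative"],  # a combined-methodology study
    ["review"],
    ["quantitative"],
]

# Tally every occurrence of a method code across all articles.
counts = Counter(code for codes in articles for code in codes)

# Percentages are taken over occurrences, not over articles.
total = sum(counts.values())
percentages = {m: round(100 * n / total, 2) for m, n in counts.items()}
print(percentages)
```

The same pattern extends to sampling techniques, designs, and analysis methods: each gets its own multi-label tally, which is why, as noted above, the tabulated totals need not match the number of articles.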

Firstly, with regard to the research methods used, our results show that researchers are more likely to use quantitative research methods (90.22%) compared to all other research methods. Qualitative research was the second most common research method but only made up about 4.79% of the general method usage. Reviews occurred almost as much as qualitative studies (3.91%), as the third most popular method. Mixed-methods research studies (0.98%) occurred across most themes, whereas multi-method research was indicated in only one study and amounted to 0.10% of the methods identified. The specific use of each method in the topics identified is shown in Table 2 and Figure 4 .


Table 2 . Research methods in psychology.


Figure 4 . Research method frequency in topics.

Secondly, in the case of how these research methods are employed , our study indicated the following.

Sampling—78.34% of the studies in the collected articles did not specify a sampling method. From the remainder of the studies, 13 types of sampling method were identified. These included broad categorisations of a sample as, for example, a probability or non-probability sample. General samples of convenience were the most likely to be applied (10.34%), followed by random sampling (3.51%), snowball sampling (2.73%), purposive sampling (1.37%), and cluster sampling (1.27%). The remaining sampling methods each occurred to a more limited extent (0–1.0%). See Table 3 and Figure 5 for the sampling methods employed in each topic.


Table 3 . Sampling use in the field of psychology.


Figure 5 . Sampling method frequency in topics.

Designs were categorised based on what the articles explicitly stated. It is therefore important to note that, in the case of quantitative studies, non-experimental designs (25.55%) were often recorded because a study contained no experiment and no other indication of design, which, according to Laher (2016) , is a reasonable categorisation. Non-experimental designs should thus be compared with experimental designs only when describing the data, as the category could include correlational/cross-sectional designs that were not overtly stated by the authors. For the remainder of the research methods, “not stated” (7.12%) was assigned to articles without an indicated design type.

Of the 36 identified designs, the most popular were experimental (25.64%) and cross-sectional (23.17%) designs, which concurred with the high number of quantitative studies. Longitudinal designs (3.80%), the third most popular, were used in both quantitative and qualitative studies. Qualitative designs consisted of ethnography (0.38%), interpretative phenomenological designs/phenomenology (0.28%), and narrative designs (0.28%). Studies that employed the review method were mostly categorised as “not stated,” with the most often stated review design being the systematic review (0.57%). The few mixed-methods studies employed exploratory, explanatory (0.09%), and concurrent designs (0.19%), with some studies referring to separate designs for the qualitative and quantitative methods. The one study that identified itself as a multi-method study used a longitudinal design. Table 4 and Figure 6 show how these designs were employed in each specific topic.


Table 4 . Design use in the field of psychology.


Figure 6 . Design frequency in topics.

Data collection and analysis—data collection included 30 methods, the most often employed being questionnaires (57.84%). The experimental task (16.56%), which included established tasks as well as unique tasks designed by the researchers, was the second most preferred collection method. Cognitive ability tests (6.84%) were also regularly used, along with various forms of interviewing (7.66%). Table 5 and Figure 7 represent data collection use in the various topics. Data analysis consisted of 3,857 occurrences of data analysis categorised into approximately 188 data analysis techniques, shown in Table 6 and Figures 1 – 7 . Descriptive statistics were the most commonly used (23.49%), along with correlational analysis (17.19%). When using a qualitative method, researchers generally employed thematic analysis (0.52%) or other forms of analysis that led to coding and the creation of themes. Review studies presented few data analysis methods, with most categorising their results. Mixed-methods and multi-method studies followed the analysis methods identified for the qualitative and quantitative studies they included.


Table 5 . Data collection in the field of psychology.


Figure 7 . Data collection frequency in topics.


Table 6 . Data analysis in the field of psychology.

Results for the topics researched in psychology can be seen in the tables, as previously stated in this article. It is noteworthy that, of the 10 topics, social psychology accounted for 43.54% of the studies, with cognitive psychology the second most popular research topic at 16.92%. Each of the remaining topics occurred in only 4.0–7.0% of the articles considered. A list of the 999 included articles is available under the section “View Articles” on the following website: https://methodgarden.xtrapolate.io/ . This website was created by Scholtz et al. (2019) to visually present a research framework based on this article's results.

This systematised review categorised full-length articles from five international journals across a span of 5 years to provide insight into the use of research methods in the field of psychology. Results indicated what methods are used, how these methods are being used, and for what topics (why) in the included sample of articles. The results should be seen as providing insight into method use and are by no means a comprehensive representation of the aforementioned aim, due to the limited sample. To our knowledge, this is the first research study to address this topic in this manner. Our discussion attempts to promote a productive way forward in terms of the key results for method use in psychology, especially in the field of academia ( Holloway, 2008 ).

With regard to the methods used, our data stayed true to the literature, finding only common research methods ( Grant and Booth, 2009 ; Maree, 2016 ) that varied in the degree to which they were employed. Quantitative research was found to be the most popular method, as indicated in the literature ( Breen and Darlaston-Jones, 2010 ; Counsell and Harlow, 2017 ) and by previous studies in specific areas of psychology (see Coetzee and Van Zyl, 2014 ). Its long history as the first research method ( Leech et al., 2007 ) in the field of psychology, as well as researchers' current application of mathematical approaches in their studies ( Toomela, 2010 ), might contribute to its popularity today. Whatever the case may be, our results show that, despite the growth in qualitative research ( Demuth, 2015 ; Smith and McGannon, 2018 ), quantitative research remains the first choice for article publication in these journals, even though the included journals indicate openness to articles applying any research method. This finding may be due to qualitative research still being seen as a new method ( Burman and Whelan, 2011 ) or to reviewers' standards being higher for qualitative studies ( Bluhm et al., 2011 ). Future research into possible bias in the publication of research methods is encouraged; further investigation of the proclaimed growth of qualitative research, with a different sample, may also provide different results.

Review studies were found to outnumber multi-method and mixed-methods studies. To this effect, Grant and Booth (2009) state that increased awareness, journal calls for contributions, and the method's efficiency in procuring research funds all promote the popularity of reviews. The low frequency of mixed-methods studies contradicts the view in the literature that it is the third most utilised research method ( Tashakkori and Teddlie, 2003 ). Its low occurrence in this sample could be due to opposing views on mixing methods ( Gunasekare, 2015 ), to authors preferring to publish in mixed-methods journals when using this method, or to its relative novelty ( Ivankova et al., 2016 ). Despite its low occurrence, the application of the mixed-methods design was methodologically clear in all cases, which was not the case for the remainder of the research methods.

Additionally, a substantial number of studies used a combination of methodologies without being mixed-methods or multi-method studies. According to the literature, perceived fixed boundaries are often set aside in order to address the aim of a study, as this result confirms, which could create a new and helpful way of understanding the world ( Gunasekare, 2015 ). According to Toomela (2010) , this is not unheard of and could be considered a form of “structural systemic science,” as in the case of a qualitative methodology (observation) applied in a quantitative study (experimental design), for example. Based on this result, further research into this phenomenon, as well as its implications for research methods such as multi-method and mixed-methods research, is recommended.

Discerning how these research methods were applied presented some difficulty. In the case of sampling, most studies, regardless of method, did mention some form of inclusion and exclusion criteria, but no definite sampling method. This result, along with the fact that samples often consisted of students from the researchers' own academic institutions, can contribute to the literature and to debates among academics ( Peterson and Merunka, 2014 ; Laher, 2016 ). Samples of convenience, and students as participants in particular, raise questions about the generalisability and applicability of results ( Peterson and Merunka, 2014 ). Attention to sampling is important because inappropriate sampling can undermine the legitimacy of interpretations ( Onwuegbuzie and Collins, 2017 ). Future investigation into the possible implications of this popular use of convenience samples for the field of psychology, as well as the reasons for this use, could provide interesting insight and is encouraged by this study.

Additionally, as indicated in Table 6 , articles seldom reported the research designs used, which highlights a pressing lack of rigour in the included sample. Rigour with regard to the applied empirical method is imperative in promoting psychology as a science ( American Psychological Association, 2020 ). Omitting parts of the research process from a publication, when they could have been used to inform others' research skills, should be questioned, and the influence on the process of replicating results should be considered. Publications are often rejected due to a lack of rigour in the applied method and design ( Fonseca, 2013 ; Laher, 2016 ), calling for increased clarity and knowledge of method application. Replication is a critical part of any field of scientific research and requires the “complete articulation” of the study methods used ( Drotar, 2010 , p. 804). The lack of thorough description could be explained by the requirements of certain journals to report only certain aspects of the research process, especially with regard to the applied design ( Laher, 2016 ). However, naming aspects such as sampling and design is a requirement of the APA's Journal Article Reporting Standards (JARS-Quant) ( Appelbaum et al., 2018 ). With very little information on how a study was conducted, authors lose a valuable opportunity to enhance research validity, enrich the knowledge of others, and contribute to the growth of psychology and methodology as a whole. In the case of this research study, it also restricted our results to reported samples and designs only, which indicated a preference for certain designs, such as cross-sectional designs for quantitative studies.

Data collection and analysis were for the most part clearly stated. A key result was the versatile use of questionnaires. Researchers would apply a questionnaire in various ways, for example in questionnaire interviews, online surveys, and written questionnaires across most research methods. This may highlight a trend for future research.

With regard to the topics these methods were employed for, our research study found a new field, which we named “psychological practice.” This result may show the growing consciousness of researchers as part of the research process ( Denzin and Lincoln, 2003 ), of psychological practice, and of knowledge generation. The most popular of these topics was social psychology, which is generously covered in journals and by learned societies, a testament to the institutional support and richness social psychology enjoys in the field of psychology ( Chryssochoou, 2015 ). The APA's perspective on 2018 trends in psychology also identifies an increased focus in psychology on how social determinants influence people's health ( Deangelis, 2017 ).

This study was not without limitations, and the following should be taken into account. Firstly, this study used a sample of five specific journals to address its aim; despite the general aims of these journals (as stated on journal websites), this choice biased the results towards the research methods published in these specific journals and limited generalisability. A broader sample of journals over a different period of time, or a single journal over a longer period of time, might provide different results. A second limitation is the use of Excel spreadsheets and an electronic system to log articles, which was a manual process and therefore left room for error ( Bandara et al., 2015 ). To address this potential issue, co-coding was performed to reduce error. Lastly, this article categorised data based on the information presented in the article sample; there was no interpretation of what methodology could have been applied or of whether the stated methods adhered to the criteria for those methods. Thus, the large number of articles that did not clearly indicate a research method or design could influence the results of this review, although this in itself was also a noteworthy result. Future research could review the research methods of a broader sample of journals with an interpretive review tool that increases rigour. Additionally, the authors encourage the future use of systematised review designs as a way to promote a concise procedure in applying this design.

Our research study presented the use of research methods in published articles in the field of psychology, as well as recommendations for future research based on these results. Insight was gained into the complex questions identified in the literature regarding what methods are used, how these methods are being used, and for what topics (why). This sample preferred quantitative methods, used convenience sampling, and lacked rigorous accounts of the remaining methodologies. All methodologies that were clearly indicated in the sample were tabulated to give researchers insight into the general use of methods, not only the most frequently used ones. The lack of a rigorous account of research methods in articles was represented in depth for each step in the research process and can be of vital importance in addressing the current replication crisis within the field of psychology. The recommendations for future research aim to motivate research into the practical implications of these results for psychology, for example publication bias and the use of convenience samples.

Ethics Statement

This study was cleared by the North-West University Health Research Ethics Committee: NWU-00115-17-S1.

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Aanstoos, C. M. (2014). Psychology . Available online at: http://eds.a.ebscohost.com.nwulib.nwu.ac.za/eds/detail/detail?sid=18de6c5c-2b03-4eac-94890145eb01bc70%40sessionmgr4006&vid=1&hid=4113&bdata=JnNpdGU9ZWRzL~WxpdmU%3d#AN=93871882&db=ers


American Psychological Association (2020). Science of Psychology . Available online at: https://www.apa.org/action/science/

Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., and Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: the APA Publications and Communications Board task force report. Am. Psychol. 73:3. doi: 10.1037/amp0000191


Bandara, W., Furtmueller, E., Gorbacheva, E., Miskon, S., and Beekhuyzen, J. (2015). Achieving rigor in literature reviews: insights from qualitative data analysis and tool-support. Commun. Ass. Inform. Syst. 37, 154–204. doi: 10.17705/1CAIS.03708


Barr-Walker, J. (2017). Evidence-based information needs of public health workers: a systematized review. J. Med. Libr. Assoc. 105, 69–79. doi: 10.5195/JMLA.2017.109

Bittermann, A., and Fischer, A. (2018). How to identify hot topics in psychology using topic modeling. Z. Psychol. 226, 3–13. doi: 10.1027/2151-2604/a000318

Bluhm, D. J., Harman, W., Lee, T. W., and Mitchell, T. R. (2011). Qualitative research in management: a decade of progress. J. Manage. Stud. 48, 1866–1891. doi: 10.1111/j.1467-6486.2010.00972.x

Breen, L. J., and Darlaston-Jones, D. (2010). Moving beyond the enduring dominance of positivism in psychological research: implications for psychology in Australia. Aust. Psychol. 45, 67–76. doi: 10.1080/00050060903127481

Burman, E., and Whelan, P. (2011). Problems in / of Qualitative Research . Maidenhead: Open University Press/McGraw Hill.

Chaichanasakul, A., He, Y., Chen, H., Allen, G. E. K., Khairallah, T. S., and Ramos, K. (2011). Journal of Career Development: a 36-year content analysis (1972–2007). J. Career. Dev. 38, 440–455. doi: 10.1177/0894845310380223

Chryssochoou, X. (2015). Social Psychology. Inter. Encycl. Soc. Behav. Sci. 22, 532–537. doi: 10.1016/B978-0-08-097086-8.24095-6

Cichocka, A., and Jost, J. T. (2014). Stripped of illusions? Exploring system justification processes in capitalist and post-Communist societies. Inter. J. Psychol. 49, 6–29. doi: 10.1002/ijop.12011

Clay, R. A. (2017). Psychology is More Popular Than Ever. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trends-popular

Coetzee, M., and Van Zyl, L. E. (2014). A review of a decade's scholarly publications (2004–2013) in the South African Journal of Industrial Psychology. SA. J. Psychol . 40, 1–16. doi: 10.4102/sajip.v40i1.1227

Counsell, A., and Harlow, L. (2017). Reporting practices and use of quantitative methods in Canadian journal articles in psychology. Can. Psychol. 58, 140–147. doi: 10.1037/cap0000074

Deangelis, T. (2017). Targeting Social Factors That Undermine Health. Monitor on Psychology: Trends Report . Available online at: https://www.apa.org/monitor/2017/11/trend-social-factors

Demuth, C. (2015). New directions in qualitative research in psychology. Integr. Psychol. Behav. Sci. 49, 125–133. doi: 10.1007/s12124-015-9303-9

Denzin, N. K., and Lincoln, Y. (2003). The Landscape of Qualitative Research: Theories and Issues , 2nd Edn. London: Sage.

Drotar, D. (2010). A call for replications of research in pediatric psychology and guidance for authors. J. Pediatr. Psychol. 35, 801–805. doi: 10.1093/jpepsy/jsq049

Dweck, C. S. (2017). Is psychology headed in the right direction? Yes, no, and maybe. Perspect. Psychol. Sci. 12, 656–659. doi: 10.1177/1745691616687747

Earp, B. D., and Trafimow, D. (2015). Replication, falsification, and the crisis of confidence in social psychology. Front. Psychol. 6:621. doi: 10.3389/fpsyg.2015.00621

Ezeh, A. C., Izugbara, C. O., Kabiru, C. W., Fonn, S., Kahn, K., Manderson, L., et al. (2010). Building capacity for public and population health research in Africa: the consortium for advanced research training in Africa (CARTA) model. Glob. Health Action 3:5693. doi: 10.3402/gha.v3i0.5693

Ferreira, A. L. L., Bessa, M. M. M., Drezett, J., and De Abreu, L. C. (2016). Quality of life of the woman carrier of endometriosis: systematized review. Reprod. Clim. 31, 48–54. doi: 10.1016/j.recli.2015.12.002

Fonseca, M. (2013). Most Common Reasons for Journal Rejections . Available online at: http://www.editage.com/insights/most-common-reasons-for-journal-rejections

Gough, B., and Lyons, A. (2016). The future of qualitative research in psychology: accentuating the positive. Integr. Psychol. Behav. Sci. 50, 234–243. doi: 10.1007/s12124-015-9320-8

Grant, M. J., and Booth, A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info. Libr. J. 26, 91–108. doi: 10.1111/j.1471-1842.2009.00848.x

Grix, J. (2002). Introducing students to the generic terminology of social research. Politics 22, 175–186. doi: 10.1111/1467-9256.00173

Gunasekare, U. L. T. P. (2015). Mixed research method as the third research paradigm: a literature review. Int. J. Sci. Res. 4, 361–368. Available online at: https://ssrn.com/abstract=2735996

Hengartner, M. P. (2018). Raising awareness for the replication crisis in clinical psychology by focusing on inconsistencies in psychotherapy Research: how much can we rely on published findings from efficacy trials? Front. Psychol. 9:256. doi: 10.3389/fpsyg.2018.00256

Holloway, W. (2008). Doing intellectual disagreement differently. Psychoanal. Cult. Soc. 13, 385–396. doi: 10.1057/pcs.2008.29

Ivankova, N. V., Creswell, J. W., and Plano Clark, V. L. (2016). “Foundations and Approaches to mixed methods research,” in First Steps in Research , 2nd Edn. K. Maree (Pretoria: Van Schaick Publishers), 306–335.

Johnson, M., Long, T., and White, A. (2001). Arguments for British pluralism in qualitative health research. J. Adv. Nurs. 33, 243–249. doi: 10.1046/j.1365-2648.2001.01659.x

Johnston, A., Kelly, S. E., Hsieh, S. C., Skidmore, B., and Wells, G. A. (2019). Systematic reviews of clinical practice guidelines: a methodological guide. J. Clin. Epidemiol. 108, 64–72. doi: 10.1016/j.jclinepi.2018.11.030

Ketchen, D. J. Jr., Boyd, B. K., and Bergh, D. D. (2008). Research methodology in strategic management: past accomplishments and future challenges. Organ. Res. Methods 11, 643–658. doi: 10.1177/1094428108319843

Ktepi, B. (2016). Data Analytics (DA) . Available online at: https://eds-b-ebscohost-com.nwulib.nwu.ac.za/eds/detail/detail?vid=2&sid=24c978f0-6685-4ed8-ad85-fa5bb04669b9%40sessionmgr101&bdata=JnNpdGU9ZWRzLWxpdmU%3d#AN=113931286&db=ers

Laher, S. (2016). Ostinato rigore: establishing methodological rigour in quantitative research. S. Afr. J. Psychol. 46, 316–327. doi: 10.1177/0081246316649121

Lee, C. (2015). The Myth of the Off-Limits Source . Available online at: http://blog.apastyle.org/apastyle/research/

Lee, T. W., Mitchell, T. R., and Sablynski, C. J. (1999). Qualitative research in organizational and vocational psychology, 1979–1999. J. Vocat. Behav. 55, 161–187. doi: 10.1006/jvbe.1999.1707

Leech, N. L., Anthony, J., and Onwuegbuzie, A. J. (2007). A typology of mixed methods research designs. Sci. Bus. Media B. V Qual. Quant 43, 265–275. doi: 10.1007/s11135-007-9105-3

Levitt, H. M., Motulsky, S. L., Wertz, F. J., Morrow, S. L., and Ponterotto, J. G. (2017). Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity. Qual. Psychol. 4, 2–22. doi: 10.1037/qup0000082

Lowe, S. M., and Moore, S. (2014). Social networks and female reproductive choices in the developing world: a systematized review. Rep. Health 11:85. doi: 10.1186/1742-4755-11-85

Maree, K. (2016). “Planning a research proposal,” in First Steps in Research , 2nd Edn, ed K. Maree (Pretoria: Van Schaik Publishers), 49–70.

Maree, K., and Pietersen, J. (2016). “Sampling,” in First Steps in Research, 2nd Edn , ed K. Maree (Pretoria: Van Schaik Publishers), 191–202.

Ngulube, P. (2013). Blending qualitative and quantitative research methods in library and information science in sub-Saharan Africa. ESARBICA J. 32, 10–23. Available online at: http://hdl.handle.net/10500/22397 .

Nieuwenhuis, J. (2016). “Qualitative research designs and data-gathering techniques,” in First Steps in Research , 2nd Edn, ed K. Maree (Pretoria: Van Schaik Publishers), 71–102.

Nind, M., Kilburn, D., and Wiles, R. (2015). Using video and dialogue to generate pedagogic knowledge: teachers, learners and researchers reflecting together on the pedagogy of social research methods. Int. J. Soc. Res. Methodol. 18, 561–576. doi: 10.1080/13645579.2015.1062628

O'Cathain, A. (2009). Editorial: mixed methods research in the health sciences—a quiet revolution. J. Mix. Methods 3, 1–6. doi: 10.1177/1558689808326272

O'Neil, S., and Koekemoer, E. (2016). Two decades of qualitative research in psychology, industrial and organisational psychology and human resource management within South Africa: a critical review. SA J. Indust. Psychol. 42, 1–16. doi: 10.4102/sajip.v42i1.1350

Onwuegbuzie, A. J., and Collins, K. M. (2017). The role of sampling in mixed methods research enhancing inference quality. Köln Z Soziol. 2, 133–156. doi: 10.1007/s11577-017-0455-0

Perestelo-Pérez, L. (2013). Standards on how to develop and report systematic reviews in psychology and health. Int. J. Clin. Health Psychol. 13, 49–57. doi: 10.1016/S1697-2600(13)70007-3

Pericall, L. M. T., and Taylor, E. (2014). Family function and its relationship to injury severity and psychiatric outcome in children with acquired brain injury: a systematized review. Dev. Med. Child Neurol. 56, 19–30. doi: 10.1111/dmcn.12237

Peterson, R. A., and Merunka, D. R. (2014). Convenience samples of college students and research reproducibility. J. Bus. Res. 67, 1035–1041. doi: 10.1016/j.jbusres.2013.08.010

Ritchie, J., Lewis, J., and Elam, G. (2009). “Designing and selecting samples,” in Qualitative Research Practice: A Guide for Social Science Students and Researchers , 2nd Edn, ed J. Ritchie and J. Lewis (London: Sage), 1–23.

Sandelowski, M. (2011). When a cigar is not just a cigar: alternative perspectives on data and data analysis. Res. Nurs. Health 34, 342–352. doi: 10.1002/nur.20437

Sandelowski, M., Voils, C. I., and Knafl, G. (2009). On quantitizing. J. Mix. Methods Res. 3, 208–222. doi: 10.1177/1558689809334210

Scholtz, S. E., De Klerk, W., and De Beer, L. T. (2019). A data generated research framework for conducting research methods in psychological research.

Scimago Journal & Country Rank (2017). Available online at: http://www.scimagojr.com/journalrank.php?category=3201&year=2015

Scopus (2017a). About Scopus . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).

Scopus (2017b). Document Search . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).

Scott Jones, J., and Goldring, J. E. (2015). ‘I'm not a quants person': key strategies in building competence and confidence in staff who teach quantitative research methods. Int. J. Soc. Res. Methodol. 18, 479–494. doi: 10.1080/13645579.2015.1062623

Smith, B., and McGannon, K. R. (2018). Developing rigor in qualitative research: problems and opportunities within sport and exercise psychology. Int. Rev. Sport Exerc. Psychol. 11, 101–121. doi: 10.1080/1750984X.2017.1317357

Stangor, C. (2011). Introduction to Psychology . Available online at: http://www.saylor.org/books/

Strydom, H. (2011). “Sampling in the quantitative paradigm,” in Research at Grass Roots; For the Social Sciences and Human Service Professions , 4th Edn, eds A. S. de Vos, H. Strydom, C. B. Fouché, and C. S. L. Delport (Pretoria: Van Schaik Publishers), 221–234.

Tashakkori, A., and Teddlie, C. (2003). Handbook of Mixed Methods in Social & Behavioural Research . Thousand Oaks, CA: SAGE publications.

Toomela, A. (2010). Quantitative methods in psychology: inevitable and useless. Front. Psychol. 1:29. doi: 10.3389/fpsyg.2010.00029

Truscott, D. M., Swars, S., Smith, S., Thornton-Reid, F., Zhao, Y., Dooley, C., et al. (2010). A cross-disciplinary examination of the prevalence of mixed methods in educational research: 1995–2005. Int. J. Soc. Res. Methodol. 13, 317–328. doi: 10.1080/13645570903097950

Weiten, W. (2010). Psychology Themes and Variations . Belmont, CA: Wadsworth.

Keywords: research methods, research approach, research trends, psychological research, systematised review, research designs, research topic

Citation: Scholtz SE, de Klerk W and de Beer LT (2020) The Use of Research Methods in Psychological Research: A Systematised Review. Front. Res. Metr. Anal. 5:1. doi: 10.3389/frma.2020.00001

Received: 30 December 2019; Accepted: 28 February 2020; Published: 20 March 2020.


Copyright © 2020 Scholtz, de Klerk and de Beer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Salomé Elizabeth Scholtz, 22308563@nwu.ac.za

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
