FAQs

Got questions about TA? We have prepared answers to some of the most common ones we receive.

Reflexive TA | the basics

What’s the difference between a code and a theme?

A theme captures a common, recurring pattern across a dataset, clustered around a central organising concept. A theme tends to describe the different facets of that singular idea, demonstrating the patterning of the theme in the dataset.

Codes tend to be more specific than themes. They capture a single idea associated with a segment of data, and are identified with a pithy coding label that captures and evokes what is of analytic interest in the data (in relation to the research question). Codes can be conceptualised as the building blocks that combine to create themes – so multiple codes are typically combined to create themes during the process of TA.

The distinction isn’t absolute, however – particularly ‘meaty’ and complex codes might be promoted to an initial theme and form the basis of subsequent theme development.

What’s the difference between a subtheme and a theme?

A theme captures a common, recurring pattern across a dataset, clustered around a central organising concept. A theme tends to describe the different facets of a pattern across the dataset. A subtheme exists ‘underneath’ the umbrella of a theme. It shares the same central organising concept as the theme, but focuses on one notable specific element. Subthemes should generally be used sparingly – only when one specific element of a theme has a distinct focus, is notable, and/or is especially important for the research question. Through naming and analysing a specific subtheme, that aspect of the theme becomes particularly salient.

What’s the difference between a topic summary and a theme?

The difference between a theme and a topic or domain summary is a source of frequent confusion in much published TA research. A topic summary is a summary of an area (topic) of the data; for example, a summary of everything the participants said in relation to a particular topic or interview question. Unlike themes, there isn’t anything that unifies the description of what participants said about this topic – no underlying concept ties everything together and organises the analytic observations. In our reflexive approach to TA, themes are conceptualised as patterns in the data underpinned by a central concept that organises the analytic observations; this is rather different from a topic summary, and the two should ideally not be confused when using our approach. More simply put, a theme captures an aspect of patterned meaning in the data and tells the reader something about the shared meaning within it, whereas a topic summary simply summarises participants’ responses relating to a particular topic (so shared topic but not shared meaning).

To make things complicated, some approaches to TA do conceptualise themes as topic summaries (though rarely deliberately and knowingly, with an awareness that there are other conceptualisations of themes). This conceptualisation of themes is evident in both coding reliability approaches (see Boyatzis, 1998; Guest et al., 2012) and codebook approaches, such as template, framework and matrix analysis.

Sometimes the confusion between topic summaries and themes is simply an issue of poorly named themes – the theme itself is a conceptually founded pattern, but the theme name does not reflect this. ‘Experiences of Y’ or ‘Benefits of X’ are classic examples of topic-summary-type theme names. These theme names identify that, for example, ‘benefits of X’ was an important area of the data in relation to the research question(s), but they don’t communicate the essence of the theme; they don’t tell the reader something specific about these benefits or what underlying concept underpinned what the participants had to say about the benefits of X.

Literature

To understand more about the differences between topic summaries and fully realised themes, we recommend the following three papers:

Connelly, L. M., & Peltzer, J. N. (2016). Underdeveloped themes in qualitative research: Relationships with interviews and analysis. Clinical Nurse Specialist, January/February, 51-57.

DeSantis, L., & Ugarriza, D. N. (2000). The concept of theme as used in qualitative nursing research. Western Journal of Nursing Research, 22(3), 351-372.

Sandelowski, M., & Leeman, J. (2012). Writing usable qualitative health research findings. Qualitative Health Research, 22(10), 1404-1413.

We discuss this in detail in our TA book:

Braun, V., & Clarke, V. (2022). Thematic analysis: A practical guide. Sage.

We have written about this in this chapter:

Clarke, V., Braun, V., Terry, G., & Hayfield, N. (2019). Thematic analysis. In P. Liamputtong (Ed.), Handbook of research methods in health and social sciences (pp. 843-860). Springer.

We also discuss the difference in this lecture.


What is a central organising concept and why is it important in thematic analysis?

A central organising concept captures the essence of a theme. It is an idea or concept that captures and summarises the core point of a coherent and meaningful pattern in the data. If you can identify the central organising concept of a theme, you can capture the core of what your theme is about. If you cannot do this, your theme may lack coherence.

Reflexive TA in context: contrasts with other approaches to TA

What’s the difference between reflexive thematic analysis (e.g., ‘Braun & Clarke’) and other approaches?

We distinguish between three main types of TA – our reflexive approach, coding reliability TA and codebook approaches, which include methods like template analysis and framework analysis.

Let’s consider coding reliability approaches first, exemplified by the work of US researchers such as Boyatzis (1998) and Guest et al. (2012), and Joffe (2011) in the UK. Unsurprisingly, given the name, this type of TA emphasises the accuracy or reliability of coding – pursued through the use of a structured codebook and multiple independent coders, and demonstrated by measuring the level of ‘agreement’ between coders with statistics such as Cohen’s Kappa (a Kappa of >.80 indicates a very good level of coding agreement and supposedly reliable coding). While we agree that it can be helpful to code data with another researcher (to bounce around ideas, and to reflect on your assumptions and what you might have overlooked in the data), this doesn’t necessarily result in ‘better’ coding (particularly in the sense of ‘more accurate’ or ‘more reliable’ coding), just different coding. This is why we don’t advocate the use of a coding frame, or the calculation of the inter-coder reliability scores associated with coding reliability approaches. Coding reliability approaches to TA recommend the use of a coding frame precisely because this enables the calculation of inter-coder reliability scores – a practice underpinned by the (realist/positivist) assumption that there is a reality in the data that can be accurately captured through coding, if appropriate techniques are used.
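To make concrete what such an agreement statistic involves, here is a minimal sketch of Cohen’s Kappa in Python. The coders, code labels and data below are entirely hypothetical, and in practice researchers would use established statistical software rather than hand-rolled code; the sketch simply shows the chance-corrected logic behind the >.80 threshold mentioned above.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labelling the same data segments."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of segments both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, based on each coder's overall label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Two hypothetical coders applying the labels "stigma"/"support" to 10 data
# segments, disagreeing on a single segment:
a = ["stigma"] * 6 + ["support"] * 4
b = ["stigma"] * 5 + ["support"] * 5
print(round(cohens_kappa(a, b), 2))  # → 0.8, at the 'very good agreement' threshold
```

Note how one disagreement in ten segments yields a Kappa of .80, not .90 – the statistic discounts the agreement the two coders would be expected to reach by chance alone.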

Our approach to coding is flexible and organic, and coding should evolve throughout the coding process – to some extent our approach to coding is similar to initial coding in grounded theory (Charmaz, 2006). We understand coding as an active and reflexive process that inevitably and inescapably bears the mark of the researcher(s). With no one ‘accurate’ way to code data, and meaning not viewed as fixed within data, the logic behind inter-coder reliability (and multiple independent coders) disappears. We argue that inter-coder reliability scores can be understood as showing that two researchers have been trained to code data in the same way, rather than that their coding is ‘accurate’ (Yardley, 2008). Furthermore, the use of a codebook often results in relatively superficial, coarse or concrete codes (precisely because of the need to facilitate multiple coders applying codes in the same way). We agree with Morse’s (1997) rather damning assessment that “maintaining a simplified coding schedule for the purposes of defining categories for an inter-rater reliability check […] will simplify the research to such an extent that all of the richness attained from insight will be lost” (Morse, 1997, p. 446). The positivist assumptions underpinning coding reliability approaches often limit the claimed theoretical flexibility of these types of TA.

Another important difference between coding reliability approaches and reflexive TA relates to the conceptualisation of themes as analytic inputs in coding reliability TA, rather than analytic outputs (as in reflexive TA). In coding reliability TA, themes are developed early – either prior to analysis (and often based on data collection questions) or after data familiarisation. Coding is conceptualised as a process of searching for evidence for themes (rather than codes being the building blocks for themes, as in reflexive TA). Furthermore, themes are often understood as topic summaries (as summaries of what participants said in relation to a particular topic or data collection question), rather than as patterns of shared meaning underpinned by a central organising concept. We’ve also noticed that coding reliability proponents tend to describe themes as if they exist within the data prior to analysis, and the researcher’s role is to find them. This is very different from how themes are conceptualised in reflexive TA – as analytic outputs, created from codes and through the researcher’s active engagement with their data.

Coding reliability proponents often describe TA as an approach that bridges the positivist/quantitative and interpretative/qualitative divide. Whether or not this ‘bridging’ is viewed as possible seems to hinge on how qualitative research is defined – as techniques and tools for collecting (and analysing) qualitative data (often underpinned by a positivist philosophy or paradigm), or as both techniques and an underlying qualitative philosophy. For coding reliability ‘bridgers’, qualitative research is implicitly defined as (merely) techniques and tools, and TA bridges the divide not by bringing together two (arguably incommensurate) paradigms (positivism and a qualitative paradigm), but by bringing together a positivist paradigm and qualitative techniques.

We turn now to codebook approaches – exemplified by template analysis (e.g. King & Brooks, 2017), framework analysis (e.g. Ritchie & Spencer, 1994) and matrix analysis (Miles & Huberman, 1994). If we imagine TA on a continuum, with positivist, structured coding reliability approaches at one end and reflexive TA at the other, these approaches sit somewhere in the middle. They share with coding reliability approaches the use of a more structured codebook approach to coding and the conceptualisation of themes as analytic inputs and topic summaries, but not the positivist-inflected concern for measuring coding reliability. Instead, codebook approaches share with reflexive TA a broadly qualitative philosophy or paradigm. In codebook TA, codebooks are used to map or chart the developing analysis as well as to guide data coding.

Codebook approaches typically originated in applied fields and are shaped by pragmatic concerns, such as the need to facilitate team working, with members of the team working separately to code the data (e.g. team members might each code 10 or 20 interviews from a wider dataset of 80 interviews), and some members having little or no prior experience of qualitative research. A codebook provides some reassuring and scaffolding structure for these team members. Other pragmatic concerns include working to a funder’s or commissioner’s tight deadline and meeting pre-determined information needs (e.g. identifying the barriers to, and enablers of, the successful implementation of a new social policy) – concerns that shaped the framework approach, originally developed for applied social policy research but now also popular in fields such as health and nursing (e.g. Smith & Firth, 2011). Codebook methodologists have argued that these pragmatic concerns necessarily result in the compromise of some of the open and organic principles of qualitative research.

Literature

Boyatzis, R. E. (1998). Transforming qualitative information: Thematic analysis and code development. Sage.
Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Sage.
Guest, G., MacQueen, K. M., & Namey, E. E. (2012). Applied thematic analysis. Sage.
Joffe, H. (2011). Thematic analysis. In D. Harper & A. R. Thompson (Eds.), Qualitative methods in mental health and psychotherapy: A guide for students and practitioners (pp. 209-223). Wiley.
King, N., & Brooks, J. M. (2017). Template analysis for business and management students. Sage.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Sage.
Morse, J. (1997). “Perfectly healthy, but dead”: The myth of inter-rater reliability. Qualitative Health Research, 7(4), 445–447.
Ritchie, J., & Spencer, L. (1994). Qualitative data analysis for applied policy research. In A. Bryman & R. G. Burgess (Eds.), Analysing qualitative data (pp. 173-194). Taylor & Francis.
Smith, J., & Firth, J. (2011). Qualitative data analysis: Application of the framework approach. Nurse Researcher, 18(2), 52-62.
Yardley, L. (2008). Demonstrating validity in qualitative psychology. In J. A. Smith (Ed.), Qualitative psychology: A practical guide to research methods (pp. 235-251). Sage.

What’s the difference between thematic coding and TA?

There is an approach to TA known as ‘thematic coding’ (e.g. Gibbs, 2007; Flick, 2014; Rivas, 2018), which most often involves the use of grounded theory coding techniques, and other techniques such as memoing and constant comparison, to develop themes from data (rather than the categories and concepts associated with a grounded theory analysis). This approach to TA was particularly common before there were widely cited procedures for TA, but some qualitative researchers still advocate the use of thematic coding (e.g. Flick, 2014; Rivas, 2018).

Gibbs’ (2007) account of thematic coding presents it as a generic technique for qualitative analysis that informs many different analytic approaches, centred on the development of “a framework of thematic ideas” (p. 38) about data. As such, there are many similarities between this approach and TA. Gibbs distinguishes between data-driven (descriptive) codes (close to the respondent’s terms) and concept-driven (analytic and theoretical) codes, a distinction that maps onto the semantic/latent coding distinction in TA (he also mentions ‘categories’ as a ‘mid-point’ between descriptive and analytic codes). Similarly, his distinction between data-driven and concept-driven coding maps onto inductive and deductive approaches to TA. He argues that these approaches to coding are not exclusive and that most researchers move backwards and forwards between them. Similar to the use of codebooks in coding reliability and codebook forms of TA, he advocates the use of coding memos – a staple of grounded theory – that record the label or name of the code, who coded it (if working in a team), the definition of the code (to ensure coding is systematic and consistent) and other notes on thinking about the code (including a record of any changes). But he also emphasises the importance of a flexible approach and warns researchers “not to become too tied to the initial codes you construct” (p. 46). He discusses grounded theory as one of the most commonly used approaches to thematic coding and identifies line-by-line coding and constant comparison as particularly useful techniques.

Flick (2014, 2018) argued that his development of the thematic coding method in the 1990s pre-dates the development of TA, but it seems that both TA and thematic coding have been in use since at least the 1980s. Flick (2014, 2018) described his method for thematic coding as combining ideas from TA and grounded theory (specifically Strauss, 1987) to develop an approach that has both inductive and a-priori features and involves processes of simultaneous data collection and analysis and the analysis of each case (e.g. interview) in turn before developing an overall theory or thematic structure. The steps involve first producing a short description of each case, which is continuously checked and modified throughout the analysis, and then developing a thematic structure of “thematic domains and categories” (p. 425) for each case using grounded theory processes of open and selective coding. This process is repeated for each of the initial cases and then the researcher checks and compares the thematic structure developed for each case to produce an overall structure for the first few cases, which is then assessed for each further case and modified if necessary. The approach Flick describes is very similar to codebook types of TA – in that there is some delimiting of focus before the analysis, and the thematic structure, like a framework or matrix, allows for an emphasis on both cases and themes (and a comparison of cases and groups of participants).

Rivas (2018) describes thematic coding as similar to the early stages of a grounded theory, but with a focus on participants’ lived experience in and of itself rather than as a precursor to theory development. Like grounded theory, Rivas’ version of thematic coding involves memo writing and constant comparative analysis, and it can also involve some concurrent data collection and analysis. It begins with data familiarisation before moving to open coding, then the more interpretative stages of category formation (through grouping open codes and defining the categories), and finally the most interpretative stage of theme development (which can involve the use of concept or thematic mapping to review the themes). Like TA proper, this version of thematic coding can involve either or both inductive and deductive coding.

Does it matter that researchers claim to use (some) grounded theory procedures, but actually produce something more akin to a TA than a grounded theory – particularly now that TA is a well-developed method? Is insisting on the ‘proper’ use of methods (doing what you claim to do) simply a preoccupation with the purity of method – or proceduralism (King & Brooks, 2018: 231), “where analysts focus on meticulously following set procedures rather than responding creatively, imaginatively (and systematically) to data”? We certainly agree with critics of proceduralism that rigid rules for good practice, insisting on one true way of applying a method, and avoiding interpretation, theory and taking reflexivity seriously often lead to qualitative research that is rather limited – merely descriptive/summative, with little or no interpretation of the meaning of the data, implicitly peppered with positivism, and lacking depth of engagement, thoughtfulness and creativity. However, the combining of grounded theory and TA in some qualitative research doesn’t seem to result from thoughtfulness and creativity, or indeed theoretical knowingness and reflexivity; rather, it often seems to stem from researchers’ lack of knowledge and narrow methodological range. Methodological practice is contested and debated, but not necessarily in a way that is helpful for those learning about qualitative research or developing their understanding of particular techniques or philosophical concerns; productive debate requires that researchers use techniques knowingly and are able to clearly articulate the assumptions underpinning them. On balance, we think this muddling of two distinct approaches only contributes to further confusion about qualitative research.

Literature

Flick, U. (2018). An introduction to qualitative research (6th ed.). Sage.
Gibbs, G. R. (2007). Analysing qualitative data. London: Sage.
King, N., & Brooks, J. (2018). Thematic analysis in organisational research. In C. Cassell, A. L. Cunliffe & G. Grandy (Eds.), The Sage handbook of qualitative business and management research methods (pp. 219-236). Sage.
Rivas, C. (2018). Finding themes in qualitative data. In C. Seale (Ed.), Researching society and culture, 4th ed. (pp. 429-453). Sage.
Strauss, A. L. (1987). Qualitative analysis for social scientists. Cambridge University Press.

Reflexive TA in context: contrasts with other approaches to analysis

What’s the difference between thematic analysis and IPA?

There are some differences between interpretative phenomenological analysis (IPA) and TA; however, the end results of the two can often be very similar – indeed, the end result of an IPA often doesn’t seem any different from that of a TA conducted on a small sample.

Let’s consider the divergences between IPA and TA first, which result mainly from:

  1. The fact that IPA is better thought of as a methodology (a theoretically informed and delimited framework for how you do research) rather than a method (a technique for collecting/analysing data), whereas TA is closer to a method (but particular versions of TA are informed and delimited by particular research values and paradigmatic assumptions).
  2. Differences of procedure between IPA and TA.

We can think of IPA as like a piece of ready-made furniture – all the design choices (colour-scheme, dimensions, materials etc.) have been made for you. As well as outlining a range of analytic procedures, IPA specifies:

  • what the ontological and epistemological underpinnings of your research are (critical realism and contextualism; see Larkin, Watts & Clifton, 2006),
  • what theoretical framework should inform your research (descriptive and interpretative phenomenology),
  • what types of research questions you can ask (about people’s experiences and perspectives),
  • what sampling strategy you should use (homogenous, small N), and
  • how (ideally) you should collect data (first person accounts of personal experiences in qualitative interviews) (see Smith, Flowers & Larkin, 2009).

Obviously, we are simplifying the characteristics of IPA a bit, but the main point to take away is that IPA provides an entire framework for conducting research.

By contrast, TA is like a piece of bespoke furniture you design and build yourself, where you choose your colour scheme, the dimensions of the piece, the materials, etc. It has the same broad elements as a ready-made piece of furniture but you have choices to make (e.g. between pine, oak, plywood, particle board…). Because TA is closer to a method and the hallmark of TA is its flexibility:

  • It can be used widely across the epistemological and ontological spectrum (TA can be [critical] realist or constructionist),
  • It can be underpinned by phenomenology, as well as by any number of explanatory or political theories,
  • It can be used to address a wide range of research questions (including questions about people’s experiences and perspectives),
  • There are no specific requirements for the size, constitution or selection of a participant group/dataset in TA (we would generally recommend a larger N than for IPA studies because TA does not share IPA’s idiographic focus – more on this below – but some degree of homogeneity in sampling can be pragmatically useful in smaller studies), and
  • It can be used to analyse most types of qualitative data (interviews, focus groups, diaries, qualitative surveys, secondary sources, vignettes, data generated by ethnographic and observational methods and participatory designs, creative methods like story completion tasks, visual methods, etc.)

IPA has a dual focus on the unique characteristics of individual participants (the idiographic focus mentioned above) and on patterning of meaning across participants. In contrast, TA focuses mainly on patterning of meaning across participants (this is not to say it can’t capture difference and divergence in data).

In terms of analytic procedures, both IPA and TA involve coding and theme development, but these processes are somewhat different for each approach. Coding in TA begins after a process of data familiarisation, in which the researcher notes any initial analytic observations about each data item and the entire data-set. The researcher then codes across all of the data items. The researcher either collates the data relevant to each code as they code, or they collate all the relevant data at the end of the coding process. By contrast, coding in IPA consists of a process of ‘initial commenting’ or ‘initial noting,’ in which the researcher writes their initial analytic observations about the data on the data item (if working with interview transcripts, initial notes are usually recorded in a wide margin on one side of the transcript). These initial notes are brief commentaries on the data (rather than succinct codes). This means initial noting in IPA lies somewhere between data familiarisation and coding in TA.

Another difference is that in IPA, the researcher codes their first data item then progresses to developing themes for that data item, rather than coding across the entire dataset, and then progressing to theme development. So IPA focuses on developing each stage of the analysis for each data item, before moving to the next; whereas TA involves developing each stage of analysis across the whole dataset.

With regard to types of code/initial comments, IPA refers to both ‘descriptive’ and ‘conceptual’ comments and these are very similar to ‘semantic’ and ‘latent’ codes in TA. IPA also refers to linguistic comments centred on the participant’s use of language.

In terms of procedures for theme development, there are two levels of theme development in IPA and one level in TA. In IPA, these are referred to as ‘emergent’ and ‘superordinate’ themes. Emergent themes are noted on the data item (if working with interview transcripts, emergent themes are usually recorded in a margin on one side of the transcript). Superordinate themes are developed from emergent themes. Once coding and theme development are complete for each data item, the researcher develops superordinate themes across the dataset. In TA, themes are developed from the codes (and collated data), across all data items.

As a general rule, an IPA will generate a lot more emergent themes than the number of themes generated from a TA; this means that emergent themes in IPA lie somewhere between codes and themes in a TA (somewhat akin to subthemes in a TA). An IPA will generate roughly the same number of superordinate themes as the number of themes generated from a TA. However, another difference is that superordinate themes in an IPA simply provide an organising framework for the analysis; it is the emergent themes that are discussed in detail in the write-up.

Overall, IPA procedures help the researcher to stay close to the data, because you develop codes and themes on the actual data item, and to focus on the unique characteristics of each individual participant, because you code and develop themes for each data item in turn. By contrast, the procedures of TA help the researcher to identify patterns across the entire data-set.

Despite these procedural differences, the end result of an IPA and a phenomenologically-informed TA of interview data can potentially be quite similar – analytic procedures play some role in shaping analysis, but, in general, our conceptual lenses play a much greater role.

So what does this all mean for choosing whether to use IPA or TA? We’d recommend using TA when you want to address research questions that are not about people’s experiences or perspectives, when you want to use data that do not capture first-person accounts of personal experiences (such as focus groups or story completion tasks), and when working with larger and more diverse datasets. If you are asking an experiential-type question within a phenomenological framework based on the analysis of interview data – how do you choose between IPA and TA? There’s not a lot in it, but we’d recommend using IPA if you are working with a smaller participant group/dataset and want to maintain a more idiographic focus, and TA if you are working with a larger participant group/dataset and want to focus more on patterned meaning across the data-set. We sometimes encounter the assumption that if you’re doing phenomenological research, you MUST use IPA. Therefore, it’s important to note that there is a long and diverse history of phenomenological research in the social and health sciences (see Langdridge, 2007 for an introduction to phenomenological research in psychology). Furthermore, TA has a long history as a phenomenological method that predates the development of IPA (see Braun & Clarke, 2021, 2022).

Literature

Braun, V., & Clarke, V. (2021). Can I use TA? Should I use TA? Should I not use TA? Comparing reflexive thematic analysis and other pattern‐based qualitative analytic approaches. Counselling and Psychotherapy Research, 21(1), 37-47.
Braun, V., & Clarke, V. (2022). Thematic analysis: A practical guide. Sage.
Langdridge, D. (2007). Phenomenological psychology: Theory, research and method. Pearson Education.
Larkin, M., Watts, S., & Clifton, E. (2006). Giving voice and making sense in interpretative phenomenological analysis. Qualitative Research in Psychology, 3(2), 102-120.
Smith, J. A., Flowers, P., & Larkin, M. (2009). Interpretative phenomenological analysis: Theory, method and research. Sage.

What’s the difference between thematic analysis and grounded theory?

For a start, that depends on what is meant by grounded theory! There are seemingly endless varieties and versions of grounded theory, with different theoretical underpinnings and different procedures (for a useful overview, see Birks & Mills, 2011, 2015).

One reason for the existence of many different approaches to grounded theory is that it represents one of the first attempts to develop a systematic method for analysing qualitative data. Grounded theory was developed in the 1960s (Glaser & Strauss, 1965, 1967), in the very early days of the (re)emergence of qualitative methods in the social sciences, and a lot has changed since then! For example, one famous recommended practice in early versions of grounded theory was not engaging with the relevant literature prior to beginning data analysis, to avoid the analysis being shaped by preconceptions from existing research rather than being truly grounded in the data (this practice has rather assumed the status of a commandment: ‘Thou shalt not…’).

More than five decades later, there’s a lot of grounded theory out there! This means that, at the very least, a researcher should determine what grounded theories relevant to their topic already exist. Furthermore, in the 1960s, when this recommendation was developed, researchers did not routinely submit ethics applications prior to beginning their research. Today, ethics applications (and often other forms of research proposal) are mandatory, and some engagement with the literature is therefore required to produce them. Grounded theorists now tend to recommend approaching research with an ‘open mind but not an empty head’ (see Charmaz, 2006, 2014).

Unsurprisingly, there is now a lot of discussion and debate about this and other elements of grounded theory procedure, with lots of different (competing) recommendations for how to perform a grounded theory.

We find it useful to make a distinction between a full grounded theory and an abbreviated grounded theory; the latter we have dubbed grounded theory-lite. A full grounded theory requires the implementation of the full range of grounded theory procedures, including theoretical sampling, with the aim of producing a theory grounded in data. Here’s one notable difference from TA: although TA can produce conceptually informed interpretations of data, it does not attempt to develop a theory. A full grounded theory is only achievable in a larger research project where there are the resources and time to implement practices like theoretical sampling (Pidgeon & Henwood, 1997). Charmaz and Thornberg (2020) argued that what distinguishes grounded theory methods from other pattern-generating approaches is, more than anything else, the use of theoretical sampling. Theoretical sampling involves recurrent processes of data collection and data analysis, with the developing analysis informing the selection of further participants/data sources. A grounded theory-lite involves using grounded theory coding techniques, and other techniques like memoing and constant comparison, to develop a set of categories (and concepts), and an understanding of the relationships between them (Pidgeon & Henwood, 1997). This is probably the most frequently used form of grounded theory. The practice of TA is similar to grounded theory-lite: both involve coding and the generation (and interpretation) of broader patterns in data. Differences in terminology – for example, categories versus themes – can mask the underlying similarities.

Grounded theorists are also rather keen on the development of graphic or textual models (to represent their analysis and category structure or the relationship between categories), and that is not something we particularly advocate for TA (which is not to say it can’t be done – this paper provides a nice example of a tentative model developed from a TA: Anderson & Clarke, 2019). (We do recommend the use of thematic maps in the theme development process, and these can be included in your appendices, if writing a dissertation/thesis.)

A key difference between grounded theory (full or lite) and TA is that, like most other analytic approaches, grounded theory is a methodology (a theoretically informed and delimited framework for research), not just a method (TA is closer to a method, but particular iterations of TA are delimited by particular research values and paradigmatic assumptions). All versions of grounded theory have an inbuilt theoretical framework (and ontological/epistemological assumptions), advocate the use of particular types of research question (a focus on social processes, or the factors that influence particular phenomena, is common in grounded theory) and particular methods of data collection (qualitative interviews are a key method, but there are other possibilities; see Charmaz, 2006, 2014), and prescribe a particular set of analytic procedures (coding, memoing, etc.). However, the results of a contextualist grounded theory-lite and a contextualist TA could be rather similar.

So, what does all this mean for choosing grounded theory(-lite) or TA? We’d recommend using TA:

  • when you want to address research questions that are not focused on social processes or influencing factors,
  • when using methods other than interviewing, and/or
  • when you are new to qualitative research (we think TA is more accessible and easier to implement than grounded theory).

Literature

Anderson, S., & Clarke, V. (2019). Disgust, shame and the psychosocial impact of skin picking: Evidence from an online support forum. Journal of Health Psychology, 24(13), 1773-1784.
Birks, M., & Mills, J. (2015). Grounded theory: A practical guide (2nd ed.). Sage.
Birks, M., & Mills, J. (2011). Grounded theory: A practical guide. Sage.
Charmaz, K. (2014). Constructing grounded theory (2nd ed.). Sage.
Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. Sage.
Charmaz, K., & Thornberg, R. (2020). The pursuit of quality in grounded theory. Qualitative Research in Psychology.
Glaser, B. G., & Strauss, A. L. (1965). Awareness of dying. Aldine.
Glaser, B. G., & Strauss, A. L. (1967). The discovery of grounded theory: Strategies for qualitative research. Aldine.
Pidgeon, N., & Henwood, K. (1997). Using grounded theory in psychological research. In N. Hayes (Ed.), Doing qualitative analysis in psychology (pp. 245-273). Psychology Press.

We have written about the differences between grounded theory and TA in the following paper: Braun, V., & Clarke, V. (2021). Can I use TA? Should I use TA? Should I not use TA? Comparing reflexive thematic analysis and other pattern‐based qualitative analytic approaches. Counselling and Psychotherapy Research, 21(1), 37-47.

What’s the difference between constructionist thematic analysis and discourse analysis?

There is potentially little difference between a constructionist – or ‘critical’ – TA and a thematic discourse analysis (DA) – such labels could be used to describe very similar (even identical) analytic approaches.

In general, however, we view a constructionist TA as the application of our recommended analytic procedures (including the identification of codes and themes) for TA within a constructionist theoretical framework, whereas in thematic DA, codes, themes and discourses – “underlying systems of meaning” (Taylor & Ussher, 2001, p. 297) – are identified, and the analytic process might be more fluid and organic. Furthermore, as with most DA approaches, thematic DA involves attention to the constructive role of language, and to the multiple and shifting meanings around social objects (Taylor & Ussher, 2001). But it retains a specific interest in patterned meaning (discourses) within the dataset.

For the novice ‘critical’ researcher, TA is a far more accessible method, as it has a clearly defined set of procedures, rather than relying on less clearly defined analytic practices such as ‘craft skills’ (Potter & Wetherell, 1987). Furthermore, although constructionist or critical TA recognises the constitutive nature of language and discourse, it does not generally involve a micro analysis of language use. It therefore does not require the technical knowledge of language practice that is a feature of some forms of DA (see Wiggins, 2016), and that is useful for developing the nuances of a wide range of discursive approaches, including thematic discourse analysis.

Critical TA can be applied flexibly within a wide range of critical frameworks, from symbolic interactionism and psychodynamic theory (e.g. Nicolson & Burr, 2003) to feminist poststructuralism (e.g. Farvid & Braun, 2006). It can be used to analyse almost all forms of qualitative data (and there is no ideal form of data, unlike in some versions of DA), both small and large datasets, and to address most of the different types of research question posed by constructionist and critical researchers. This means that critical TA potentially has a wider range of uses than thematic DA.

Literature

Farvid, P., & Braun, V. (2006). ‘Most of us guys are raring to go anytime, anyplace, anywhere’: Male and female sexuality in Cleo and Cosmo. Sex Roles, 55, 295-310.
Nicolson, P., & Burr, J. (2003). What is ‘normal’ about women’s (hetero)sexual desire and orgasm?: A report of an in-depth interview study. Social Science & Medicine, 57, 1735-1745.
Potter, J., & Wetherell, M. (1987). Discourse and social psychology: Beyond attitudes and behaviour. Sage.
Taylor, G. W., & Ussher, J. M. (2001). Making sense of S&M: A discourse analytic account. Sexualities, 4(3), 293-314.
Wiggins, S. (2016). Discursive psychology: Theory, method and applications. Sage.

We have written about the differences between discourse analysis and TA in the following paper: Braun, V., & Clarke, V. (2021). Can I use TA? Should I use TA? Should I not use TA? Comparing reflexive thematic analysis and other pattern‐based qualitative analytic approaches. Counselling and Psychotherapy Research, 21(1), 37-47.

We have also written about the differences between thematic discourse analysis and constructionist TA here: Clarke, V., & Braun, V. (2014). Thematic analysis. In T. Teo (Ed.), Encyclopedia of critical psychology (pp. 1947-1952). Springer.

Gareth Terry reflects on his experiences of combining thematic analysis and discourse analysis in our TA book: Braun, V., & Clarke, V. (2022). Thematic analysis: A practical guide. Sage.

What’s the difference between thematic analysis and (qualitative) content analysis?

As a starting caveat, it’s important to recognise that TA is not one approach – there are three broad approaches (which we call ‘coding reliability’, ‘codebook’ and ‘reflexive’); our approach is a form of reflexive TA. So how you make sense of the differences between content analysis and TA depends, in part, on how you define TA (and content analysis).

Historically, the terms ‘content analysis’, ‘qualitative content analysis’ and ‘thematic analysis’ have been used interchangeably to refer to often rather similar approaches to qualitative data analysis. The terms content analysis/qualitative content analysis are less popular among qualitative social researchers, particularly in countries where qualitative approaches have flourished in the last few decades, and where ‘branded’ approaches to qualitative analysis (e.g., grounded theory; interpretative phenomenological analysis) have developed and provided researchers with systematic procedures for analysing qualitative data. The term ‘thematic analysis’ is now associated with a distinctive set of procedures – the Braun and Clarke approach is the most widely used, but the coding reliability approach advocated by US researchers such as Boyatzis (1998) and Guest et al. (2012) (and Joffe in the UK) has also grown in popularity over the last decade or so. Content analysis (CA) is less clearly branded, with lots of different versions and varieties (none of which are particularly widely used within the qualitative social/health research fields).

CA and TA can be very similar/identical, or they can be very different: it depends on how people make sense of, and use, both of these methods. In practice, CA seems to have most in common with the more structured coding reliability and codebook approaches to TA. This means that, by and large, reflexive TA offers researchers a distinctive method. Like TA, CA is seen as a method rather than a methodology; however, the theoretical underpinnings of (particular enactments of) CA are rarely discussed. Thus, CA is often understood as an atheoretical method (at the same time as positivist assumptions are often imported into the method through the use of coding reliability measures). Our view is that qualitative analysis can never be atheoretical; we always make theoretical assumptions, whether acknowledged or not. For this reason, we advocate for the use of TA over CA, as TA is understood as a theoretically flexible method, not an atheoretical one.

Literature

For a more detailed discussion, see:

Braun, V., & Clarke, V. (2021). Can I use TA? Should I use TA? Should I not use TA? Comparing reflexive thematic analysis and other pattern‐based qualitative analytic approaches. Counselling and Psychotherapy Research, 21(1), 37-47.

Doing Reflexive TA

I’ve collected five interviews – is that enough for a TA?

The size of your participant group/dataset does depend on the size and scope of your project and other factors such as the richness of individual data items and the dataset as a whole, but generally speaking 5 interviews (however long and detailed) is probably a bit too small for a TA. This is because TA focuses on the identification of patterns across data. We provide detailed information about participant group/dataset size in our book Successful Qualitative Research – but in brief, we recommend a participant group/dataset size of:

  • 6-10 (rich and detailed) interviews for a small TA project (e.g. UK undergraduate; NZ Honours)
  • 10-20 for a medium TA project (e.g. UK or NZ Masters; UK Professional Doctorate)
  • 30+ for a large TA project (e.g. NZ or UK PhD; NZ Professional Doctorate).

You also need to keep in mind that if you are interested in publishing your research, a small participant group/dataset may limit your choice of publication outlets to less prestigious journals and journals that specialise in publishing qualitative research. More prestigious journals, North American journals and journals that don’t specialise in publishing qualitative research are likely to expect larger participant groups/datasets. We’ve had experiences where TA participant groups/datasets of around 20 interviews have been rejected outright from journals as being ‘too small a sample’.

Why are we critical of the notion that ‘themes emerged’ from data?

Within our reflexive TA approach, we are very clear that we do not conceptualise themes as ‘emerging’ from data and that the idea that they do is problematic (e.g. Braun & Clarke, 2006, 2013). This language suggests that meaning is self-evident and somehow ‘within’ the data waiting to be revealed, and that the researcher is a neutral conduit for the revelation of said meaning. In contrast, we conceptualise analysis as a situated and interactive process, reflecting the data, the positionality of the researcher, and the context of the research itself. Our position is not unique or particularly radical. Researchers within a qualitative paradigm tend to treat research as a subjective process. Given that understanding, it is disingenuous to evoke a process whereby themes simply emerge, instead of being active co-productions on the part of the researcher, the data/participants, and context.

Our objection applies specifically to people who claim that ‘themes emerged’ when using our reflexive TA approach. Although this position reflects our conceptualisation of the qualitative research process, it’s not universal; other qualitative analytic approaches do use the language of emergence (e.g. IPA) in a specific way, often to express the idea that themes were generated inductively from data.

We have written more about this in our TA book: Braun, V., & Clarke, V. (2022). Thematic analysis: A practical guide. Sage.

Why don’t we advocate multiple coders and inter-rater reliability for reflexive TA?

The use of multiple coders working independently to code the same data using a codebook or coding frame, and the subsequent measuring of inter-coder reliability, is underpinned by the (realist/positivist) assumption that there is a reality in the data that can be accurately captured through appropriate coding techniques – that meaning is fixed within data. Our approach to TA regards data meanings as open to multiple interpretations by subjective and situated researchers. We understand data coding as a subjective and reflexive process that inevitably and inescapably bears the mark of the researcher(s). With no one ‘accurate’ way to code data, the logic behind inter-coder reliability (and multiple independent coders) disappears. We argue that inter-coder reliability scores can be understood as showing that two researchers have been trained to code data in the same way, rather than that their coding is ‘accurate’ (Yardley, 2008). This doesn’t mean you can’t collaborate with other researchers in reflexive TA – but such collaboration is understood not as a way of achieving an accurate interpretation of the data, but as a way of deepening your reflexive engagement with it.

Literature

Yardley, L. (2008). Demonstrating validity in qualitative psychology. In J. A. Smith (Ed.), Qualitative psychology: A practical guide to research methods (pp. 235-251). Sage.

We have written more about this in our TA book: Braun, V., & Clarke, V. (2022). Thematic analysis: A practical guide. Sage.

NVivo uses codebooks, does that mean it’s incompatible with reflexive TA?

The short answer here is no, it’s not incompatible per se. But it depends on how you understand and use the codebook you can generate through NVivo. So, we need a longer answer.

Reflexive TA avoids codebooks because coding – the process of tagging meaning-relevant segments of your dataset with a coding label – is organic and open, and can and should evolve as your analytic process progresses. Your understandings will change – deepen, become more nuanced – through the coding process. Within codebook-based approaches to TA – including what we term coding reliability approaches – a codebook (or template, or framework) is developed relatively early on in the analytic process and used to ‘identify’ relevant data associated with the codes specified. In some cases the codebook is relatively fixed; in others, it can evolve somewhat. One aim is to use the codebook to check for/determine consistency in coding, across a dataset or across multiple coders.

In reflexive TA, once you determine that your coding process is ‘good enough’ and you’re ready to move into theme development, you often produce a listing of all your different codes, from which you start to explore possible themes. In NVivo, a codebook is generated from your codes (in previous versions of NVivo, these were called nodes). It can be used in reflexive TA in a similar way to a manually generated listing of codes – so not as a codebook that is then systematically ‘applied’ to the dataset to determine coding, but rather as a way to start processes of clustering and meaning-mapping. If you have used a lot of code layers, an important caveat is not to treat this initial layering as your analysis – you’re still a long way off that point.

What it comes down to is being a knowing researcher – understanding what the process of TA is, and what it allows (or constrains), and so how to use the technology in a way that is compatible with the values of the approach you’re using – whether reflexive TA or another TA method.

How many themes should I have?

Unfortunately, there is no precise formula for determining the number of themes in a TA, as it depends on the data, the research question, the length of the analytic report, and the focus of the analysis…

A TA can range from a detailed reporting of one theme (usually with subthemes) to a comprehensive overview of the dataset, in relation to your research question. In the former, you will go into more depth; in the latter, probably less.

But regardless, you don’t want your analysis to be thin, and fall victim to common errors in TA, such as paraphrasing the data, and not actually providing an interpretative analysis. If you have a report of about 10,000 words, an overview is unlikely to be able to sufficiently cover more than six themes (including subthemes) in any depth. In general, 2-6 themes (and subthemes) is about right for a single journal article, an undergraduate project, an Honours or Masters dissertation, and a single analytic chapter in a doctoral thesis.

Are latent codes/themes better than semantic ones?

TA comes in lots of different shapes and forms. None are inherently better than any other, but they serve different purposes. If, in your analysis, you want to present a more (critical) realist and descriptive account of participants’ experiences of eating, for example, then semantic codes are likely better than latent ones.

If you want to present a more constructionist account of the assumptions underpinning media reports of women and weight gain, then latent codes are likely better. Latent codes allow researchers to move away from the explicit and obvious content of the data. Regardless of approach however, your analysis needs to interpret the data and make sense of them for the reader.

Reflexive TA: What is it good for?

What types of data is reflexive thematic analysis suitable for?

There is no ideal type of data for TA. TA can be used to analyse most types of qualitative data, including interviews, focus groups, qualitative surveys, diaries, vignettes, a wide range of secondary sources (including printed materials, online and electronic materials, and broadcast media and film), and data generated from creative methods such as story completion tasks, visual methods, participatory designs, and observational and ethnographic methods. Check out our resources section for examples of papers using TA across a range of different data types.

What types of research questions is reflexive thematic analysis suitable for?

We can understand qualitative research as addressing different types of research questions, and TA can be used to address most of these, including questions about:

  • Individual experiences (e.g. What is it like to experience depression? How do people make sense of being fat?)
  • People’s views and opinions (e.g. What are people’s views on living near wind farms? How do people make sense of transracial adoption?)
  • People’s behaviours or practices – the things they do in the world, or their accounts or perceptions of their practices (e.g. How do married heterosexual couples manage their household finances? What role do clothing and adornment play in the development of a gay or lesbian identity?)
  • The reasons why people think or feel or do particular things and the factors or processes that underpin and shape particular experiences or decisions (e.g. What factors influence the decision to become a vegetarian? How do social norms shape the experience of chronic pain?)
  • Identifying and exploring the rules and norms that govern particular social practices (e.g. What are the expectations and conventions governing the behaviour of new fathers? What norms and cultural values do people navigate in using online ‘dating’ sites?)
  • How particular social objects are represented in particular contexts (NB a social object can be both something concrete and something abstract) (e.g. How is ‘female sexuality’ represented in men’s magazines? How is ‘the family’ represented in Christmas advertising?)
  • How social objects are constructed, interrogating the discourses surrounding a particular social object (How is the notion of a ‘healthy body weight’ constructed in focus groups with young women? How do young men construct ‘masculinity’?).

What all these questions have in common is a focus on what people say, on the content of the language used in the data. The only types of research question that do not suit a TA are ones focused on language practice – how people say things – and narrative structures. Language practice questions are typically associated with discursive psychological and conversation analytic approaches (and some forms of narrative analysis). Narrative structure questions are associated with some forms of narrative analysis.

Quality in reflexive TA research

What are some of the common problems I should avoid when using reflexive TA?

There is now a lot of published TA. In looking at your own, and evaluating the TA of others, the following questions will help in identifying whether common problems in TA have been avoided:

  • Have the data actually been analysed, or have they just been reported and paraphrased? Does the analysis go beyond the content of the data to tell the reader something about what the data might mean, and the implications of the patterns identified? Data need to actually be analysed.
  • Does the analysis go beyond data collection questions? Does the analysis report patterns that don’t simply translate from the questions participants were asked? In general, patterns should not map directly from data collection questions.
  • Does the researcher claim the themes ‘emerged’? This is a key problem, as it suggests themes ‘exist’ in the data waiting to be discovered, when analysis is really a more active process of developing themes through our interaction with the data.
  • Are there a lot of data extracts and very little analytic narrative? Or are analytic claims made without supporting evidence from the data? Aim for a good balance of data and analytic narrative.
  • Are themes weak and poorly realised? Are they thin and sketchy or overly complex and unwieldy? Do they lack coherence? Aim for themes that are coherent and have a clear and identifiable central organising concept.
  • Is there lots of overlap between the themes? Is the relationship between the themes unclear? Do the themes fail to work together to tell a story about the data? Aim for a set of themes in which each theme is distinctive from the others, but there is also a clear relationship between the different themes.
  • Overall, do the themes provide a rich and coherent analysis that answers the research questions? The TA needs to provide a compelling and plausible answer to the research question.
  • Is the analysis plausible? Will the reader be able to see what you claim to see in the data? Your analytic claims need to be able to be ‘seen’ in and evidenced by the data.
  • Are the assumptions around TA – and the form of TA used – clearly spelled out? The approach you use needs to be explained. Keep in mind, there is more than one type of TA; avoid ‘mashing-up’ approaches that have different philosophies and procedures.
  • Is the analysis theoretically coherent? Do the claims made about the approach to TA used (inductive vs. deductive; latent vs. semantic; essentialist vs. constructionist) and how the data are actually analysed, fit? Your use of TA needs to be theoretically coherent and consistent (e.g., if you say your analysis is deductive and constructionist, it needs to be that). Simply put, you need to do what you say you do!

We have written a checklist for editors and reviewers, but also for researchers, to help people evaluate TA papers submitted for publication and to ensure high standards in published (reflexive) TA. This checklist will also be useful for anyone critically evaluating a TA report.

How do I know if I have done a good reflexive thematic analysis?

If you follow the guidance we have provided for doing reflexive TA (e.g. Braun & Clarke 2006, 2012, 2013) and use the checklist below to avoid ‘common problems’, you will have followed a robust process, and you are well on your way to doing a good TA. However, there are no guarantees. Qualitative analysis of all types relies on the interpretative and critical analytic skills of the researcher. Therefore, the quality of a TA also depends on your analytic insights… So doing a good TA is a combination of following a robust process, taking a theoretically coherent analytic orientation to the data, and interpreting them in light of what we already know about the issue(s) being explored.

A 15-point checklist of criteria for good thematic analysis, updated from our 2006 original checklist (reproduced from Braun & Clarke, 2022)

  1. The data have been transcribed to an appropriate level of detail; all transcripts have been checked against the original recordings for ‘accuracy’.
  2. Each data item has been given thorough and repeated attention in the coding process.
  3. The coding process has been thorough, inclusive and comprehensive; themes have not been developed from a few vivid examples (an anecdotal approach).
  4. All relevant extracts for each theme have been collated.
  5. Candidate themes have been checked against coded data and back to the original dataset.
  6. Themes are internally coherent, consistent, and distinctive; each theme contains a distinct central organising concept; any subthemes share the central organising concept of the theme.
  7. Data have been analysed – interpreted, made sense of – rather than just summarised, described or paraphrased.
  8. Analysis and data match each other – the extracts evidence the analytic claims.
  9. Analysis tells a convincing and well-organised story about the data and topic; analysis addresses the research question.
  10. An appropriate balance between analytic narrative and data extracts is provided.
  11. Enough time has been allocated to complete all phases of the analysis adequately, without rushing a phase, or giving it a once-over-lightly (including returning to earlier phases or redoing the analysis if need be).
  12. The specific approach to thematic analysis, and the particulars of the approach, including theoretical positions and assumptions, are clearly explicated.
  13. There is a good fit between what was claimed, and what was done – i.e. the described method and reported analysis are consistent.
  14. The language and concepts used in the report are consistent with the ontological and epistemological positions of the analysis.
  15. The researcher is positioned as active in the research process; themes do not just ‘emerge’.

In addition, our more recently developed guidelines for editors and reviewers assessing submitted manuscripts provide further guidance towards quality reflexive TA.

Literature

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77-101.
Braun, V., & Clarke, V. (2012). Thematic analysis. In H. Cooper (Ed.), Handbook of research methods in psychology (Vol. 2: Research designs, pp. 57-71). APA Books.
Braun, V., & Clarke, V. (2013). Successful qualitative research: A practical guide for beginners. Sage.
Braun, V., & Clarke, V. (2022). Thematic analysis: A practical guide. Sage.

Should someone else check my coding?

It depends on what you mean by ‘check’! If you mean check to see if it is accurate or reliable, or that the person checking ‘agrees’ with how you have coded your data, then we would say no. The assumptions underpinning that type of ‘check’ are not consistent with our reflexive TA approach, which foregrounds researcher subjectivity in the coding and theme development processes. From this perspective, coding can’t be right or wrong, but it can be weaker (superficial) or stronger (complex, nuanced, insightful). Coding is always reflective of the subjectivity of the researcher, and one person’s subjectivity can’t be correct and another’s incorrect.

However, if you conceptualise ‘check’ as sharing your coding with an experienced supervisor (say) and using this as an opportunity to reflect on how you have coded the data, the assumptions you’re making in coding the data, and things you might have overlooked, then this is a good idea. Working with a more experienced researcher can help us to develop our skills in coding and theme development.

Should I use numbers when reporting themes?

Qualitative researchers often use expressions like ‘a common theme…’, ‘in a majority of the texts analysed…’, or ‘many participants commented that…’ when reporting TA. But wouldn’t it be better to use expressions like ‘25/34 participants…’ or ‘45% of the texts…’? Although occasionally reporting percentages or frequencies is useful, in general we argue no, it is not better to report percentages or frequencies. The reason for this stance isn’t necessarily obvious, and it can be a contentious issue – those outside a qualitative paradigm often think that indicating the actual number of participants/data items, or the proportion of the dataset, reporting a theme provides more robust evidence and validity for a qualitative analysis – so we will explain our position.

There are lots of reasons:

  • Reporting numbers reflects an anxiety about the validity of qualitative research practice, to some extent suggesting that somehow our analysis might not be real (i.e., it might be ‘made up’), or our themes might be ‘anecdotal’ rather than patterned. But all research (qualitative or quantitative) relies on trust, honesty, and good research practices. Reporting actual numbers does not circumvent this issue.
  • We agree with Australian health researcher Priscilla Pyett (2003), who argued that “counting responses misses the point of qualitative research” (p. 1174); frequency does not determine value.
  • Moreover, whether something is insightful or important for answering our research questions is not necessarily determined by whether large numbers of people said it.
  • Finally, because of the nature of qualitative data, we cannot assume what the absence of a certain meaning or theme in the data actually means. Consider the difference between a quantitative survey and qualitative data collected in an interview or focus group. In a quantitative survey, you ask people to select from a number of options. Reporting, and comparing, the proportions who select each of a series of response options is meaningful, because everyone has been asked the same thing and given the same response options. With an interview or focus group, the data generated from each participant can be quite different. Because interviews, for example, are fluid, flexible, and interactive data collection tools, it’s not the case that every participant in an interview study discusses exactly the same issues. So if someone doing an interview study with 20 men reported that ‘12 of the men thought…’, we can’t assume that the remaining eight men didn’t think this, or thought the opposite – they may simply not have discussed it. We have no way of interpreting what is not reported in qualitative data, and this makes reporting numerical proportions somewhat deceptive and disingenuous.

Literature

Pyett, P. M. (2003). Validation of qualitative research in the “real world”. Qualitative Health Research, 13(8), 1170-1179.

Teaching and supervision

Should I teach my students thematic analysis?

Yes! We teach TA to both our undergraduate and postgraduate students, and our interest in TA as a method developed from our experiences of teaching TA to our students.

In many ways, TA is an ideal ‘starter’ analytic method for those new to qualitative research because it is accessible, flexible, and involves analytic processes common to most forms of qualitative research. The theoretical flexibility of TA means it can be learned without some of the potentially bewildering (for new students) theoretical knowledge essential to many other qualitative analytic approaches. In our experience, understanding the practice of qualitative (thematic) analysis first seems to allow space for the theory of qualitative research to make sense for students new to qualitative methods. Students can progress from learning TA to learning other analytic approaches such as grounded theory, IPA and discourse analysis, or progress from producing largely descriptive TA to producing rich and complex, conceptually informed TA.

We provide some guidance on teaching TA in the following paper: Clarke, V. & Braun, V. (2013). Teaching thematic analysis: Overcoming challenges and developing strategies for effective learning. The Psychologist, 26(2), 120-123.

Through the companion website for our book Thematic analysis: A practical guide you will be able to access an online-only chapter on Teaching, supervising, and examining for quality thematic analysis. This chapter provides lots of hints and tips for teaching and supervising TA.


Is thematic analysis sophisticated enough for a doctoral project?

We sometimes hear from students and supervisors that TA is not perceived as sophisticated enough for doctoral level research, which is why we have included a FAQ on this issue. TA isn’t inherently sophisticated or unsophisticated. As with all qualitative analytic approaches, what really matters is how you implement it. We emphasise the word ‘you’ because the active role of the researcher is key to the successful implementation of any qualitative analytic approach.

We have puzzled over why people perceive TA as unsophisticated, and we think this view rests on a fundamental misunderstanding of TA. TA is often assumed to be a realist/essentialist method, or simply an atheoretical method for describing patterns in data. But this is not the case! TA is theoretically flexible, which means it can be used across a wide range of theoretical approaches. What is key here is that the researcher chooses which theoretical framework will inform their use of TA. TA can be used to produce relatively straightforward descriptive overviews of the key features of the semantic content of data (within a critical realist/essentialist framework) or complex and sophisticated conceptual interrogations of the underlying meaning in data (within, say, a constructionist framework).

As this indicates, TA is not one thing – it can be used to produce a relatively straightforward (what some might call ‘unsophisticated’) analysis, or one that is as sophisticated as an IPA, grounded theory or discourse analytic study. So yes, TA definitely provides the tools to do analysis sophisticated enough for a doctoral project (assuming it’s used well!).