Grad Coach

Qualitative Data Analysis Methods 101:

The “big 6” methods + examples.

By: Kerryn Warren (PhD) | Reviewed By: Eunice Rautenbach (D.Tech) | May 2020 (Updated April 2023)

Qualitative data analysis methods. Wow, that’s a mouthful. 

If you’re new to the world of research, qualitative data analysis can look rather intimidating. So much bulky terminology and so many abstract, fluffy concepts. It certainly can be a minefield!

Don’t worry – in this post, we’ll unpack the most popular analysis methods, one at a time, so that you can approach your analysis with confidence and competence – whether that’s for a dissertation, thesis or really any kind of research project.

Qualitative data analysis methods

What (exactly) is qualitative data analysis?

To understand qualitative data analysis, we need to first understand qualitative data – so let’s step back and ask the question, “what exactly is qualitative data?”.

Qualitative data refers to pretty much any data that’s “not numbers”. In other words, it’s not the stuff you measure using a fixed scale or complex equipment, nor do you analyse it using complex statistics or mathematics.

So, if it’s not numbers, what is it?

Words, you guessed? Well… sometimes, yes. Qualitative data can, and often does, take the form of interview transcripts, documents and open-ended survey responses – but it can also involve the interpretation of images and videos. In other words, qualitative data isn’t limited to text.

So, how’s that different from quantitative data, you ask?

Simply put, qualitative research focuses on words, descriptions, concepts or ideas – while quantitative research focuses on numbers and statistics. Qualitative research investigates the “softer side” of things to explore and describe, while quantitative research focuses on the “hard numbers” to measure differences between variables and the relationships between them. If you’re keen to learn more about the differences between qual and quant, we’ve got a detailed post over here.

qualitative data analysis vs quantitative data analysis

So, qualitative analysis is easier than quantitative, right?

Not quite. In many ways, qualitative data can be challenging and time-consuming to analyse and interpret. At the end of your data collection phase (which itself takes a lot of time), you’ll likely have many pages of text-based data or hours upon hours of audio to work through. You might also have subtle nuances of interactions or discussions that have danced around in your mind, or that you scribbled down in messy field notes. All of this needs to work its way into your analysis.

Making sense of all of this is no small task and you shouldn’t underestimate it. Long story short – qualitative analysis can be a lot of work! Of course, quantitative analysis is no piece of cake either, but it’s important to recognise that qualitative analysis still requires a significant investment in terms of time and effort.


In this post, we’ll explore qualitative data analysis by looking at some of the most common analysis methods we encounter. We’re not going to cover every possible qualitative method and we’re not going to go into heavy detail – we’re just going to give you the big picture. That said, we will of course include links to loads of extra resources so that you can learn more about whichever analysis method interests you.

Without further delay, let’s get into it.

The “Big 6” Qualitative Analysis Methods 

There are many different types of qualitative data analysis, all of which serve different purposes and have unique strengths and weaknesses. We’ll start by outlining the analysis methods and then we’ll dive into the details for each.

The 6 most popular methods (or at least the ones we see at Grad Coach) are:

  • Content analysis
  • Narrative analysis
  • Discourse analysis
  • Thematic analysis
  • Grounded theory (GT)
  • Interpretive phenomenological analysis (IPA)

Let’s take a look at each of them…

QDA Method #1: Qualitative Content Analysis

Content analysis is possibly the most common and straightforward QDA method. At the simplest level, content analysis is used to evaluate patterns within a piece of content (for example, words, phrases or images) or across multiple pieces of content or sources of communication – for example, a collection of newspaper articles or political speeches.

With content analysis, you could, for instance, identify the frequency with which an idea is shared or spoken about – like the number of times a Kardashian is mentioned on Twitter. Or you could identify patterns of deeper underlying interpretations – for instance, by identifying phrases or words in tourist pamphlets that highlight India as an ancient country.

Because content analysis can be used in such a wide variety of ways, it’s important to go into your analysis with a very specific question and goal, or you’ll get lost in the fog. With content analysis, you’ll group large amounts of text into codes, summarise these into categories, and possibly even tabulate the data to calculate the frequency of certain concepts or variables. Because of this, content analysis provides a small splash of quantitative thinking within a qualitative method.
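Just for illustration, here’s what that code-and-tabulate step might look like in a few lines of Python. Note that the codebook, the code names and the pamphlet snippets below are entirely made up for the example – in real content analysis, your codebook would be developed from your research question and refined as you read.

```python
from collections import Counter

# Hypothetical codebook: each code maps to keywords that signal it.
codebook = {
    "heritage": ["ancient", "historic", "tradition"],
    "nature": ["mountain", "river", "wildlife"],
}

# Hypothetical corpus: e.g. snippets from tourist pamphlets about India.
documents = [
    "Explore ancient temples and historic forts.",
    "Wildlife safaris in the mountain ranges draw visitors year round.",
    "A land of ancient tradition and living heritage.",
]

def tabulate_codes(documents, codebook):
    """Count how many documents each code appears in."""
    counts = Counter()
    for doc in documents:
        text = doc.lower()
        for code, keywords in codebook.items():
            if any(keyword in text for keyword in keywords):
                counts[code] += 1
    return counts

print(tabulate_codes(documents, codebook))
# → Counter({'heritage': 2, 'nature': 1})
```

Of course, this keyword matching only mimics the tabulation at the end – the actual coding (deciding what counts as “heritage” talk) is the interpretive, human part of the method.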

Naturally, while content analysis is widely useful, it’s not without its drawbacks. One of the main issues with content analysis is that it can be very time-consuming, as it requires lots of reading and re-reading of the texts. Also, because of its multidimensional focus on both qualitative and quantitative aspects, it is sometimes accused of losing important nuances in communication.

Content analysis also tends to concentrate on a very specific timeline and doesn’t take into account what happened before or after that timeline. This isn’t necessarily a bad thing though – just something to be aware of. So, keep these factors in mind if you’re considering content analysis. Every analysis method has its limitations, so don’t be put off by these – just be aware of them! If you’re interested in learning more about content analysis, the video below provides a good starting point.

QDA Method #2: Narrative Analysis 

As the name suggests, narrative analysis is all about listening to people telling stories and analysing what that means. Since stories serve a functional purpose of helping us make sense of the world, we can gain insights into the ways that people deal with and make sense of reality by analysing their stories and the ways they’re told.

You could, for example, use narrative analysis to explore whether how something is being said is important. For instance, the narrative of a prisoner trying to justify their crime could provide insight into their view of the world and the justice system. Similarly, analysing the ways entrepreneurs talk about the struggles in their careers or cancer patients telling stories of hope could provide powerful insights into their mindsets and perspectives. Simply put, narrative analysis is about paying attention to the stories that people tell – and more importantly, the way they tell them.

Of course, the narrative approach has its weaknesses, too. Sample sizes are generally quite small due to the time-consuming process of capturing narratives. Because of this, along with the multitude of social and lifestyle factors which can influence a subject, narrative analysis can be quite difficult to reproduce in subsequent research. This means that it’s difficult to test the findings of some of this research.

Similarly, researcher bias can have a strong influence on the results here, so you need to be particularly careful about the potential biases you can bring into your analysis when using this method. Nevertheless, narrative analysis is still a very useful qualitative analysis method – just keep these limitations in mind and be careful not to draw broad conclusions. If you’re keen to learn more about narrative analysis, the video below provides a great introduction to this qualitative analysis method.

QDA Method #3: Discourse Analysis 

Discourse is simply a fancy word for written or spoken language or debate. So, discourse analysis is all about analysing language within its social context. In other words, analysing language – such as a conversation or a speech – within the culture and society it takes place in. For example, you could analyse how a janitor speaks to a CEO, or how politicians speak about terrorism.

To truly understand these conversations or speeches, the culture and history of those involved in the communication are important factors to consider. For example, a janitor might speak more casually with a CEO in a company that emphasises equality among workers. Similarly, a politician might speak more about terrorism if there was a recent terrorist incident in the country.

So, as you can see, by using discourse analysis, you can identify how culture, history or power dynamics (to name a few) have an effect on the way concepts are spoken about. So, if your research aims and objectives involve understanding culture or power dynamics, discourse analysis can be a powerful method.

Because there are many social influences in terms of how we speak to each other, the potential use of discourse analysis is vast. Of course, this also means it’s important to have a very specific research question (or questions) in mind when analysing your data and looking for patterns and themes, or you might end up going down a winding rabbit hole.

Discourse analysis can also be very time-consuming, as you need to sample the data to the point of saturation – in other words, until no new information and insights emerge. But this is, of course, part of what makes discourse analysis such a powerful technique. So, keep these factors in mind when considering this QDA method. Again, if you’re keen to learn more, the video below presents a good starting point.
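The idea of sampling to saturation can be sketched as a simple loop: keep analysing new batches of data, and stop once a batch produces no codes you haven’t already seen. The batches and code labels below are hypothetical – in practice, judging whether something is genuinely “new” is itself an interpretive call.

```python
def reached_saturation(batches):
    """Return the 1-based index of the first batch that adds no new codes,
    or None if every batch still surfaced something new."""
    seen = set()
    for i, batch_codes in enumerate(batches, start=1):
        new_codes = set(batch_codes) - seen
        if not new_codes:      # nothing new emerged: saturation reached
            return i
        seen |= new_codes
    return None                # never saturated: keep sampling

# Hypothetical codes extracted from successive interview batches.
batches = [
    {"power", "formality"},
    {"formality", "humour"},
    {"power", "humour"},       # no new codes here
]
print(reached_saturation(batches))  # → 3
```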

QDA Method #4: Thematic Analysis

Thematic analysis looks at patterns of meaning in a data set – for example, a set of interviews or focus group transcripts. But what exactly does that… mean? Well, a thematic analysis takes bodies of data (which are often quite large) and groups them according to similarities – in other words, themes. These themes help us make sense of the content and derive meaning from it.

Let’s take a look at an example.

With thematic analysis, you could analyse 100 online reviews of a popular sushi restaurant to find out what patrons think about the place. By reviewing the data, you would then identify the themes that crop up repeatedly within the data – for example, “fresh ingredients” or “friendly wait staff”.
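As a toy version of that grouping step, here’s a sketch that assigns reviews to themes via keyword lists. Everything here – the themes, keywords and reviews – is invented for the example, and in a real thematic analysis the themes emerge from careful reading and coding rather than from a predefined keyword match; this only mimics the final tally.

```python
# Hypothetical themes, each with keywords that might signal it.
themes = {
    "fresh ingredients": ["fresh", "quality fish"],
    "friendly wait staff": ["friendly", "welcoming", "attentive"],
}

# Hypothetical restaurant reviews.
reviews = [
    "The salmon was incredibly fresh and the staff so friendly!",
    "Attentive service, though the rice was bland.",
    "Fresh fish, fair prices.",
]

def group_by_theme(reviews, themes):
    """Map each theme to the reviews that touch on it."""
    grouped = {theme: [] for theme in themes}
    for review in reviews:
        text = review.lower()
        for theme, keywords in themes.items():
            if any(keyword in text for keyword in keywords):
                grouped[theme].append(review)
    return grouped

for theme, hits in group_by_theme(reviews, themes).items():
    print(f"{theme}: {len(hits)} review(s)")
# fresh ingredients: 2 review(s)
# friendly wait staff: 2 review(s)
```

Note that a single review can sit under several themes – that’s normal in thematic analysis, since one comment often speaks to more than one pattern.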

So, as you can see, thematic analysis can be pretty useful for finding out about people’s experiences, views and opinions. Therefore, if your research aims and objectives involve understanding people’s experiences or views of something, thematic analysis can be a great choice.

Since thematic analysis is a bit of an exploratory process, it’s not unusual for your research questions to develop, or even change, as you progress through the analysis. While this is somewhat natural in exploratory research, it can also be seen as a disadvantage as it means that data needs to be re-reviewed each time a research question is adjusted. In other words, thematic analysis can be quite time-consuming – but for a good reason. So, keep this in mind if you choose to use thematic analysis for your project and budget extra time for unexpected adjustments.


QDA Method #5: Grounded theory (GT) 

Grounded theory is a powerful qualitative analysis method where the intention is to create a new theory (or theories) using the data at hand, through a series of “tests” and “revisions”. Strictly speaking, GT is more a research design type than an analysis method, but we’ve included it here as it’s often referred to as a method.

What’s most important with grounded theory is that you go into the analysis with an open mind and let the data speak for itself – rather than dragging existing hypotheses or theories into your analysis. In other words, your analysis must develop from the ground up (hence the name). 

Let’s look at an example of GT in action.

Assume you’re interested in developing a theory about what factors influence students to watch a YouTube video about qualitative analysis. Using grounded theory, you’d start with this general overarching question about the given population (i.e., graduate students). First, you’d approach a small sample – for example, five graduate students in a department at a university. Ideally, this sample would be reasonably representative of the broader population. You’d interview these students to identify what factors lead them to watch the video.

After analysing the interview data, a general pattern could emerge. For example, you might notice that graduate students are more likely to watch a video about qualitative methods if they are just starting on their dissertation journey, or if they have an upcoming test about research methods.

From here, you’ll look for another small sample – for example, five more graduate students in a different department – and see whether this pattern holds true for them. If not, you’ll look for commonalities and adapt your theory accordingly. As this process continues, the theory would develop. As we mentioned earlier, what’s important with grounded theory is that the theory develops from the data – not from some preconceived idea.

So, what are the drawbacks of grounded theory? Well, some argue that there’s a tricky circularity to grounded theory. For it to work, in principle, you should know as little as possible regarding the research question and population, so that you reduce the bias in your interpretation. However, in many circumstances, it’s also thought to be unwise to approach a research question without knowledge of the current literature. In other words, it’s a bit of a “chicken or the egg” situation.

Regardless, grounded theory remains a popular (and powerful) option. Naturally, it’s a very useful method when you’re researching a topic that is completely new or has very little existing research about it, as it allows you to start from scratch and work your way from the ground up.


QDA Method #6: Interpretive Phenomenological Analysis (IPA)

Interpretive. Phenomenological. Analysis. IPA . Try saying that three times fast…

Let’s just stick with IPA, okay?

IPA is designed to help you understand the personal experiences of a subject (for example, a person or group of people) concerning a major life event, an experience or a situation. This event or experience is the “phenomenon” that makes up the “P” in IPA. Such phenomena may range from relatively common events – such as motherhood, or being involved in a car accident – to those which are extremely rare – for example, someone’s personal experience in a refugee camp. So, IPA is a great choice if your research involves analysing people’s personal experiences of something that happened to them.

It’s important to remember that IPA is subject-centred. In other words, it’s focused on the experiencer. This means that, while you’ll likely use a coding system to identify commonalities, it’s important not to lose the depth of experience or meaning by trying to reduce everything to codes. Also, keep in mind that since your sample size will generally be very small with IPA, you often won’t be able to draw broad conclusions about the generalisability of your findings. But that’s okay as long as it aligns with your research aims and objectives.

Another thing to be aware of with IPA is personal bias . While researcher bias can creep into all forms of research, self-awareness is critically important with IPA, as it can have a major impact on the results. For example, a researcher who was a victim of a crime himself could insert his own feelings of frustration and anger into the way he interprets the experience of someone who was kidnapped. So, if you’re going to undertake IPA, you need to be very self-aware or you could muddy the analysis.


How to choose the right analysis method

In light of all of the qualitative analysis methods we’ve covered so far, you’re probably asking yourself the question, “How do I choose the right one?”

Much like all the other methodological decisions you’ll need to make, selecting the right qualitative analysis method largely depends on your research aims, objectives and questions. In other words, the best tool for the job depends on what you’re trying to build. For example:

  • Perhaps your research aims to analyse the use of words and what they reveal about the intention of the storyteller and the cultural context of the time.
  • Perhaps your research aims to develop an understanding of the unique personal experiences of people who have experienced a certain event, or
  • Perhaps your research aims to develop insight regarding the influence of a certain culture on its members.

As you can probably see, each of these research aims is distinctly different, and therefore a different analysis method would be suitable for each one. For example, narrative analysis would likely be a good option for the first aim, while grounded theory wouldn’t be as relevant.

It’s also important to remember that each method has its own set of strengths, weaknesses and general limitations. No single analysis method is perfect. So, depending on the nature of your research, it may make sense to adopt more than one method (this is called triangulation). Keep in mind though that this will of course be quite time-consuming.

As we’ve seen, all of the qualitative analysis methods we’ve discussed make use of coding and theme-generating techniques, but the intent and approach of each analysis method differ quite substantially. So, it’s very important to come into your research with a clear intention before you decide which analysis method (or methods) to use.

Start by reviewing your research aims, objectives and research questions to assess what exactly you’re trying to find out – then select a qualitative analysis method that fits. Never pick a method just because you like it or have experience using it – your analysis method (or methods) must align with your broader research aims and objectives.


Let’s recap on QDA methods…

In this post, we looked at six popular qualitative data analysis methods:

  • First, we looked at content analysis, a straightforward method that blends a little bit of quant into a primarily qualitative analysis.
  • Then we looked at narrative analysis , which is about analysing how stories are told.
  • Next up was discourse analysis – which is about analysing conversations and interactions.
  • Then we moved on to thematic analysis – which is about identifying themes and patterns.
  • From there, we explored grounded theory – which is about starting from scratch with a specific question and using the data alone to build a theory in response to that question.
  • And finally, we looked at IPA – which is about understanding people’s unique experiences of a phenomenon.

Of course, these aren’t the only options when it comes to qualitative data analysis, but they’re a great starting point if you’re dipping your toes into qualitative research for the first time.

If you’re still feeling a bit confused, consider our private coaching service , where we hold your hand through the research process to help you develop your best work.


Psst… there’s more (for free)

This post is part of our dissertation mini-course, which covers everything you need to get started with your dissertation, thesis or research project. 


Qualitative Data Analysis: What is it, Methods + Examples

Explore qualitative data analysis with diverse methods and real-world examples. Uncover the nuances of human experiences with this guide.

In a world rich with information and narrative, understanding the deeper layers of human experiences requires a unique vision that goes beyond numbers and figures. This is where the power of qualitative data analysis comes to light.

In this blog, we’ll learn about qualitative data analysis, explore its methods, and provide real-life examples showcasing its power in uncovering insights.

What is Qualitative Data Analysis?

Qualitative data analysis is a systematic process of examining non-numerical data to extract meaning, patterns, and insights.

In contrast to quantitative analysis, which focuses on numbers and statistical metrics, qualitative analysis focuses on the qualitative aspects of data, such as text, images, audio, and videos. It seeks to understand every aspect of human experiences, perceptions, and behaviors by examining the data’s richness.

Companies frequently conduct this analysis on customer feedback. You can collect qualitative data from reviews, complaints, chat messages, interactions with support centers, customer interviews, case notes, or even social media comments. This kind of data holds the key to understanding customer sentiments and preferences in a way that goes beyond mere numbers.

Importance of Qualitative Data Analysis

Qualitative data analysis plays a crucial role in your research and decision-making process across various disciplines. Let’s explore some key reasons that underline the significance of this analysis:

In-Depth Understanding

It enables you to explore complex and nuanced aspects of a phenomenon, delving into the ‘how’ and ‘why’ questions. This method provides you with a deeper understanding of human behavior, experiences, and contexts that quantitative approaches might not capture fully.

Contextual Insight

You can use this analysis to give context to numerical data. It will help you understand the circumstances and conditions that influence participants’ thoughts, feelings, and actions. This contextual insight becomes essential for generating comprehensive explanations.

Theory Development

You can generate or refine hypotheses via qualitative data analysis. As you analyze the data attentively, you can form hypotheses, concepts, and frameworks that will drive your future research and contribute to theoretical advances.

Participant Perspectives

When performing qualitative research, you can highlight participant voices and opinions. This approach is especially useful for understanding marginalized or underrepresented people, as it allows them to communicate their experiences and points of view.

Exploratory Research

The analysis is frequently used at the exploratory stage of your project. It assists you in identifying important variables, developing research questions, and designing quantitative studies that will follow.

Types of Qualitative Data

When conducting qualitative research, you can draw on several qualitative data collection methods, and you will come across many types of qualitative data that can provide unique insights into your study topic. Each of these data types adds new perspectives and angles to your understanding and analysis.

Interviews and Focus Groups

Interviews and focus groups will be among your key methods for gathering qualitative data. Interviews are one-on-one talks in which participants can freely share their thoughts, experiences, and opinions.

Focus groups, on the other hand, are discussions in which members interact with one another, resulting in dynamic exchanges of ideas. Both methods provide rich qualitative data and direct access to participant perspectives.

Observations and Field Notes

Observations and field notes are another useful type of qualitative data. You can immerse yourself in the research environment through direct observation, carefully documenting behaviors, interactions, and contextual factors.

These observations are recorded in your field notes, providing a complete picture of the environment and the behaviors you’re researching. This data type is especially important for understanding behaviors in their natural setting.

Textual and Visual Data

Textual and visual data include a wide range of resources that can be qualitatively analyzed. Documents, written narratives, and transcripts from various sources, such as interviews or speeches, are examples of textual data.

Photographs, films, and even artwork add a visual layer to your research. These forms of data allow you to investigate not only what is said, but also the underlying emotions, details, and symbols conveyed through language or imagery.

When to Choose Qualitative Data Analysis over Quantitative Data Analysis

As you begin your research journey, understanding why the analysis of qualitative data is important will guide your approach to understanding complex events. Analyzing qualitative data provides insights that complement quantitative methodologies, giving you a broader understanding of your study topic.

It is critical to know when to use qualitative analysis over quantitative procedures. Qualitative data analysis is preferable when:

  • Complexity Reigns: When your research questions involve deep human experiences, motivations, or emotions, qualitative research excels at revealing these complexities.
  • Exploration is Key: Qualitative analysis is ideal for exploratory research. It will assist you in understanding a new or poorly understood topic before formulating quantitative hypotheses.
  • Context Matters: If you want to understand how context affects behaviors or results, qualitative data analysis provides the depth needed to grasp these relationships.
  • Unanticipated Findings: When your study provides surprising new viewpoints or ideas, qualitative analysis helps you to delve deeply into these emerging themes.
  • Subjective Interpretation is Vital: When it comes to understanding people’s subjective experiences and interpretations, qualitative data analysis is the way to go.

You can make informed decisions regarding the right approach for your research objectives if you understand the importance of qualitative analysis and recognize the situations where it shines.

Qualitative Data Analysis Methods and Examples

Exploring various qualitative data analysis methods will provide you with a wide collection for making sense of your research findings. Once the data has been collected, you can choose from several analysis methods based on your research objectives and the data type you’ve collected.

There are five main methods for analyzing qualitative data. Each method takes a distinct approach to identifying patterns, themes, and insights within your qualitative data. They are:

Method 1: Content Analysis

Content analysis is a methodical technique for analyzing textual or visual data in a structured manner. In this method, you categorize qualitative data by splitting it into manageable units and assigning codes to those units through a manual coding process.

As you go, you’ll notice recurring codes and patterns that allow you to draw conclusions about the content. This method is very beneficial for detecting common ideas, concepts, or themes in your data without losing the context.

Steps to Do Content Analysis

Follow these steps when conducting content analysis:

  • Collect and Immerse: Begin by collecting the necessary textual or visual data. Immerse yourself in this data to fully understand its content, context, and complexities.
  • Assign Codes and Categories: Assign codes to relevant data sections that systematically represent major ideas or themes. Arrange comparable codes into groups that cover the major themes.
  • Analyze and Interpret: Develop a structured framework from the categories and codes. Then, evaluate the data in the context of your research question, investigate relationships between categories, discover patterns, and draw meaning from these connections.

Benefits & Challenges

There are various advantages to using content analysis:

  • Structured Approach: It offers a systematic approach to dealing with large data sets and ensures consistency throughout the research.
  • Objective Insights: This method promotes objectivity, which helps to reduce potential biases in your study.
  • Pattern Discovery: Content analysis can help uncover hidden trends, themes, and patterns that are not always obvious.
  • Versatility: You can apply content analysis to various data formats, including text, internet content, images, etc.

However, keep in mind the challenges that arise:

  • Subjectivity: Even with the best attempts, a certain bias may remain in coding and interpretation.
  • Complexity: Analyzing huge data sets requires time and great attention to detail.
  • Contextual Nuances: Content analysis may not capture all of the contextual richness that qualitative data analysis highlights.

Example of Content Analysis

Suppose you’re conducting market research and looking at customer feedback on a product. As you collect relevant data and analyze feedback, you’ll see repeating codes like “price,” “quality,” “customer service,” and “features.” These codes are organized into categories such as “positive reviews,” “negative reviews,” and “suggestions for improvement.”

According to your findings, themes such as “price” and “customer service” stand out and show that pricing and customer service greatly impact customer satisfaction. This example highlights the power of content analysis for obtaining significant insights from large textual data collections.
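
The coding-and-counting logic of this example can be sketched in a few lines of Python. This is a minimal sketch, not a standard implementation: the codebook keywords and feedback snippets below are illustrative assumptions, and real content analysis would use a richer, human-validated codebook.

```python
from collections import Counter

# Illustrative codebook: each code is triggered by a set of keywords.
CODEBOOK = {
    "price": {"price", "expensive", "cheap", "cost"},
    "quality": {"quality", "durable", "broke"},
    "customer service": {"support", "service", "helpful", "rude"},
}

def assign_codes(text):
    """Return the set of codes whose keywords appear in a feedback unit."""
    words = set(text.lower().split())
    return {code for code, keywords in CODEBOOK.items() if words & keywords}

feedback = [
    "The price is too expensive for this quality",
    "Support was helpful and quick",
    "Great quality but the cost is high",
]

# Count how often each code occurs across all feedback units.
counts = Counter(code for unit in feedback for code in assign_codes(unit))
print(counts.most_common())  # frequent codes point to dominant themes
```

The frequency table is only the starting point: the analyst still interprets the counts against the research question and the surrounding context of each response.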

Method 2: Thematic Analysis

Thematic analysis is a well-structured procedure for identifying and analyzing recurring themes in your data. As you become more engaged in the data, you’ll generate codes or short labels representing key concepts. These codes are then organized into themes, providing a consistent framework for organizing and comprehending the substance of the data.

The analysis allows you to organize complex narratives and perspectives into meaningful categories, helping you identify connections and patterns that may not be visible at first.

Steps to Do Thematic Analysis

Follow these steps when conducting a thematic analysis:

  • Code and Group: Begin by immersing yourself in the data and assigning initial codes to notable segments. Group comparable codes together to construct initial themes.
  • Analyze and Report: Analyze the data within each theme to derive relevant insights. Organize the topics into a consistent structure and explain your findings, along with data extracts that represent each theme.

Thematic analysis has various benefits:

  • Structured Exploration: It provides a structured method for identifying patterns and themes in complex qualitative data.
  • Comprehensive Understanding: Thematic analysis promotes an in-depth understanding of the complexities and meanings of the data.
  • Application Flexibility: This method can be customized to various research situations and data types.

However, challenges may arise, such as:

  • Interpretive Nature: Thematic analysis relies heavily on interpretation, so it is critical to manage researcher bias.
  • Time-Consuming: The analysis can be time-consuming, especially with large data sets.
  • Subjectivity: The selection of codes and themes can be subjective.

Example of Thematic Analysis

Assume you’re conducting a thematic analysis on job satisfaction interviews. Following your immersion in the data, you assign initial codes such as “work-life balance,” “career growth,” and “colleague relationships.” As you organize these codes, you’ll notice themes develop, such as “Factors Influencing Job Satisfaction” and “Impact on Work Engagement.”

Further investigation reveals the tales and experiences included within these themes and provides insights into how various elements influence job satisfaction. This example demonstrates how thematic analysis can reveal meaningful patterns and insights in qualitative data.
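
The organizing step in this example can be sketched as a simple data structure: codes grouped under candidate themes, with each coded extract traced back to its theme. This is a minimal sketch; the theme names come from the example above, while the extra codes ("motivation", "burnout") and the interview extracts are illustrative assumptions.

```python
# Illustrative theme map: initial codes grouped into candidate themes.
themes = {
    "Factors Influencing Job Satisfaction": [
        "work-life balance", "career growth", "colleague relationships",
    ],
    "Impact on Work Engagement": [
        "motivation", "burnout",
    ],
}

# Coded interview segments: (code, supporting extract).
coded_segments = [
    ("career growth", "I stayed because I kept learning new skills."),
    ("work-life balance", "Flexible hours changed everything for me."),
    ("burnout", "By Friday I had nothing left to give."),
]

# Report each theme alongside the extracts that support it.
for theme, codes in themes.items():
    extracts = [text for code, text in coded_segments if code in codes]
    print(f"{theme}: {len(extracts)} supporting extract(s)")
```

Keeping the code-to-theme mapping explicit like this makes it easy to report each theme with representative data extracts, as the "Analyze and Report" step requires.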

Method 3: Narrative Analysis

Narrative analysis focuses on the stories that people share. You’ll investigate the narratives in your data, looking at how stories are constructed and the meanings they express. This method is excellent for learning how people make sense of their experiences through storytelling.

Steps to Do Narrative Analysis

The following steps are involved in narrative analysis:

  • Gather and Analyze: Start by collecting narratives, such as first-person tales, interviews, or written accounts. Analyze the stories, focusing on the plot, feelings, and characters.
  • Find Themes: Look for recurring themes or patterns across the various narratives. Think about the similarities and differences between these themes and individual experiences.
  • Interpret and Extract Insights: Contextualize the narratives within their larger context. Accept the subjective nature of each narrative and analyze the narrator’s voice and style. Extract insights from the tales by diving into the emotions, motivations, and implications communicated by the stories.

There are various advantages to narrative analysis:

  • Deep Exploration: It lets you look deeply into people’s personal experiences and perspectives.
  • Human-Centered: This method prioritizes the human perspective, allowing individuals to express themselves.

However, difficulties may arise, such as:

  • Interpretive Complexity: Analyzing narratives requires dealing with the complexities of meaning and interpretation.
  • Time-consuming: Because of the richness and complexities of tales, working with them can be time-consuming.

Example of Narrative Analysis

Assume you’re conducting narrative analysis on refugee interviews. As you read the stories, you’ll notice common themes of resilience, loss, and hope. The narratives provide insight into the obstacles that refugees face, their strengths, and the dreams that guide them.

The analysis can provide a deeper insight into the refugees’ experiences and the broader social context they navigate by examining the narratives’ emotional subtleties and underlying meanings. This example highlights how narrative analysis can reveal important insights into human stories.

Method 4: Grounded Theory Analysis

Grounded theory analysis is an iterative and systematic approach that allows you to create theories directly from data without being limited by pre-existing hypotheses. With an open mind, you collect data and generate early codes and labels that capture essential ideas or concepts within the data.

As you progress, you refine these codes and increasingly connect them, eventually developing a theory based on the data. Grounded theory analysis is a dynamic process for developing new insights and hypotheses based on details in your data.

Steps to Do Grounded Theory Analysis

Grounded theory analysis requires the following steps:

  • Initial Coding: First, immerse yourself in the data, producing initial codes that represent major concepts or patterns.
  • Categorize and Connect: Using axial coding, organize the initial codes into categories, establishing relationships and connections between them.
  • Build the Theory: Focus on creating a core category that connects the codes and themes. Regularly refine the theory by comparing and integrating new data, ensuring that it evolves organically from the data.

Grounded theory analysis has various benefits:

  • Theory Generation: It provides a unique opportunity to generate theories directly from data and promotes new insights.
  • In-depth Understanding: The analysis allows you to deeply analyze the data and reveal complex relationships and patterns.
  • Flexible Process: This method is customizable and ongoing, which allows you to enhance your research as you collect additional data.

However, challenges might arise with:

  • Time and Resources: Because grounded theory analysis is a continuous process, it requires a large commitment of time and resources.
  • Theoretical Development: Creating a grounded theory requires a thorough understanding of qualitative analysis techniques and theoretical concepts.
  • Interpretation of Complexity: Interpreting and incorporating a newly developed theory into existing literature can be intellectually hard.

Example of Grounded Theory Analysis

Assume you’re performing a grounded theory analysis on workplace collaboration interviews. As you open code the data, you will discover notions such as “communication barriers,” “team dynamics,” and “leadership roles.” Axial coding demonstrates links between these notions, emphasizing the significance of efficient communication in developing collaboration.

Through selective coding, you create the core category "Integrated Communication Strategies," which unifies these themes.

This theory-driven category serves as the framework for understanding how numerous aspects contribute to effective team collaboration. This example shows how grounded theory analysis allows you to generate a theory directly from the inherent nature of the data.

Method 5: Discourse Analysis

Discourse analysis focuses on language and communication. You’ll look at how language produces meaning and how it reflects power relations, identities, and cultural influences. This strategy examines not just what is said but how it is said: the words, phrasing, and larger context of communication.

The analysis is particularly valuable when investigating power dynamics, identities, and cultural influences encoded in language. By evaluating the language used in your data, you can identify underlying assumptions, cultural norms, and the ways individuals negotiate meaning through communication.

Steps to Do Discourse Analysis

Conducting discourse analysis entails the following steps:

  • Select Discourse: For analysis, choose language-based data such as texts, speeches, or media content.
  • Analyze Language: Immerse yourself in the conversation, examining language choices, metaphors, and underlying assumptions.
  • Discover Patterns: Recognize the discourse’s recurring themes, ideologies, and power dynamics. To fully understand the effects of these patterns, place them in their larger context.
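
One small, quantifiable slice of the "Discover Patterns" step, tracking how often a particular framing vocabulary appears across texts, can be sketched in Python. The lexicon of conflict-framing terms and the article snippets below are illustrative assumptions; full discourse analysis goes well beyond counting words into interpretation of context and power.

```python
import re
from collections import Counter

# Illustrative lexicon of conflict-framing terms one might track when
# examining how media coverage frames a political event.
CONFLICT_TERMS = {"battle", "clash", "attack", "fight", "war"}

articles = [
    "The two parties clash again as the battle over the bill continues.",
    "Lawmakers fight to control the narrative after the attack ads.",
    "Committee publishes routine budget figures for the quarter.",
]

framing = Counter()
for article in articles:
    words = re.findall(r"[a-z]+", article.lower())
    framing.update(w for w in words if w in CONFLICT_TERMS)

# A high share of conflict vocabulary flags a candidate conflict frame,
# which the analyst then examines qualitatively in its wider context.
print(framing)
```

Counts like these only point the analyst toward passages worth close reading; the interpretive work of relating language choices to ideologies and power relations remains manual.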

There are various advantages of using discourse analysis:

  • Understanding Language: It provides an extensive understanding of how language builds meaning and influences perceptions.
  • Uncovering Power Dynamics: The analysis reveals how power dynamics appear via language.
  • Cultural Insights: This method identifies cultural norms, beliefs, and ideologies stored in communication.

However, the following challenges may arise:

  • Complexity of Interpretation: Language analysis involves navigating multiple levels of nuance and interpretation.
  • Subjectivity: Interpretation can be subjective, so controlling researcher bias is important.
  • Time-Intensive: Discourse analysis can be time-intensive because it requires careful linguistic study.

Example of Discourse Analysis

Consider doing discourse analysis on media coverage of a political event. You notice repeating linguistic patterns in news articles that depict the event as a conflict between opposing parties. Through deconstruction, you can expose how this framing supports particular ideologies and power relations.

You can illustrate how language choices influence public perceptions and contribute to building the narrative around the event by analyzing the speech within the broader political and social context. This example shows how discourse analysis can reveal hidden power dynamics and cultural influences on communication.

How to do Qualitative Data Analysis with the QuestionPro Research suite?

QuestionPro is a popular survey and research platform that offers tools for collecting and analyzing qualitative and quantitative data. Follow these general steps for conducting qualitative data analysis using the QuestionPro Research Suite:

  • Collect Qualitative Data: Set up your survey to capture qualitative responses. It might involve open-ended questions, text boxes, or comment sections where participants can provide detailed responses.
  • Export Qualitative Responses: Export the responses once you’ve collected qualitative data through your survey. QuestionPro typically allows you to export survey data in various formats, such as Excel or CSV.
  • Prepare Data for Analysis: Review the exported data and clean it if necessary. Remove irrelevant or duplicate entries to ensure your data is ready for analysis.
  • Code and Categorize Responses: Segment and label data, letting new patterns emerge naturally, then develop categories through axial coding to structure the analysis.
  • Identify Themes: Analyze the coded responses to identify recurring themes, patterns, and insights. Look for similarities and differences in participants’ responses.
  • Generate Reports and Visualizations: Utilize the reporting features of QuestionPro to create visualizations, charts, and graphs that help communicate the themes and findings from your qualitative research.
  • Interpret and Draw Conclusions: Interpret the themes and patterns you’ve identified in the qualitative data. Consider how these findings answer your research questions or provide insights into your study topic.
  • Integrate with Quantitative Data (if applicable): If you’re also conducting quantitative research using QuestionPro, consider integrating your qualitative findings with quantitative results to provide a more comprehensive understanding.
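
The middle steps above (export, clean, code, identify themes) could look roughly like this in Python. This is a sketch under stated assumptions: the `response` column name, the sample export, and the keyword codebook are all hypothetical, and QuestionPro's actual export format may differ.

```python
import csv
import io
from collections import Counter

# Illustrative CSV export; the "response" column name is an assumption.
exported = """response
The dashboard is confusing to navigate
Love the new dashboard layout
Duplicate entry
Duplicate entry
Support replied within minutes
"""

# Clean the exported data: drop empty rows and duplicate entries.
rows = [r["response"].strip() for r in csv.DictReader(io.StringIO(exported))]
seen, cleaned = set(), []
for r in rows:
    if r and r not in seen:
        seen.add(r)
        cleaned.append(r)

# Code responses with a simple keyword codebook (illustrative), then
# count the coded responses to surface recurring themes.
codebook = {"usability": {"confusing", "navigate", "layout"},
            "support": {"support", "replied"}}
counts = Counter(code for r in cleaned
                 for code, kw in codebook.items()
                 if set(r.lower().split()) & kw)
print(len(cleaned), counts)
```

From here, the theme counts can feed the reporting and visualization step, or be joined with quantitative survey results for a combined analysis.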

Qualitative data analysis is vital in uncovering various human experiences, views, and stories. If you’re ready to transform your research journey and apply the power of qualitative analysis, now is the moment to do it. Book a demo with QuestionPro today and begin your journey of exploration.

Qualitative Data Analysis: Step-by-Step Guide (Manual vs. Automatic)

When we conduct qualitative research, need to explain changes in metrics, or want to understand people's opinions, we turn to qualitative data. Qualitative data is typically generated through:

  • Interview transcripts
  • Surveys with open-ended questions
  • Contact center transcripts
  • Texts and documents
  • Audio and video recordings
  • Observational notes

Compared to quantitative data, which captures structured information, qualitative data is unstructured and has more depth. It can answer our questions, help formulate hypotheses, and build understanding.

It's important to understand the differences between quantitative and qualitative data. But unfortunately, analyzing qualitative data is difficult. While tools like Excel, Tableau and Power BI crunch and visualize quantitative data with ease, there are only a limited number of mainstream tools for analyzing qualitative data. The majority of qualitative data analysis still happens manually.

That said, there are two new trends that are changing this. First, there are advances in natural language processing (NLP) which is focused on understanding human language. Second, there is an explosion of user-friendly software designed for both researchers and businesses. Both help automate the qualitative data analysis process.

In this post, we'll teach you how to conduct a successful qualitative data analysis. There are two primary qualitative data analysis methods: manual and automatic. We'll guide you through the steps of a manual analysis, and look at the role technology can play in automating the process with software solutions powered by NLP.

More businesses are switching to fully-automated analysis of qualitative customer data because it is cheaper, faster, and just as accurate. Primarily, businesses purchase subscriptions to feedback analytics platforms so that they can understand customer pain points and sentiment.

We’ll take you through 5 steps to conduct a successful qualitative data analysis. Within each step, we will highlight the key differences between the manual and automated approaches. Here's an overview of the steps:

The 5 steps to doing qualitative data analysis

  • Gathering and collecting your qualitative data
  • Organizing and connecting your qualitative data
  • Coding your qualitative data
  • Analyzing the qualitative data for insights
  • Reporting on the insights derived from your analysis
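
The five steps above can be sketched as a pipeline skeleton. Every function body here is an illustrative placeholder (the sample feedback and the single "checkout" code are assumptions) that you would replace with your own collection, coding, and reporting logic.

```python
# A skeleton of the five-step pipeline; each stage is a placeholder.

def gather():                      # Step 1: collect raw feedback
    return ["checkout is slow", "love the checkout redesign"]

def organize(raw):                 # Step 2: organize and connect sources
    return [{"id": i, "text": t} for i, t in enumerate(raw)]

def code(records):                 # Step 3: attach codes to each record
    for r in records:
        r["codes"] = ["checkout"] if "checkout" in r["text"] else []
    return records

def analyze(records):              # Step 4: aggregate codes into insights
    freq = {}
    for r in records:
        for c in r["codes"]:
            freq[c] = freq.get(c, 0) + 1
    return freq

def report(insights):              # Step 5: report the findings
    return ", ".join(f"{k}: {v}" for k, v in insights.items())

print(report(analyze(code(organize(gather())))))
```

Whether each stage is done by hand or by software, the flow is the same: raw feedback in, organized and coded records in the middle, aggregated insights and a report out.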

What is Qualitative Data Analysis?

Qualitative data analysis is a process of gathering, structuring and interpreting qualitative data to understand what it represents.

Qualitative data is non-numerical and unstructured. Qualitative data generally refers to text, such as open-ended responses to survey questions or user interviews, but also includes audio, photos and video.

Businesses often perform qualitative data analysis on customer feedback. And within this context, qualitative data generally refers to verbatim text data collected from sources such as reviews, complaints, chat messages, support centre interactions, customer interviews, case notes or social media comments.

How is qualitative data analysis different from quantitative data analysis?

Understanding the differences between quantitative & qualitative data is important. When it comes to analyzing data, Qualitative Data Analysis serves a very different role to Quantitative Data Analysis. But what sets them apart?

Qualitative Data Analysis dives into the stories hidden in non-numerical data such as interviews, open-ended survey answers, or notes from observations. It uncovers the ‘whys’ and ‘hows’ giving a deep understanding of people’s experiences and emotions.

Quantitative Data Analysis on the other hand deals with numerical data, using statistics to measure differences, identify preferred options, and pinpoint root causes of issues.  It steps back to address questions like "how many" or "what percentage" to offer broad insights we can apply to larger groups.

In short, Qualitative Data Analysis is like a microscope, helping us understand specific details. Quantitative Data Analysis is like a telescope, giving us a broader perspective. Both are important, working together to decode data for different objectives.

Qualitative Data Analysis methods

Once all the data has been captured, there are a variety of analysis techniques available and the choice is determined by your specific research objectives and the kind of data you’ve gathered.  Common qualitative data analysis methods include:

Content Analysis

This is a popular approach to qualitative data analysis. Other qualitative analysis techniques, such as thematic analysis, may fit within the broad scope of content analysis. Content analysis is used to identify the patterns that emerge from text by grouping content into words, concepts, and themes. It is also useful for quantifying the relationships between the grouped content. The Columbia School of Public Health has a detailed breakdown of content analysis.

Narrative Analysis

Narrative analysis focuses on the stories people tell and the language they use to make sense of them.  It is particularly useful in qualitative research methods where customer stories are used to get a deep understanding of customers’ perspectives on a specific issue. A narrative analysis might enable us to summarize the outcomes of a focused case study.

Discourse Analysis

Discourse analysis is used to get a thorough understanding of the political, cultural and power dynamics that exist in specific situations. The focus is on the way people express themselves in different social contexts. Discourse analysis is commonly used by brand strategists who hope to understand why a group of people feel the way they do about a brand or product.

Thematic Analysis

Thematic analysis is used to deduce the meaning behind the words people use. This is accomplished by discovering repeating themes in text. These meaningful themes reveal key insights into data and can be quantified, particularly when paired with sentiment analysis. Often, the outcome of thematic analysis is a code frame that captures themes in terms of codes, also called categories, which is why the process of thematic analysis is also referred to as “coding”. A common use case for thematic analysis in companies is the analysis of customer feedback.

Grounded Theory

Grounded theory is a useful approach when little is known about a subject. It starts by formulating a theory around a single data case, meaning the theory is “grounded” in actual data rather than being speculative. Additional cases can then be examined to see if they are relevant and can add to the original theory.

Challenges of Qualitative Data Analysis

While Qualitative Data Analysis offers rich insights, it comes with its challenges. Each QDA method has its own hurdles. Let’s take a look at the challenges researchers and analysts might face, depending on the chosen method.

  • Time and Effort (Narrative Analysis): Narrative analysis, which focuses on personal stories, demands patience. Sifting through lengthy narratives to find meaningful insights can be time-consuming and requires dedicated effort.
  • Being Objective (Grounded Theory): Grounded theory, building theories from data, faces the challenges of personal biases. Staying objective while interpreting data is crucial, ensuring conclusions are rooted in the data itself.
  • Complexity (Thematic Analysis): Thematic analysis involves identifying themes within data, a process that can be intricate. Categorizing and understanding themes can be complex, especially when each piece of data varies in context and structure. Thematic Analysis software can simplify this process.
  • Generalizing Findings (Narrative Analysis): Narrative analysis, dealing with individual stories, makes drawing broad conclusions challenging. Extending findings from a single narrative to a broader context requires careful consideration.
  • Managing Data (Thematic Analysis): Thematic analysis involves organizing and managing vast amounts of unstructured data, like interview transcripts. Managing this can be a hefty task, requiring effective data management strategies.
  • Skill Level (Grounded Theory): Grounded theory demands specific skills to build theories from the ground up. Finding or training analysts with these skills poses a challenge, requiring investment in building expertise.

Benefits of qualitative data analysis

Qualitative Data Analysis (QDA) is like a versatile toolkit, offering a tailored approach to understanding your data. The benefits it offers are as diverse as the methods. Let’s explore why choosing the right method matters.

  • Tailored Methods for Specific Needs: QDA isn't one-size-fits-all. Depending on your research objectives and the type of data at hand, different methods offer unique benefits. If you want emotive customer stories, narrative analysis paints a strong picture. When you want to explain a score, thematic analysis reveals insightful patterns.
  • Flexibility with Thematic Analysis: Thematic analysis is like a chameleon in the QDA toolkit. It adapts well to different types of data and research objectives, making it a top choice for many qualitative analyses.
  • Deeper Understanding, Better Products: QDA helps you dive into people's thoughts and feelings. This deep understanding helps you build products and services that truly match what people want, ensuring satisfied customers.
  • Finding the Unexpected: Qualitative data often reveals surprises that we miss in quantitative data. QDA surfaces new ideas and perspectives, offering insights we might otherwise miss.
  • Building Effective Strategies: Insights from QDA act as strategic guides, helping businesses craft plans that match people’s desires.
  • Creating Genuine Connections: Understanding people’s experiences lets businesses connect on a real level. This genuine connection helps build trust and loyalty, which is priceless for any business.

How to do Qualitative Data Analysis: 5 steps

Now we are going to show you how to do your own qualitative data analysis, guiding you through the process step by step. As mentioned earlier, you will learn how to do qualitative data analysis manually, and also automatically, using modern qualitative data and thematic analysis software.

To get the best value from the analysis and research process, it’s important to be very clear about the nature and scope of the question being researched. This will help you select the data collection channels that are most likely to help you answer your question.

Depending on whether you are a business looking to understand customer sentiment or an academic surveying a school, your approach to qualitative data analysis will be unique.

Once you’re clear, there’s a sequence to follow. And, though there are differences in the manual and automatic approaches, the process steps are mostly the same.

The use case for our step-by-step guide is a company looking to collect customer feedback data and analyze it in order to improve its customer experience. By analyzing the customer feedback, the company derives insights about its business and its customers. You can follow these same steps regardless of the nature of your research. Let’s get started.

Step 1: Gather your qualitative data and conduct research

The first step of qualitative research is data collection. Put simply, data collection is gathering all of your data for analysis. A common situation is that qualitative data is spread across various sources.

Classic methods of gathering qualitative data

Most companies use traditional methods for gathering qualitative data: conducting interviews with research participants, running surveys, and running focus groups. This data is typically stored in documents, CRMs, databases and knowledge bases. It’s important to examine which data is available and needs to be included in your research project, based on its scope.

Using your existing qualitative feedback

As it becomes easier for customers to engage across a range of different channels, companies are gathering increasingly large amounts of both solicited and unsolicited qualitative feedback.

Most organizations have now invested in Voice of Customer programs, support ticketing systems, chatbot and support conversations, emails and even customer Slack chats.

These new channels provide companies with new ways of getting feedback, and also allow the collection of unstructured feedback data at scale.

The great thing about this data is that it contains a wealth of valuable insights and that it's already there! When you have a new question about user behavior or your customers, you don't need to create a new research study or set up a focus group. You can find most answers in the data you already have.

Typically, this data is stored in third-party solutions or a central database, but there are ways to export it or connect to a feedback analysis solution through integrations or an API.

Utilize untapped qualitative data channels

There are many online qualitative data sources you may not have considered. For example, you can find useful qualitative data in social media channels like Twitter or Facebook. Online forums, review sites, and online communities such as Discourse or Reddit also contain valuable data about your customers or research questions.

If you are considering performing a qualitative benchmark analysis against competitors, the internet is your best friend. Gathering feedback from competitor reviews on sites like Trustpilot, G2, Capterra, Better Business Bureau or app stores is a great way to perform a competitor benchmark analysis.

Customer feedback analysis software often has integrations into social media and review sites, or you could use a solution like DataMiner to scrape the reviews.

G2.com reviews of the product Airtable. You could pull reviews from G2 for your analysis.

Step 2: Connect & organize all your qualitative data

Now you have all this qualitative data, but there's a problem: the data is unstructured. Before feedback can be analyzed and assigned any value, it needs to be organized in a single place. Why is this important? Consistency!

If all data is easily accessible in one place and analyzed in a consistent manner, you will have an easier time summarizing and making decisions based on this data.

The manual approach to organizing your data

The classic method of structuring qualitative data is to compile all the raw data you've gathered into a spreadsheet.

Typically, research and support teams would share large Excel sheets and different business units would make sense of the qualitative feedback data on their own. Each team collects and organizes the data in a way that best suits them, which means the feedback tends to be kept in separate silos.

An alternative and a more robust solution is to store feedback in a central database, like Snowflake or Amazon Redshift .

Keep in mind that when you organize your data in this way, you are often preparing it to be imported into another software. If you go the route of a database, you would need to use an API to push the feedback into a third-party software.
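As a rough sketch of what that push might look like in Python: the endpoint URL, token and payload shape below are placeholders, not a real vendor API, so check your analysis tool's API documentation for the actual format.

```python
import json
from urllib import request

# Hypothetical endpoint and token -- replace with your vendor's real values.
API_URL = "https://api.example-feedback-tool.com/v1/feedback"
API_TOKEN = "YOUR_API_TOKEN"

def build_payload(rows):
    """Serialize database rows into a JSON body (an assumed record shape)."""
    return json.dumps({"records": [
        {"text": r["comment"], "score": r["nps"], "date": r["date"]}
        for r in rows
    ]}).encode("utf-8")

def push_feedback(rows):
    """POST the feedback rows to the third-party API."""
    req = request.Request(
        API_URL,
        data=build_payload(rows),
        headers={"Authorization": f"Bearer {API_TOKEN}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:  # network call -- needs real credentials
        return resp.status
```

In practice you would run something like `push_feedback(rows)` on a schedule, or trigger it whenever new feedback lands in your database.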

Computer-assisted qualitative data analysis software (CAQDAS)

Traditionally within the manual analysis approach (but not always), qualitative data is imported into CAQDAS software for coding.

In the early 2000s, CAQDAS software was popularised by developers such as ATLAS.ti, NVivo and MAXQDA and eagerly adopted by researchers to assist with the organizing and coding of data.  

The benefits of using computer-assisted qualitative data analysis software:

  • Assists in organizing your data
  • Opens you up to exploring different interpretations of your data analysis
  • Makes it easier to share your dataset and enables group collaboration (allowing for secondary analysis)

However, you still need to code the data, uncover the themes and do the analysis yourself, so it is still a manual approach.

The user interface of CAQDAS software 'NVivo'

Organizing your qualitative data in a feedback repository

Another solution for organizing your qualitative data is to upload it into a feedback repository, where it can be unified with your other data and made easily searchable and taggable. There are a number of software solutions that act as a central repository for your qualitative research data. Here are a couple of solutions that you could investigate:

  • Dovetail: Dovetail is a research repository with a focus on video and audio transcriptions. You can tag your transcriptions within the platform for theme analysis, and you can also upload other qualitative data such as research reports, survey responses, support conversations, and customer interviews. Dovetail acts as a single, searchable repository and makes it easier to collaborate with other people on your qualitative research.
  • EnjoyHQ: EnjoyHQ is another research repository with similar functionality to Dovetail. It boasts a more sophisticated search engine, but it has a higher starting subscription cost.

Organizing your qualitative data in a feedback analytics platform

If you have a lot of qualitative customer or employee feedback, from the likes of customer surveys or employee surveys, you will benefit from a feedback analytics platform. A feedback analytics platform is a software that automates the process of both sentiment analysis and thematic analysis . Companies use the integrations offered by these platforms to directly tap into their qualitative data sources (review sites, social media, survey responses, etc.). The data collected is then organized and analyzed consistently within the platform.

If you have data prepared in a spreadsheet, it can also be imported into feedback analytics platforms.

Once all this rich data has been organized within the feedback analytics platform, it is ready to be coded and themed, within the same platform. Thematic is a feedback analytics platform that offers one of the largest libraries of integrations with qualitative data sources.

Some of the qualitative data integrations offered by Thematic

Step 3: Coding your qualitative data

Your feedback data is now organized in one place, whether that's a spreadsheet, CAQDAS, a feedback repository or a feedback analytics platform. The next step is to code your feedback data so that you can extract meaningful insights from it.

Coding is the process of labelling and organizing your data in such a way that you can then identify themes in the data, and the relationships between these themes.

To simplify the coding process, start with a small sample of your customer feedback data, come up with a set of codes (categories capturing themes), and systematically label each piece of feedback for patterns and meaning. Then take a larger sample of data, revising and refining the codes for greater accuracy and consistency as you go.

If you choose to use a feedback analytics platform, much of this process will be automated and accomplished for you.

The terms used to describe different categories of meaning ('theme', 'code', 'tag', 'category', etc.) can be confusing, as they are often used interchangeably. For clarity, this article will use the term 'code'.

To code means to identify key words or phrases and assign them to a category of meaning. “I really hate the customer service of this computer software company” would be coded as “poor customer service”.
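As an illustration, deductive coding with a predefined code frame can be sketched in a few lines of Python. The codes and keyword rules below are invented for the example; a real code frame would come from your own data and research question.

```python
# A minimal deductive-coding sketch: keyword rules map feedback to codes.
# These code names and keyword lists are illustrative, not a real code frame.
CODE_RULES = {
    "poor customer service": ["customer service", "support", "rude"],
    "pricing": ["price", "expensive", "cost"],
    "ease of use": ["easy to use", "intuitive", "simple"],
}

def assign_codes(feedback):
    """Return every code whose keywords appear in the feedback text."""
    text = feedback.lower()
    return [code for code, keywords in CODE_RULES.items()
            if any(k in text for k in keywords)]

print(assign_codes("I really hate the customer service of this company"))
# → ['poor customer service']
```

Real coding is far more nuanced than keyword matching, of course; this only shows the mechanics of mapping raw text to a category of meaning.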

How to manually code your qualitative data

  • Decide whether you will use deductive or inductive coding. Deductive coding is when you create a list of predefined codes, and then assign them to the qualitative data. Inductive coding is the opposite of this, you create codes based on the data itself. Codes arise directly from the data and you label them as you go. You need to weigh up the pros and cons of each coding method and select the most appropriate.
  • Read through the feedback data to get a broad sense of what it reveals. Now it’s time to start assigning your first set of codes to statements and sections of text.
  • Keep repeating the previous step, adding new codes and revising the code descriptions as often as necessary. Once everything has been coded, go through it all again to be sure there are no inconsistencies and that nothing has been overlooked.
  • Create a code frame to group your codes. The coding frame is the organizational structure of all your codes. And there are two commonly used types of coding frames, flat, or hierarchical. A hierarchical code frame will make it easier for you to derive insights from your analysis.
  • Based on the number of times a particular code occurs, you can now see the common themes in your feedback data. This is insightful! If ‘bad customer service’ is a common code, it’s time to take action.

We have a detailed guide dedicated to manually coding your qualitative data .

Example of a hierarchical coding frame in qualitative data analysis

Using software to speed up manual coding of qualitative data

An Excel spreadsheet is still a popular method for coding. But various software solutions can help speed up this process. Here are some examples.

  • CAQDAS / NVivo - CAQDAS software has built-in functionality that allows you to code text within their software. You may find the interface the software offers easier for managing codes than a spreadsheet.
  • Dovetail/EnjoyHQ - You can tag transcripts and other textual data within these solutions. As they are also repositories you may find it simpler to keep the coding in one platform.
  • IBM SPSS - SPSS is a statistical analysis software that may make coding easier than in a spreadsheet.
  • Ascribe - Ascribe’s ‘Coder’ is a coding management system. Its user interface will make it easier for you to manage your codes.

Automating the qualitative coding process using thematic analysis software

In solutions that speed up the manual coding process, you still have to come up with valid codes, and you often have to apply the codes to pieces of feedback by hand. But there are also solutions that automate both the discovery and the application of codes.

Advances in machine learning have now made it possible to read, code and structure qualitative data automatically. This type of automated coding is offered by thematic analysis software .

Automation makes it far simpler and faster to code the feedback and group it into themes. By incorporating natural language processing (NLP) into the software, the AI looks across sentences and phrases to identify common themes and meaningful statements. Some automated solutions detect repeating patterns and assign codes to them; others have you train the AI by providing examples. You could say that the AI learns the meaning of the feedback on its own.

Thematic automates the coding of qualitative feedback regardless of source. There’s no need to set up themes or categories in advance. Simply upload your data and wait a few minutes. You can also manually edit the codes to further refine their accuracy.  Experiments conducted indicate that Thematic’s automated coding is just as accurate as manual coding .

Paired with sentiment analysis and advanced text analytics - these automated solutions become powerful for deriving quality business or research insights.

You could also build your own , if you have the resources!

The key benefits of using an automated coding solution

Automated analysis can often be set up fast and there’s the potential to uncover things that would never have been revealed if you had given the software a prescribed list of themes to look for.

Because the model applies a consistent rule to the data, it captures phrases or statements that a human eye might have missed.

Complete and consistent analysis of customer feedback enables more meaningful findings, leading us into step 4.

Step 4: Analyze your data: Find meaningful insights

Now we are going to analyze our data to find insights. This is where we start to answer our research questions. Keep in mind that step 4 and step 5 (tell the story) overlap somewhat, because creating visualizations is part of both the analysis process and reporting.

Uncovering insights means scouring through the codes that emerge from the data and drawing meaningful correlations from them. It is also about making sure each insight is distinct and has enough data to support it.

Part of the analysis is to establish how much each code relates to different demographics and customer profiles, and identify whether there’s any relationship between these data points.

Manually create sub-codes to improve the quality of insights

If your code frame only has one level, you may find that your codes are too broad to be able to extract meaningful insights. This is where it is valuable to create sub-codes to your primary codes. This process is sometimes referred to as meta coding.

Note: If you take an inductive coding approach, you can create sub-codes as you are reading through your feedback data and coding it.

While time-consuming, this exercise will improve the quality of your analysis. Here is an example of what sub-codes could look like.

Example of sub-codes

You need to carefully read your qualitative data to create quality sub-codes. But as you can see, the depth of analysis is greatly improved. By calculating the frequency of these sub-codes, you can get insight into which customer service problems you can immediately address.
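To illustrate, once feedback has been labelled with (code, sub-code) pairs, counting sub-code frequency is straightforward. A minimal Python sketch, with made-up codes:

```python
from collections import Counter

# Hypothetical coded feedback: (primary code, sub-code) pairs
coded = [
    ("poor customer service", "slow response"),
    ("poor customer service", "slow response"),
    ("poor customer service", "unhelpful agent"),
    ("pricing", "too expensive"),
]

# Frequency of sub-codes under one primary code
sub_counts = Counter(sub for code, sub in coded if code == "poor customer service")
print(sub_counts.most_common())
# → [('slow response', 2), ('unhelpful agent', 1)]
```

Here the most frequent sub-code ('slow response') points at the specific customer service problem to address first.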

Correlate the frequency of codes to customer segments

Many businesses use customer segmentation, and you may have your own respondent segments that you can apply to your qualitative analysis. Segmentation is the practice of dividing customers or research respondents into subgroups.

Segments can be based on:

  • Demographics
  • Any other data type that you care to segment by

It is particularly useful to see the occurrence of codes within your segments. If one of your customer segments is considered unimportant to your business, but they are the cause of nearly all customer service complaints, it may be in your best interest to focus attention elsewhere. This is a useful insight!
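As a sketch, tallying code occurrence per segment only needs a nested count. The segments, codes and feedback rows below are hypothetical:

```python
from collections import defaultdict

# Hypothetical feedback rows: each has a segment and the codes assigned to it
rows = [
    {"segment": "enterprise", "codes": ["poor customer service"]},
    {"segment": "free tier",  "codes": ["poor customer service", "pricing"]},
    {"segment": "free tier",  "codes": ["poor customer service"]},
    {"segment": "enterprise", "codes": ["ease of use"]},
]

# counts[segment][code] = number of feedback items in that segment with that code
counts = defaultdict(lambda: defaultdict(int))
for row in rows:
    for code in row["codes"]:
        counts[row["segment"]][code] += 1

print(dict(counts["free tier"]))
# → {'poor customer service': 2, 'pricing': 1}
```

Comparing these per-segment counts is what reveals, for example, that a supposedly unimportant segment generates most of your customer service complaints.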

Manually visualizing coded qualitative data

There are formulas you can use to visualize key insights in your data. The formulas we will suggest are imperative if you are measuring a score alongside your feedback.

If you are collecting a metric alongside your qualitative data, impact is a key visualization. Impact answers the question: "What's the impact of a code on my overall score?". Using Net Promoter Score (NPS) as an example, first you need to:

  • Calculate the overall NPS (A)
  • Calculate the NPS of the subset of responses that do not contain the code (B)
  • Subtract B from A

Then you can use this simple formula to calculate code impact on NPS .
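The steps above can be sketched in Python. NPS is the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6); the response data and code names here are invented for the example.

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

def code_impact(responses, code):
    """Impact of a code on NPS: overall NPS (A) minus the NPS of responses
    that do NOT mention the code (B). Assumes at least one response lacks the code."""
    overall = nps([r["score"] for r in responses])                              # A
    without = nps([r["score"] for r in responses if code not in r["codes"]])    # B
    return overall - without                                                    # A - B

# Hypothetical coded NPS responses
responses = [
    {"score": 9,  "codes": ["easy to use"]},
    {"score": 3,  "codes": ["poor customer service"]},
    {"score": 10, "codes": ["easy to use"]},
    {"score": 6,  "codes": ["poor customer service", "pricing"]},
]
print(code_impact(responses, "poor customer service"))
# → -100.0 (responses mentioning the code drag the overall NPS down)
```

A negative impact means the code is pulling your score down; computing this for every code gives you the bar chart described below.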

Visualizing qualitative data: Calculating the impact of a code on your score

You can then visualize this data using a bar chart.

You can download our CX toolkit - it includes a template to recreate this.

Trends over time

This analysis can help you answer questions like: “Which codes are linked to decreases or increases in my score over time?”

We need to compare two sequences of numbers: NPS over time and code frequency over time. Using Excel, calculate the correlation between the two sequences, which can be either positive (the more frequent the code, the higher the NPS; see the picture below) or negative (the more frequent the code, the lower the NPS).

Now you need to plot code frequency against the absolute value of code correlation with NPS. Here is the formula:
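The correlation step can also be sketched in Python using the standard Pearson formula; the monthly numbers below are made up for illustration.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical monthly series: NPS and how often a code appeared that month
nps_by_month  = [32, 30, 25, 22, 18]
code_by_month = [ 4,  6, 11, 13, 17]   # e.g. "slow support" mentions

r = pearson(nps_by_month, code_by_month)
print(r)  # strongly negative: more mentions of the code, lower NPS
```

You would then plot each code's frequency against the absolute value of its correlation with NPS, as the formula above describes.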

Analyzing qualitative data: Calculate which codes are linked to increases or decreases in my score

The visualization could look like this:

Visualizing qualitative data trends over time

These are two examples, but there are more. For a third manual formula, and to learn why word clouds are not an insightful form of analysis, read our visualizations article .

Using a text analytics solution to automate analysis

Automated text analytics solutions enable codes and sub-codes to be pulled out of the data automatically. This makes it far faster and easier to identify what’s driving negative or positive results. And to pick up emerging trends and find all manner of rich insights in the data.

Another benefit of AI-driven text analytics software is its built-in capability for sentiment analysis, which provides the emotive context behind your feedback and other qualitative textual data.

Thematic provides text analytics that goes further by allowing users to apply their expertise in business context to edit or augment the AI-generated outputs.

Since the move away from manual research is generally about reducing the human element, adding human input to the technology might sound counter-intuitive. However, this is mostly to make sure important business nuances in the feedback aren’t missed during coding. The result is a higher accuracy of analysis. This is sometimes referred to as augmented intelligence .

Codes displayed by volume within Thematic. You can 'manage themes' to introduce human input.

Step 5: Report on your data: Tell the story

The last step of analyzing your qualitative data is to report on it, to tell the story. At this point, the codes are fully developed and the focus is on communicating the narrative to the audience.

A coherent outline of the qualitative research, the findings and the insights is vital for stakeholders to discuss and debate before they can devise a meaningful course of action.

Creating graphs and reporting in PowerPoint

Typically, qualitative researchers take the tried and tested approach of distilling their report into a series of charts, tables and other visuals, woven into a narrative and presented in PowerPoint.

Using visualization software for reporting

With data transformation and APIs, the analyzed data can be shared with data visualization software such as Power BI, Tableau, Google Data Studio or Looker. Power BI and Tableau are among the most popular options.

Visualizing your insights inside a feedback analytics platform

Feedback analytics platforms, like Thematic, incorporate visualization tools that intuitively turn key data and insights into graphs. This removes the time-consuming work of constructing charts to visually identify patterns, and creates more time to focus on building a compelling narrative that highlights the insights, in bite-size chunks, for executive teams to review.

Using a feedback analytics platform with visualization tools means you don't have to use a separate product for visualizations. You can export graphs straight from the platform into PowerPoint.

Two examples of qualitative data visualizations within Thematic

Conclusion - Manual or Automated?

There are those who remain deeply invested in the manual approach - because it’s familiar, because they’re reluctant to spend money and time learning new software, or because they’ve been burned by the overpromises of AI.  

For projects that involve small datasets, manual analysis makes sense. For example, if the objective is simply to answer a question like "Do customers prefer X to Y?", and the findings are being extracted from a small set of focus groups and interviews, sometimes it's easier to just read them.

However, as new generations come into the workplace, technology-driven solutions feel more comfortable and practical. And the merits are undeniable, especially if the objective is to go deeper and understand the "why" behind customers' preference for X or Y, and even more so if time and money are considerations.

The ability to collect a free flow of qualitative feedback data at the same time as the metric means AI can cost-effectively scan, crunch, score and analyze a ton of feedback from one system in one go. And time-intensive processes like focus groups, or coding, that used to take weeks, can now be completed in a matter of hours or days.

But aside from the ever-present business case to speed things up and keep costs down, there are also powerful research imperatives for automated analysis of qualitative data: namely, accuracy and consistency.

Finding insights hidden in feedback requires consistency, especially in coding. So does catching all the "unknown unknowns" that can skew research findings, and steering clear of cognitive bias.

Some say that without manual data analysis, researchers won't get an accurate "feel" for the insights. However, the larger the data set, the harder it is to sort through and organize feedback that has been pulled from different places. And the more difficult it is to stay on course, the greater the risk of drawing incorrect or incomplete conclusions.

Though the process steps for qualitative data analysis have remained pretty much unchanged since psychologist Paul Felix Lazarsfeld paved the path a hundred years ago, the impact digital technology has had on the types of qualitative feedback data and on the approach to analysis is profound.

If you want to try an automated feedback analysis solution on your own qualitative data, you can get started with Thematic .


What Is Qualitative Research? | Methods & Examples

Published on June 19, 2020 by Pritha Bhandari . Revised on June 22, 2023.

Qualitative research involves collecting and analyzing non-numerical data (e.g., text, video, or audio) to understand concepts, opinions, or experiences. It can be used to gather in-depth insights into a problem or generate new ideas for research.

Qualitative research is the opposite of quantitative research , which involves collecting and analyzing numerical data for statistical analysis.

Qualitative research is commonly used in the humanities and social sciences, in subjects such as anthropology, sociology, education, health sciences, history, etc.

Qualitative research question examples:

  • How does social media shape body image in teenagers?
  • How do children and adults interpret healthy eating in the UK?
  • What factors influence employee retention in a large organization?
  • How is anxiety experienced around the world?
  • How can teachers integrate social issues into science curriculums?

Table of contents

  • Approaches to qualitative research
  • Qualitative research methods
  • Qualitative data analysis
  • Advantages of qualitative research
  • Disadvantages of qualitative research
  • Other interesting articles
  • Frequently asked questions about qualitative research

Qualitative research is used to understand how people experience the world. While there are many approaches to qualitative research, they tend to be flexible and focus on retaining rich meaning when interpreting data.

Common approaches include grounded theory, ethnography , action research , phenomenological research, and narrative research. They share some similarities, but emphasize different aims and perspectives.

Note that qualitative research is at risk for certain research biases including the Hawthorne effect , observer bias , recall bias , and social desirability bias . While not always totally avoidable, awareness of potential biases as you collect and analyze your data can prevent them from impacting your work too much.


Each of these research approaches involves using one or more data collection methods. These are some of the most common qualitative methods:

  • Observations: recording what you have seen, heard, or encountered in detailed field notes.
  • Interviews:  personally asking people questions in one-on-one conversations.
  • Focus groups: asking questions and generating discussion among a group of people.
  • Surveys : distributing questionnaires with open-ended questions.
  • Secondary research: collecting existing data in the form of texts, images, audio or video recordings, etc.
For example, to research the culture of a company, you could combine several of these methods:

  • You take field notes with observations and reflect on your own experiences of the company culture.
  • You distribute open-ended surveys to employees across all the company’s offices by email to find out if the culture varies across locations.
  • You conduct in-depth interviews with employees in your office to learn about their experiences and perspectives in greater detail.

Qualitative researchers often consider themselves “instruments” in research because all observations, interpretations and analyses are filtered through their own personal lens.

For this reason, when writing up your methodology for qualitative research, it’s important to reflect on your approach and to thoroughly explain the choices you made in collecting and analyzing the data.

Qualitative data can take the form of texts, photos, videos and audio. For example, you might be working with interview transcripts, survey responses, fieldnotes, or recordings from natural settings.

Most types of qualitative data analysis share the same five steps:

  • Prepare and organize your data. This may mean transcribing interviews or typing up fieldnotes.
  • Review and explore your data. Examine the data for patterns or repeated ideas that emerge.
  • Develop a data coding system. Based on your initial ideas, establish a set of codes that you can apply to categorize your data.
  • Assign codes to the data. For example, in qualitative survey analysis, this may mean going through each participant’s responses and tagging them with codes in a spreadsheet. As you go through your data, you can create new codes to add to your system if necessary.
  • Identify recurring themes. Link codes together into cohesive, overarching themes.

There are several specific approaches to analyzing qualitative data. Although these methods share similar processes, they emphasize different concepts.

Qualitative research often tries to preserve the voice and perspective of participants and can be adjusted as new research questions arise. Qualitative research is good for:

  • Flexibility

The data collection and analysis process can be adapted as new ideas or patterns emerge. They are not rigidly decided beforehand.

  • Natural settings

Data collection occurs in real-world contexts or in naturalistic ways.

  • Meaningful insights

Detailed descriptions of people’s experiences, feelings and perceptions can be used in designing, testing or improving systems or products.

  • Generation of new ideas

Open-ended responses mean that researchers can uncover novel problems or opportunities that they wouldn’t have thought of otherwise.

Researchers must consider practical and theoretical limitations in analyzing and interpreting their data. Qualitative research suffers from:

  • Unreliability

The real-world setting often makes qualitative research unreliable because of uncontrolled factors that affect the data.

  • Subjectivity

Due to the researcher’s primary role in analyzing and interpreting data, qualitative research cannot be replicated . The researcher decides what is important and what is irrelevant in data analysis, so interpretations of the same data can vary greatly.

  • Limited generalizability

Small samples are often used to gather detailed data about specific contexts. Despite rigorous analysis procedures, it is difficult to draw generalizable conclusions because the data may be biased and unrepresentative of the wider population .

  • Labor-intensive

Although software can be used to manage and record large amounts of text, data analysis often has to be checked or performed manually.

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Chi square goodness of fit test
  • Degrees of freedom
  • Null hypothesis
  • Discourse analysis
  • Control groups
  • Mixed methods research
  • Non-probability sampling
  • Quantitative research
  • Inclusion and exclusion criteria

Research bias

  • Rosenthal effect
  • Implicit bias
  • Cognitive bias
  • Selection bias
  • Negativity bias
  • Status quo bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .


No internet connection.

All search filters on the page have been cleared., your search has been saved..

  • All content
  • Dictionaries
  • Encyclopedias
  • Expert Insights
  • Foundations
  • How-to Guides
  • Journal Articles
  • Little Blue Books
  • Little Green Books
  • Project Planner
  • Tools Directory
  • Sign in to my profile My Profile

Not Logged In

  • Sign in Signed in
  • My profile My Profile

Not Logged In

The SAGE Handbook of Qualitative Data Analysis

  • Edited by: Uwe Flick
  • Publisher: SAGE Publications Ltd
  • Publication year: 2014
  • Online pub date: November 21, 2013
  • Discipline: Anthropology
  • Methods: Case study research, Coding, Narrative research
  • DOI: https://doi.org/10.4135/9781446282243
  • Keywords: coding, content analysis, grounded theory, interviews, qualitative data analysis, qualitative research, social research
  • Print ISBN: 9781446208984
  • Online ISBN: 9781446282243

Subject index

The wide range of approaches to data analysis in qualitative research can seem daunting even for experienced researchers. This handbook is the first to provide a state-of-the-art overview of the whole field of QDA: from general analytic strategies used in qualitative research, to approaches specific to particular types of qualitative data, including talk, text, sounds, images and virtual data.

The handbook includes chapters on traditional analytic strategies such as grounded theory, content analysis, hermeneutics, phenomenology and narrative analysis, as well as coverage of newer trends like mixed methods, reanalysis and meta-analysis. Practical aspects such as sampling, transcription, working collaboratively, writing and implementation are given close attention, as are theory and theorization, reflexivity, and ethics.

Written by a team of experts in qualitative research from around the world, this handbook is an essential compendium for all qualitative researchers and students across the social sciences.

Front Matter

  • International Advisory Editorial Board
  • Endorsements
  • List of Tables and Figures
  • About the Editor
  • Notes on Contributors
  • Acknowledgements
  • Chapter 1 | Mapping the Field Mapping the Field
  • Chapter 2 | Notes Toward a Theory of Qualitative Data Analysis
  • Chapter 3 | Analytic Inspiration in Ethnographic Fieldwork
  • Chapter 4 | Sampling Strategies in Qualitative Research
  • Chapter 5 | Transcription as a Crucial Step of Data Analysis
  • Chapter 6 | Collaborative Analysis of Qualitative Data
  • Chapter 7 | Qualitative Comparative Practices: Dimensions, Cases and Strategies
  • Chapter 8 | Reflexivity and the Practice of Qualitative Research
  • Chapter 9 | Induction, Deduction, Abduction
  • Chapter 10 | Interpretation and Analysis
  • Chapter 11 | Grounded Theory and Theoretical Coding
  • Chapter 12 | Qualitative Content Analysis
  • Chapter 13 | Phenomenology as a Research Method
  • Chapter 14 | Narrative Analysis: The Constructionist Approach
  • Chapter 15 | Documentary Method
  • Chapter 16 | Hermeneutics and Objective Hermeneutics
  • Chapter 17 | Cultural Studies
  • Chapter 18 | Netnographic Analysis: Understanding Culture Through Social Media Data
  • Chapter 19 | Using Software in Qualitative Analysis
  • Chapter 20 | Analysing Interviews
  • Chapter 21 | Analysing Focus Groups
  • Chapter 22 | Conversations and Conversation Analysis
  • Chapter 23 | Discourses and Discourse Analysis
  • Chapter 24 | Analysing Observations
  • Chapter 25 | Analysing Documents
  • Chapter 26 | Analysing News Media
  • Chapter 27 | Analysing Images
  • Chapter 28 | Analysis of Film
  • Chapter 29 | Analysing Sounds
  • Chapter 30 | Video Analysis and Videography
  • Chapter 31 | Analysing Virtual Data
  • Chapter 32 | Reanalysis of Qualitative Data
  • Chapter 33 | Qualitative Meta-Analysis
  • Chapter 34 | Quality of Data Analysis
  • Chapter 35 | Ethical Use of Qualitative Data and Findings
  • Chapter 36 | Analytic Integration in Qualitatively Driven (QUAL) Mixed and Multiple Methods Designs
  • Chapter 37 | Generalization in and from Qualitative Analysis
  • Chapter 38 | Theorization from Data
  • Chapter 39 | Writing and/as Analysis or Performing the World
  • Chapter 40 | Implementation: Putting Analyses into Practice

Back Matter

  • Author Index



The Oxford Handbook of Qualitative Research (2nd edn)


29 Qualitative Data Analysis Strategies

Johnny Saldaña, School of Theatre and Film, Arizona State University

  • Published: 02 September 2020

This chapter provides an overview of selected qualitative data analysis strategies with a particular focus on codes and coding. Preparatory strategies for a qualitative research study and data management are first outlined. Six coding methods are then profiled using comparable interview data: process coding, in vivo coding, descriptive coding, values coding, dramaturgical coding, and versus coding. Strategies for constructing themes and assertions from the data follow. Analytic memo writing is woven throughout as a method for generating additional analytic insight. Next, display and arts-based strategies are provided, followed by recommended qualitative data analytic software programs and a discussion on verifying the researcher’s analytic findings.

Qualitative Data Analysis Strategies

Anthropologist Clifford Geertz (1983) charmingly mused, “Life is just a bowl of strategies” (p. 25). Strategy, as I use it here, refers to a carefully considered plan or method to achieve a particular goal. The goal in this case is to develop a write-up of your analytic work with the qualitative data you have been given and collected as part of a study. The plans and methods you might employ to achieve that goal are what this chapter profiles.

Some may perceive strategy as an inappropriate, if not manipulative, word, suggesting formulaic or regimented approaches to inquiry. I assure you that is not my intent. My use of strategy is dramaturgical in nature: Strategies are actions that characters in plays take to overcome obstacles to achieve their objectives. Actors portraying these characters rely on action verbs to generate belief within themselves and to motivate them as they interpret their lines and move appropriately on stage.

What I offer is a qualitative researcher’s array of actions from which to draw to overcome the obstacles to thinking to achieve an analysis of your data. But unlike the prescripted text of a play in which the obstacles, strategies, and outcomes have been predetermined by the playwright, your work must be improvisational—acting, reacting, and interacting with data on a moment-by-moment basis to determine what obstacles stand in your way and thus what strategies you should take to reach your goals.

Another intriguing quote to keep in mind comes from research methodologist Robert E. Stake (1995), who posited, “Good research is not about good methods as much as it is about good thinking” (p. 19). In other words, strategies can take you only so far. You can have a box full of tools, but if you do not know how to use them well or use them creatively, the collection seems rather purposeless. One of the best ways we learn is by doing. So, pick up one or more of these strategies (in the form of verbs) and take analytic action with your data. Also keep in mind that these are discussed in the order in which they may typically occur, although humans think cyclically, iteratively, and reverberatively, and each research project has its unique contexts and needs. Be prepared for your mind to jump purposefully and/or idiosyncratically from one strategy to another throughout the study.

Qualitative Data Analysis Strategy: To Foresee

To foresee in qualitative data analysis (QDA) is to reflect beforehand on what forms of data you will most likely need and collect, which thus informs what types of data analytic strategies you anticipate using. Analysis, in a way, begins even before you collect data (Saldaña & Omasta, 2018). As you design your research study in your mind and on a text editing page, one strategy is to consider what types of data you may need to help inform and answer your central and related research questions. Interview transcripts, participant observation field notes, documents, artifacts, photographs, video recordings, and so on are not only forms of data but also foundations for how you may plan to analyze them. A participant interview, for example, suggests that you will transcribe all or relevant portions of the recording and use both the transcription and the recording itself as sources for data analysis. Any analytic memos (discussed later) you make about your impressions of the interview also become data to analyze. Even the computing software you plan to employ will be relevant to data analysis because it may help or hinder your efforts.

As your research design formulates, compose one to two paragraphs that outline how your QDA may proceed. This will necessitate that you have some background knowledge of the vast array of methods available to you. Thus, surveying the literature is vital preparatory work.

Qualitative Data Analysis Strategy: To Survey

To survey in QDA is to look for and consider the applicability of the QDA literature in your field that may provide useful guidance for your forthcoming data analytic work. General sources in QDA will provide a good starting point for acquainting you with the data analysis strategies available for the variety of methodologies or genres in qualitative inquiry (e.g., ethnography, phenomenology, case study, arts-based research, mixed methods). One of the most accessible (and humorous) is Galman’s (2013) The Good, the Bad, and the Data, and one of the most richly detailed is Frederick J. Wertz et al.’s (2011) Five Ways of Doing Qualitative Analysis. The author’s core texts for this chapter come from The Coding Manual for Qualitative Researchers (Saldaña, 2016) and Qualitative Research: Analyzing Life (Saldaña & Omasta, 2018).

If your study’s methodology or approach is grounded theory, for example, then a survey of methods works by authors such as Barney G. Glaser, Anselm L. Strauss, Juliet Corbin, and, in particular, the prolific Kathy Charmaz (2014) may be expected. But there has been a recent outpouring of additional book publications in grounded theory by Birks and Mills (2015), Bryant (2017), Bryant and Charmaz (2019), and Stern and Porr (2011), plus the legacy of thousands of articles and chapters across many disciplines that have addressed grounded theory in their studies.

Fields such as education, psychology, social work, healthcare, and others also have their own QDA methods literature in the form of texts and journals, as well as international conferences and workshops for members of the profession. It is important to have had some university coursework and/or mentorship in qualitative research to suitably prepare you for the intricacies of QDA, and you must acknowledge that the emergent nature of qualitative inquiry may require you to adopt analysis strategies that differ from what you originally planned.

Qualitative Data Analysis Strategy: To Collect

To collect in QDA is to receive the data given to you by participants and those data you actively gather to inform your study. Qualitative data analysis is concurrent with data collection and management. As interviews are transcribed, field notes are fleshed out, and documents are filed, the researcher uses opportunities to carefully read the corpus and make preliminary notations directly on the data documents by highlighting, bolding, italicizing, or noting in some way any particularly interesting or salient portions. As these data are initially reviewed, the researcher also composes supplemental analytic memos that include first impressions, reminders for follow-up, preliminary connections, and other thinking matters about the phenomena at work.

Some of the most common fieldwork tools you might use to collect data are notepads, pens and pencils; file folders for hard-copy documents; a laptop, tablet, or desktop with text editing software (Microsoft Word and Excel are most useful) and Internet access; and a digital camera and voice recorder (functions available on many electronic devices such as smartphones). Some fieldworkers may even employ a digital video camera to record social action, as long as participant permissions have been secured. But everything originates from the researcher. Your senses are immersed in the cultural milieu you study, taking in and holding onto relevant details, or significant trivia, as I call them. You become a human camera, zooming out to capture the broad landscape of your field site one day and then zooming in on a particularly interesting individual or phenomenon the next. Your analysis is only as good as the data you collect.

Fieldwork can be an overwhelming experience because so many details of social life are happening in front of you. Take a holistic approach to your entrée, but as you become more familiar with the setting and participants, actively focus on things that relate to your research topic and questions. Keep yourself open to the intriguing, surprising, and disturbing (Sunstein & Chiseri-Strater, 2012, p. 115), because these facets enrich your study by making you aware of the unexpected.

Qualitative Data Analysis Strategy: To Feel

To feel in QDA is to gain deep emotional insight into the social worlds you study and what it means to be human. Virtually everything we do has an accompanying emotion, and feelings are both reactions to and stimuli for action. Others’ emotions clue you in to their motives, values, attitudes, beliefs, worldviews, identities, and other subjective perceptions and interpretations. Acknowledge that emotional detachment is not possible in field research. Attunement to the emotional experiences of your participants plus sympathetic and empathetic responses to the actions around you are necessary in qualitative endeavors. Your own emotional responses during fieldwork are also data because they document the tacit and visceral. It is important during such analytic reflection to assess why your emotional reactions were as they were. But it is equally important not to let emotions alone steer the course of your study. A proper balance must be found between feelings and facts.

Qualitative Data Analysis Strategy: To Organize

To organize in QDA is to maintain an orderly repository of data for easy access and analysis. Even in the smallest of qualitative studies, a large amount of data will be collected across time. Prepare both a hard drive and hard-copy folders for digital data and paperwork, and back up all materials for security from loss. I recommend that each data unit (e.g., one interview transcript, one document, one day’s worth of field notes) have its own file, with subfolders specifying the data forms and research study logistics (e.g., interviews, field notes, documents, institutional review board correspondence, calendar).
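The folder scheme described above can even be scaffolded with a few lines of scripting. Here is a minimal sketch in Python; the subfolder names are illustrative assumptions, not labels prescribed by the text:

```python
from pathlib import Path

# Illustrative subfolder names for the data forms and study logistics
# suggested above; the labels themselves are assumptions, not prescribed.
DATA_FORMS = [
    "interviews",
    "field_notes",
    "documents",
    "irb_correspondence",
    "calendar",
]

def build_repository(root: str) -> list[Path]:
    """Create one subfolder per data form under the study root."""
    base = Path(root)
    created = []
    for form in DATA_FORMS:
        folder = base / form
        folder.mkdir(parents=True, exist_ok=True)  # idempotent: safe to rerun
        created.append(folder)
    return created

folders = build_repository("qda_study")
print([f.name for f in folders])
```

Backing up this tree to an external drive or cloud folder covers the accompanying advice to secure all materials from loss.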

For small-scale qualitative studies, I have found it quite useful to maintain one large master file with all participant and field site data copied and combined with the literature review and accompanying researcher analytic memos. This master file is used to cut and paste related passages together, deleting what seems unnecessary as the study proceeds and eventually transforming the document into the final report itself. Cosmetic devices such as font style, font size, rich text (italicizing, bolding, underlining, etc.), and color can help you distinguish between different data forms and highlight significant passages. For example, descriptive, narrative passages of field notes are logged in regular font. “Quotations, things spoken by participants, are logged in bold font.” Observer’s comments, such as the researcher’s subjective impressions or analytic jottings, are set in italics.

Qualitative Data Analysis Strategy: To Jot

To jot in QDA is to write occasional, brief notes about your thinking or reminders for follow-up. A jot is a phrase or brief sentence that will fit on a standard-size sticky note. As data are brought and documented together, take some initial time to review their contents and jot some notes about preliminary patterns, participant quotes that seem vivid, anomalies in the data, and so forth.

As you work on a project, keep something to write with or to voice record with you at all times to capture your fleeting thoughts. You will most likely find yourself thinking about your research when you are not working exclusively on the project, and a “mental jot” may occur to you as you ruminate on logistical or analytic matters. Document the thought in some way for later retrieval and elaboration as an analytic memo.

Qualitative Data Analysis Strategy: To Prioritize

To prioritize in QDA is to determine which data are most significant in your corpus and which tasks are most necessary. During fieldwork, massive amounts of data in various forms may be collected, and your mind can be easily overwhelmed by the sheer quantity of material, its richness, and the demands of managing it. Decisions will need to be made about the most pertinent data because they help answer your research questions or emerge as salient pieces of evidence. As a sweeping generalization, approximately one half to two thirds of what you collect may become unnecessary as you proceed toward the more formal stages of QDA.

To prioritize in QDA is also to determine what matters most in your assembly of codes, categories, patterns, themes, assertions, propositions, and concepts. Return to your research purpose and questions to keep you framed for what the focus should be.

Qualitative Data Analysis Strategy: To Analyze

To analyze in QDA is to observe and discern patterns within data and to construct meanings that seem to capture their essences and essentials. Just as there are a variety of genres, elements, and styles of qualitative research, so too are there a variety of methods available for QDA. Analytic choices are most often based on what methods will harmonize with your genre selection and conceptual framework, what will generate the most sufficient answers to your research questions, and what will best represent and present the project’s findings.

Analysis can range from the factual to the conceptual to the interpretive. Analysis can also range from a straightforward descriptive account to an emergently constructed grounded theory to an evocatively composed short story. A qualitative research project’s outcomes may range from rigorously achieved, insightful answers to open-ended, evocative questions; from rich descriptive detail to a bullet-point list of themes; and from third-person, objective reportage to first-person, emotion-laden poetry. Just as there are multiple destinations in qualitative research, there are multiple pathways and journeys along the way.

Analysis is accelerated as you take cognitive ownership of your data. By reading and rereading the corpus, you gain intimate familiarity with its contents and begin to notice significant details as well as make new connections and insights about their meanings. Patterns, categories, themes, and their interrelationships become more evident the more you know the subtleties of the database.

Since qualitative research’s design, fieldwork, and data collection are most often provisional, emergent, and evolutionary processes, you reflect on and analyze the data as you gather them and proceed through the project. If preplanned methods are not working, you change them to secure the data you need. There is generally a postfieldwork period when continued reflection and more systematic data analysis occur, concurrent with or followed by additional data collection, if needed, and the more formal write-up of the study, which is in itself an analytic act. Through field note writing, interview transcribing, analytic memo writing, and other documentation processes, you gain cognitive ownership of your data; and the intuitive, tacit, synthesizing capabilities of your brain begin sensing patterns, making connections, and seeing the bigger picture. The purpose and outcome of data analysis is to reveal to others through fresh insights what we have observed and discovered about the human condition. Fortunately, there are heuristics for reorganizing and reflecting on your qualitative data to help you achieve that goal.

Qualitative Data Analysis Strategy: To Pattern

To pattern in QDA is to detect similarities within and regularities among the data you have collected. The natural world is filled with patterns because we, as humans, have constructed them as such. Stars in the night sky are not just a random assembly; our ancestors pieced them together to form constellations like the Big Dipper. A collection of flowers growing wild in a field has a pattern, as does an individual flower’s patterns of leaves and petals. Look at the physical objects humans have created and notice how pattern oriented we are in our construction, organization, and decoration. Look around you in your environment and notice how many patterns are evident on your clothing, in a room, and on most objects themselves. Even our sometimes mundane daily and long-term human actions are reproduced patterns in the form of routines, rituals, rules, roles, and relationships (Saldaña & Omasta, 2018).

This human propensity for pattern-making follows us into QDA. From the vast array of interview transcripts, field notes, documents, and other forms of data, there is this instinctive, hardwired need to bring order to the collection—not just to reorganize it but to look for and construct patterns out of it. The discernment of patterns is one of the first steps in the data analytic process, and the methods described next are recommended ways to construct them.

Qualitative Data Analysis Strategy: To Code

To code in QDA is to assign a truncated, symbolic meaning to each datum for purposes of qualitative analysis—primarily patterning and categorizing. Coding is a heuristic—a method of discovery—to the meanings of individual sections of data. These codes function as a way of patterning, classifying, and later reorganizing them into emergent categories for further analysis. Different types of codes exist for different types of research genres and qualitative data analytic approaches, but this chapter will focus on only a few selected methods. First, a code can be defined as follows:

A code in qualitative data analysis is most often a word or short phrase that symbolically assigns a summative, salient, essence-capturing, and/or evocative attribute for a portion of language-based or visual data. The data can consist of interview transcripts, participant observation field notes, journals, documents, open-ended survey responses, drawings, artifacts, photographs, video, Internet sites, e-mail correspondence, academic and fictional literature, and so on. The portion of data coded … can range in magnitude from a single word to a full paragraph, an entire page of text or a stream of moving images.… Just as a title represents and captures a book or film or poem’s primary content and essence, so does a code represent and capture a datum’s primary content and essence. (Saldaña, 2016, p. 4)

One helpful precoding task is to divide or parse long selections of field note or interview transcript data into shorter stanzas. Stanza division unitizes or “chunks” the corpus into more manageable paragraph-like units for coding assignments and analysis. The transcript sample that follows illustrates one possible way of inserting line breaks between self-standing passages of interview text for easier readability.
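Stanza division is mechanical enough to sketch in code. The fragment below simply treats blank lines as stanza breaks in a hypothetical two-stanza transcript; it illustrates the chunking idea only and is not the author's procedure:

```python
# Hypothetical transcript fragment; blank lines mark stanza breaks.
transcript = """P: Money's been tight lately, so I buy bargains whenever I can
and think twice about the big purchases.

P: I haven't changed my habits much, but I'm not putting as much
into savings as I used to."""

# Split the corpus into paragraph-like units ("stanzas") on blank lines.
stanzas = [s.strip() for s in transcript.split("\n\n") if s.strip()]

for i, stanza in enumerate(stanzas, start=1):
    print(f"Stanza {i}: {stanza[:40]}...")
```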

Process Coding

As a first coding example, the following interview excerpt about an employed, single, lower-middle-class adult male’s spending habits during a difficult economic period in the United States is coded in the right-hand margin in capital letters. The superscript numbers match the beginning of the datum unit with its corresponding code. This method is called process coding (Charmaz, 2014), and it uses gerunds (“-ing” words) exclusively to represent action suggested by the data. Processes can consist of observable human actions (e.g., BUYING BARGAINS), mental or internal processes (e.g., THINKING TWICE), and more conceptual ideas (e.g., APPRECIATING WHAT YOU’VE GOT). Notice that the interviewer’s (I) portions are not coded, just the participant’s (P). A code is applied each time the subtopic of the interview shifts—even within a stanza—and the same codes can (and should) be used more than once if the subtopics are similar. The central research question driving this qualitative study is, “In what ways are middle-class Americans influenced and affected by an economic recession?”

Different researchers analyzing this same piece of data may develop completely different codes, depending on their personal lenses, filters, and angles. The previous codes are only one person’s interpretation of what is happening in the data, not a definitive list. The process codes have transformed the raw data units into new symbolic representations for analysis. A listing of the codes applied to this interview transcript, in the order they appear, reads:

BUYING BARGAINS

QUESTIONING A PURCHASE

THINKING TWICE

STOCKING UP

REFUSING SACRIFICE

PRIORITIZING

FINDING ALTERNATIVES

LIVING CHEAPLY

NOTICING CHANGES

STAYING INFORMED

MAINTAINING HEALTH

PICKING UP THE TAB

APPRECIATING WHAT YOU’VE GOT

Coding the data is the first step in this approach to QDA, and categorization is just one of the next possible steps.
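One simple move with a code list like the one above is a frequency tally, which hints at which processes recur across the data. A sketch, pairing hypothetical transcript excerpts (the actual transcript is not reproduced here) with codes drawn from the list:

```python
from collections import Counter

# Each datum unit is paired with its assigned process code. The codes come
# from the list above; the excerpt texts are hypothetical stand-ins.
coded_data = [
    ("I only buy things on sale now", "BUYING BARGAINS"),
    ("Do I really need this?", "QUESTIONING A PURCHASE"),
    ("I think twice before spending", "THINKING TWICE"),
    ("I stock up when there's a two-for-one deal", "STOCKING UP"),
    ("Generic brands are fine by me", "BUYING BARGAINS"),
]

# Tally how often each code was applied across the data units.
frequencies = Counter(code for _, code in coded_data)
print(frequencies.most_common())
```

A caution: frequency is only a rough signal; in QDA, the salience of a passage can matter far more than how often its code appears.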

Qualitative Data Analysis Strategy: To Categorize

To categorize in QDA is to cluster similar or comparable codes into groups for pattern construction and further analysis. Humans categorize things in innumerable ways. Think of an average apartment or house’s layout. The rooms of a dwelling have been constructed or categorized by their builders and occupants according to function. A kitchen is designated as an area to store and prepare food and to store the cooking and dining materials, such as pots, pans, and utensils. A bedroom is designated for sleeping, a closet for clothing storage, a bathroom for bodily functions and hygiene, and so on. Each room is like a category in which related and relevant patterns of human action occur. There are exceptions now and then, such as eating breakfast in bed rather than in a dining area or living in a small studio apartment in which most possessions are contained within one large room (but nonetheless are most often organized and clustered into subcategories according to function and optimal use of space).

The point is that the patterns of social action we designate into categories during QDA are not perfectly bounded. Category construction is our best attempt to cluster the most seemingly alike things into the most seemingly appropriate groups. Categorizing is reorganizing and reordering the vast array of data from a study because it is from these smaller, meaning-rich units that we can better grasp the particular features of each one and the categories’ possible interrelationships with one another.

One analytic strategy with a list of codes is to classify them into similar clusters. The same codes share the same category, but it is also possible that a single code can merit its own group if you feel it is unique enough. After the codes have been classified, a category label is applied to each grouping. Sometimes a code can also double as a category name if you feel it best summarizes the totality of the cluster. Like coding, categorizing is an interpretive act, because there can be different ways of separating and collecting codes that seem to belong together. The cut-and-paste functions of text editing software are most useful for exploring which codes share something in common.

Below is my categorization of the 15 codes generated from the interview transcript presented earlier. Like the gerunds for process codes, the categories have also been labeled as “-ing” words to connote action. And there was no particular reason why 15 codes resulted in three categories—there could have been fewer or even more, but this is how the array came together after my reflections on which codes seemed to belong together. The category labels are ways of answering why they belong together. For at-a-glance differentiation, I place codes in CAPITAL LETTERS and categories in upper- and lowercase Bold Font:

Category 1: Thinking Strategically

Category 2: Spending Strategically

Category 3: Living Strategically

Notice that the three category labels share a common word: strategically. Where did this word come from? It came from analytic reflection on the original data, the codes, and the process of categorizing the codes and generating their category labels. It was the analyst’s choice based on the interpretation of what primary action was happening. Your categories generated from your coded data do not need to share a common word or phrase, but I find that this technique, when appropriate, helps build a sense of unity to the initial analytic scheme.
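A cut-and-paste categorization like this can also be held as a simple mapping. In the sketch below, the three category labels come from the text, but which code sits under which category is an illustrative assumption of mine, not the author's actual clustering:

```python
# Category labels from the text; the code assignments beneath them are
# illustrative assumptions for demonstration purposes only.
categories = {
    "Thinking Strategically": [
        "QUESTIONING A PURCHASE", "THINKING TWICE", "PRIORITIZING",
    ],
    "Spending Strategically": [
        "BUYING BARGAINS", "STOCKING UP", "FINDING ALTERNATIVES",
        "PICKING UP THE TAB",
    ],
    "Living Strategically": [
        "LIVING CHEAPLY", "NOTICING CHANGES", "STAYING INFORMED",
        "MAINTAINING HEALTH", "REFUSING SACRIFICE",
        "APPRECIATING WHAT YOU'VE GOT",
    ],
}

# Invert the mapping so any code's category can be looked up later.
code_to_category = {
    code: category
    for category, codes in categories.items()
    for code in codes
}
print(code_to_category["THINKING TWICE"])
```

Because categorizing is an interpretive act, a different analyst would likely build a different mapping from the same codes; the structure merely makes one interpretation explicit and revisable.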

The three categories—Thinking Strategically, Spending Strategically, and Living Strategically—are then reflected on for how they might interact and interplay. This is where the next major facet of data analysis, analytic memos, enters the scheme. But a necessary section on the basic principles of interrelationship and analytic reasoning must precede that discussion.

Qualitative Data Analysis Strategy: To Interrelate

To interrelate in QDA is to propose connections within, between, and among the constituent elements of analyzed data. One task of QDA is to explore the ways our patterns and categories interact and interplay. I use these terms to suggest the qualitative equivalent of statistical correlation, but interaction and interplay are much more than a simple relationship. They imply interrelationship . Interaction refers to reverberative connections—for example, how one or more categories might influence and affect the others, how categories operate concurrently, or whether there is some kind of domino effect to them. Interplay refers to the structural and processual nature of categories—for example, whether some type of sequential order, hierarchy, or taxonomy exists; whether any overlaps occur; whether there is superordinate and subordinate arrangement; and what types of organizational frameworks or networks might exist among them. The positivist construct of cause and effect becomes influences and affects in QDA.

There can even be patterns of patterns and categories of categories if your mind thinks conceptually and abstractly enough. Our minds can intricately connect multiple phenomena, but only if the data and their analyses support the constructions. We can speculate about interaction and interplay all we want, but it is only through a more systematic investigation of the data—in other words, good thinking—that we can plausibly establish any possible interrelationships.

Qualitative Data Analysis Strategy: To Reason

To reason in QDA is to think in ways that lead to summative findings, causal probabilities, and evaluative conclusions. Unlike quantitative research, with its statistical formulas and established hypothesis-testing protocols, qualitative research has no standardized methods of data analysis. Rest assured, there are recommended guidelines from the field’s scholars and a legacy of analysis strategies from which to draw. But the primary heuristics (or methods of discovery) you apply during a study are retroductive, inductive, substructive, abductive, and deductive reasoning.

Retroduction is historic reconstruction, working backward to figure out how the current conditions came to exist. Induction is what we experientially explore and infer to be transferable from the particular to the general, based on an examination of the evidence and an accumulation of knowledge. Substruction takes things apart to more carefully examine the constituent elements of the whole. Abduction is surmising from a range of possibilities that which is most likely, those explanatory hunches of plausibility based on clues. Deduction is what we generally draw and conclude from established facts and evidence.

It is not always necessary to know the names of these five ways of reasoning as you proceed through analysis. In fact, you will more than likely reverberate quickly from one to another depending on the task at hand. But what is important to remember about reasoning is:

to examine the evidence carefully and make reasonable inferences;

to base your conclusions primarily on the participants’ experiences, not just your own;

not to take the obvious for granted, because sometimes the expected will not happen;

your hunches can be right and, at other times, quite wrong; and

to logically yet imaginatively think about what is going on and how it all comes together.

Futurists and inventors propose three questions when they think about creating new visions for the world: What is possible (induction)? What is plausible (abduction)? What is preferable (deduction)? These same three questions might be posed as you proceed through QDA and particularly through analytic memo writing, which is substructive and retroductive reflection on your analytic work thus far.

Qualitative Data Analysis Strategy: To Memo

To memo in QDA is to reflect in writing on the nuances, inferences, meanings, and transfer of coded and categorized data plus your analytic processes. Like field note writing, perspectives vary among practitioners as to the methods for documenting the researcher’s analytic insights and subjective experiences. Some advise that such reflections should be included in field notes as relevant to the data. Others advise that a separate researcher’s journal should be maintained for recording these impressions. And still others advise that these thoughts be documented as separate analytic memos. I prescribe the latter as a method because it is generated by and directly connected to the data themselves.

An analytic memo is a “think piece” of reflective free writing, a narrative that sets in words your interpretations of the data. Coding and categorizing are heuristics to detect some of the possible patterns and interrelationships at work within the corpus, and an analytic memo further articulates your retroductive, inductive, substructive, abductive, and deductive thinking processes on what things may mean. Though the metaphor is a bit flawed and limiting, think of codes and their consequent categories as separate jigsaw puzzle pieces and their integration into an analytic memo as the trial assembly of the complete picture.

What follows is an example of an analytic memo based on the earlier process coded and categorized interview transcript. It is intended not as the final write-up for a publication, but as an open-ended reflection on the phenomena and processes suggested by the data and their analysis thus far. As the study proceeds, however, initial and substantive analytic memos can be revisited and revised for eventual integration into the final report. Note how the memo is dated and given a title for future and further categorization, how participant quotes are occasionally included for evidentiary support, and how the category names are bolded and the codes kept in capital letters to show how they integrate or weave into the thinking:

April 14, 2017

EMERGENT CATEGORIES: A STRATEGIC AMALGAM

There’s a popular saying: “Smart is the new rich.” This participant is Thinking Strategically about his spending through such tactics as THINKING TWICE and QUESTIONING A PURCHASE before he decides to invest in a product. There’s a heightened awareness of both immediate trends and forthcoming economic bad news that positively affects his Spending Strategically. However, he seems unaware that there are even more ways of LIVING CHEAPLY by FINDING ALTERNATIVES. He dines at all-you-can-eat restaurants as a way of STOCKING UP on meals, but doesn’t state that he could bring lunch from home to work, possibly saving even more money. One of his “bad habits” is cigarettes, which he refuses to give up; but he doesn’t seem to realize that by quitting smoking he could save even more money, not to mention possible health care costs. He balks at the idea of paying $2.00 for a soft drink, but doesn’t mind paying $6.00–$7.00 for a pack of cigarettes. Penny-wise and pound-foolish. Addictions skew priorities. Living Strategically, for this participant during “scary times,” appears to be a combination of PRIORITIZING those things which cannot be helped, such as pet care and personal dental care; REFUSING SACRIFICE for maintaining personal creature-comforts; and FINDING ALTERNATIVES to high costs and excessive spending. Living Strategically is an amalgam of thinking and action-oriented strategies.

There are several recommended topics for analytic memo writing throughout the qualitative study. Memos are opportunities to reflect on and write about:

A descriptive summary of the data;

How the researcher personally relates to the participants and/or the phenomenon;

The participants’ actions, reactions, and interactions;

The participants’ routines, rituals, rules, roles, and relationships;

What is surprising, intriguing, or disturbing (Sunstein & Chiseri-Strater, 2012, p. 115);

Code choices and their operational definitions;

Emergent patterns, categories, themes, concepts, assertions, and propositions;

The possible networks and processes (links, connections, overlaps, flows) among the codes, patterns, categories, themes, concepts, assertions, and propositions;

An emergent or related existent theory;

Any problems with the study;

Any personal or ethical dilemmas with the study;

Future directions for the study;

The analytic memos generated thus far (i.e., metamemos);

Tentative answers to the study’s research questions; and

The final report for the study. (adapted from Saldaña & Omasta, 2018, p. 54)

Since writing is analysis, analytic memos expand on the inferential meanings of the truncated codes, categories, and patterns as a transitional stage into a more coherent narrative with hopefully rich social insight.

Qualitative Data Analysis Strategy: To Code—A Different Way

The first example of coding illustrated process coding, a way of exploring general social action among humans. But sometimes a researcher works with an individual case study in which the language is unique or with someone the researcher wishes to honor by maintaining the authenticity of his or her speech in the analysis. These reasons suggest that a more participant-centered form of coding may be more appropriate.

In Vivo Coding

A second frequently applied method of coding is called in vivo coding. The root meaning of in vivo is “in that which is alive”; it refers to a code based on the actual language used by the participant (Strauss, 1987). The words or phrases in the data record you select as codes are those that seem to stand out as significant or summative of what is being said.

Using the same transcript of the male participant living in difficult economic times, in vivo codes are listed in the right-hand column. I recommend that in vivo codes be placed in quotation marks as a way of designating that the code is extracted directly from the data record. Note that instead of 15 codes generated from process coding, the total number of in vivo codes is 30. This is not to suggest that there should be specific numbers or ranges of codes used for particular methods. In vivo codes, however, tend to be applied more frequently to data. Again, the interviewer’s questions and prompts are not coded, just the participant’s responses:

The 30 in vivo codes are then extracted from the transcript and could be listed in the order they appear, but this time they are placed in alphabetical order as a heuristic to prepare them for analytic action and reflection:

“ALL-YOU-CAN-EAT”

“ANOTHER DING IN MY WALLET”

“BAD HABITS”

“CHEAP AND FILLING”

“COUPLE OF THOUSAND”

“DON’T REALLY NEED”

“HAVEN’T CHANGED MY HABITS”

“HIGH MAINTENANCE”

“INSURANCE IS JUST WORTHLESS”

“IT ALL ADDS UP”

“LIVED KIND OF CHEAP”

“NOT A BIG SPENDER”

“NOT AS BAD OFF”

“NOT PUTTING AS MUCH INTO SAVINGS”

“PICK UP THE TAB”

“SCARY TIMES”

“SKYROCKETED”

“SPENDING MORE”

“THE LITTLE THINGS”

“THINK TWICE”

“TWO-FOR-ONE”
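Alphabetizing a code list, as done above, becomes a one-line operation once the codes are held as strings. A sketch with a hypothetical subset of the in vivo codes, listed in an arbitrary order of appearance:

```python
# A subset of the in vivo codes above, in a hypothetical order of appearance;
# the surrounding quotation marks are kept as part of each code.
in_vivo_codes = [
    '"SCARY TIMES"',
    '"ANOTHER DING IN MY WALLET"',
    '"ALL-YOU-CAN-EAT"',
    '"THINK TWICE"',
    '"BAD HABITS"',
    '"IT ALL ADDS UP"',
]

# Alphabetical order is a simple heuristic for spotting variants and
# near-duplicates (e.g., codes sharing a first word) before categorizing.
alphabetized = sorted(in_vivo_codes)
print(alphabetized)
```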

Even though no systematic categorization has been conducted with the codes thus far, an analytic memo of first impressions can still be composed:

March 19, 2017

CODE CHOICES: THE EVERYDAY LANGUAGE OF ECONOMICS

After eyeballing the in vivo codes list, I noticed that variants of “CHEAP” appear most often. I recall a running joke between me and a friend of mine when we were shopping for sales. We’d say, “We’re not ‘cheap,’ we’re frugal.” There’s no formal economic or business language in this transcript—no terms such as “recession” or “downsizing”—just the everyday language of one person trying to cope during “SCARY TIMES” with “ANOTHER DING IN MY WALLET.” The participant notes that he’s always “LIVED KIND OF CHEAP” and is “NOT A BIG SPENDER” and, due to his employment, “NOT AS BAD OFF” as others in the country. Yet even with his middle-class status, he’s still feeling the monetary pinch, dining at inexpensive “ALL-YOU-CAN-EAT” restaurants and worried about the rising price of peanut butter, observing that he’s “NOT PUTTING AS MUCH INTO SAVINGS” as he used to. Of all the codes, “ANOTHER DING IN MY WALLET” stands out to me, particularly because on the audio recording he sounded bitter and frustrated. It seems that he’s so concerned about “THE LITTLE THINGS” because of high veterinary and dental charges. The only way to cope with a “COUPLE OF THOUSAND” dollars worth of medical expenses is to find ways of trimming the excess in everyday facets of living: “IT ALL ADDS UP.”

Like process coding, in vivo codes could be clustered into similar categories, but another simple data analytic strategy is also possible.

Qualitative Data Analysis Strategy: To Outline

To outline in QDA is to hierarchically, processually, and/or temporally assemble such things as codes, categories, themes, assertions, propositions, and concepts into a coherent, text-based display. Traditional outlining formats and content provide not only templates for writing a report but also templates for analytic organization. This principle can be found in several computer-assisted qualitative data analysis software (CAQDAS) programs through their use of such functions as “hierarchies,” “trees,” and “nodes,” for example. Basic outlining is simply a way of arranging primary, secondary, and subsecondary items into a patterned display. For example, an organized listing of things in a home might consist of the following:

Large appliances

    Refrigerator

    Stove-top oven

    Microwave oven

Small appliances

    Coffee maker

Dining room
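The household example above can also be represented as a simple nested structure. A minimal Python sketch (the rendering function is my own illustration, not part of the source method):

```python
# The household outline as a nested dictionary: keys are primary items,
# values are lists of secondary items.
outline = {
    "Large appliances": ["Refrigerator", "Stove-top oven", "Microwave oven"],
    "Small appliances": ["Coffee maker"],
    "Dining room": [],
}

def print_outline(outline, indent=4):
    """Render the nested structure as an indented, patterned display."""
    for primary, secondary in outline.items():
        print(primary)
        for item in secondary:
            print(" " * indent + item)

print_outline(outline)
```

CAQDAS "trees" and "nodes" are, at bottom, this same primary/secondary arrangement with more levels and richer linking.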

In QDA, outlining may include descriptive nouns or topics but, depending on the study, it may also involve processes or phenomena in extended passages, such as in vivo codes or themes.

The complexity of what we learn in the field can be overwhelming, and outlining is a way of organizing and ordering that complexity so that it does not become complicated. The cut-and-paste and tab functions of a text editing page enable you to arrange and rearrange the salient items from your preliminary coded analytic work into a more streamlined flow. By no means do I suggest that the intricate messiness of life can always be organized into neatly formatted arrangements, but outlining is an analytic act that stimulates deep reflection on both the interconnectedness and the interrelationships of what we study. As an example, here are the 30 in vivo codes generated from the initial transcript analysis, arranged in such a way as to construct five major categories:

Now that the codes have been rearranged into an outline format, an analytic memo is composed to expand on the rationale and constructed meanings in progress:

March 19, 2017 NETWORKS: EMERGENT CATEGORIES The five major categories I constructed from the in vivo codes are: “SCARY TIMES,” “PRIORITY,” “ANOTHER DING IN MY WALLET,” “THE LITTLE THINGS,” and “LIVED KIND OF CHEAP.” One of the things that hit me today was that the reason he may be pinching pennies on smaller purchases is that he cannot control the larger ones he has to deal with. Perhaps the only way we can cope with or seem to have some sense of agency over major expenses is to cut back on the smaller ones that we can control. $1,000 for a dental bill? Skip lunch for a few days a week. Insulin medication to buy for a pet? Don’t buy a soft drink from a vending machine. Using this reasoning, let me try to interrelate and weave the categories together as they relate to this particular participant: During these scary economic times, he prioritizes his spending because there seems to be just one ding after another to his wallet. A general lifestyle of living cheaply and keeping an eye out for how to save money on the little things compensates for those major expenses beyond his control.

Qualitative Data Analysis Strategy: To Code—In Even More Ways

The process and in vivo coding examples thus far have demonstrated only two specific methods of 33 documented approaches (Saldaña, 2016 ). Which one(s) you choose for your analysis depends on such factors as your conceptual framework, the genre of qualitative research for your project, the types of data you collect, and so on. The following sections present four additional approaches available for coding qualitative data that you may find useful as starting points.

Descriptive Coding

Descriptive codes are primarily nouns that simply summarize the topic of a datum. This coding approach is particularly useful when you have different types of data gathered for one study, such as interview transcripts, field notes, open-ended survey responses, documents, and visual materials such as photographs. Descriptive codes not only help categorize but also index the data corpus’s basic contents for further analytic work. An example of an interview portion coded descriptively, taken from the participant living in tough economic times, follows to illustrate how the same data can be coded in multiple ways:

For initial analysis, descriptive codes are clustered into similar categories to detect such patterns as frequency (i.e., categories with the largest number of codes) and interrelationship (i.e., categories that seem to connect in some way). Keep in mind that descriptive coding should be used sparingly with interview transcript data because other coding methods will reveal richer participant dynamics.
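The frequency pattern described above — which categories attract the largest number of codes — can be tallied mechanically once each descriptive code has been assigned to a category. A hedged sketch (the code-to-category mapping here is invented for illustration, not drawn from the case data):

```python
from collections import Counter

# Hypothetical mapping of descriptive codes to researcher-constructed categories.
code_categories = {
    "CHICKEN": "FOOD",
    "COUPONS": "BARGAINS",
    "BUFFET": "FOOD",
    "TWO-FOR-ONE": "BARGAINS",
    "DENTAL BILL": "MEDICAL EXPENSES",
}

# Frequency: categories with the largest number of codes come first.
category_counts = Counter(code_categories.values())
for category, n in category_counts.most_common():
    print(category, n)
```

The count is only a pointer, of course; detecting *interrelationship* between categories remains an interpretive act for the analytic memo.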

Values Coding

Values coding identifies the values, attitudes, and beliefs of a participant, as shared by the individual and/or interpreted by the analyst. This coding method infers the “heart and mind” of an individual or group’s worldview as to what is important, perceived as true, maintained as opinion, and felt strongly. The three constructs are coded separately but are part of a complex interconnected system.

Briefly, a value (V) is what we attribute as important, be it a person, thing, or idea. An attitude (A) is the evaluative way we think and feel about ourselves, others, things, or ideas. A belief (B) is what we think and feel as true or necessary, formed from our “personal knowledge, experiences, opinions, prejudices, morals, and other interpretive perceptions of the social world” (Saldaña, 2016 , p. 132). Values coding explores intrapersonal, interpersonal, and cultural constructs, or ethos . It is an admittedly slippery task to code this way because it is sometimes difficult to discern what is a value, attitude, or belief since they are intricately interrelated. But the depth you can potentially obtain is rich. An example of values coding follows:

For analysis, categorize the codes for each of the three different constructs together (i.e., all values in one group, attitudes in a second group, and beliefs in a third group). Analytic memo writing about the patterns and possible interrelationships may reveal a more detailed and intricate worldview of the participant.
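Grouping the three constructs can be done by hand with cut-and-paste, or sketched programmatically by tagging each code with its construct. In this illustration (the example codes are hypothetical), V = value, A = attitude, B = belief:

```python
from collections import defaultdict

# Hypothetical values codes, each tagged with its construct:
# V = value, A = attitude, B = belief.
coded_data = [
    ("V", "THRIFT"),
    ("A", "INSURANCE IS WORTHLESS"),
    ("B", "YOU CANNOT CONTROL EVERYTHING"),
    ("V", "FINANCIAL SECURITY"),
    ("A", "BITTER ABOUT EXPENSES"),
]

# All values in one group, attitudes in a second, beliefs in a third.
groups = defaultdict(list)
for construct, code in coded_data:
    groups[construct].append(code)

for construct in ("V", "A", "B"):
    print(construct, groups[construct])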

Dramaturgical Coding

Dramaturgical coding perceives life as performance and its participants as characters in a social drama. Codes are assigned to the data (i.e., a “play script”) that analyze the characters in action, reaction, and interaction. Dramaturgical coding of participants examines their objectives (OBJ) or wants, needs, and motives; the conflicts (CON) or obstacles they face as they try to achieve their objectives; the tactics (TAC) or strategies they employ to reach their objectives; their attitudes (ATT) toward others and their given circumstances; the particular emotions (EMO) they experience throughout; and their subtexts (SUB), or underlying and unspoken thoughts. The following is an example of dramaturgically coded data:

Not included in this particular interview excerpt are the emotions the participant may have experienced or talked about. His later line, “that’s another ding in my wallet,” would have been coded EMO: BITTER. A reader may not have inferred that specific emotion from seeing the line in print. But the interviewer, present during the event and listening carefully to the audio recording during transcription, noted that feeling in his tone of voice.

For analysis, group similar codes together (e.g., all objectives in one group, all conflicts in another group, all tactics in a third group) or string together chains of how participants deal with their circumstances to overcome their obstacles through tactics:

OBJ: SAVING MEAL MONEY → TAC: SKIPPING MEALS + COUPONS

Dramaturgical coding is particularly useful as preliminary work for narrative inquiry story development or arts-based research representations such as performance ethnography. The method explores how the individuals or groups manage problem solving in their daily lives.

Versus Coding

Versus (VS) coding identifies the conflicts, struggles, and power issues observed in social action, reaction, and interaction as an X VS Y code, such as MEN VS WOMEN, CONSERVATIVES VS LIBERALS, FAITH VS LOGIC, and so on. Conflicts are rarely this dichotomous; they are typically nuanced and much more complex. But humans tend to perceive these struggles with an US VS THEM mindset. The codes can range from the observable to the conceptual and can be applied to data that show humans in tension with others, themselves, or ideologies.

What follows are examples of versus codes applied to the case study participant’s descriptions of his major medical expenses:

As an initial analytic tactic, group the versus codes into one of three categories: the Stakeholders , their Perceptions and/or Actions , and the Issues at stake. Examine how the three interrelate and identify the central ideological conflict at work as an X VS Y category. Analytic memos and the final write-up can detail the nuances of the issues.

Remember that what has been profiled in this section is a broad brushstroke description of just a few basic coding processes, several of which can be compatibly mixed and matched within a single analysis (see Saldaña’s (2016) The Coding Manual for Qualitative Researchers for a complete discussion). Certainly with additional data, more in-depth analysis can occur, but coding is only one approach to extracting and constructing preliminary meanings from the data corpus. What now follows are additional methods for qualitative analysis.

Qualitative Data Analysis Strategy: To Theme

To theme in QDA is to construct summative, phenomenological meanings from data through extended passages of text. Unlike codes, which are most often single words or short phrases that symbolically represent a datum, themes are extended phrases or sentences that summarize the manifest (apparent) and latent (underlying) meanings of data (Auerbach & Silverstein, 2003 ; Boyatzis, 1998 ). Themes, intended to represent the essences and essentials of humans’ lived experiences, can also be categorized or listed in superordinate and subordinate outline formats as an analytic tactic.

Below is the interview transcript example used in the previous coding sections. (Hopefully you are not too fatigued at this point with the transcript, but it is important to know how inquiry with the same data set can be approached in several different ways.) During the investigation of the ways middle-class Americans are influenced and affected by an economic recession, the researcher noticed that participants’ stories exhibited facets of what he labeled economic intelligence , or EI (based on the previously developed theories of Howard Gardner’s multiple intelligences and Daniel Goleman’s emotional intelligence). Notice how theming interprets what is happening through the use of two distinct phrases—ECONOMIC INTELLIGENCE IS (i.e., manifest or apparent meanings) and ECONOMIC INTELLIGENCE MEANS (i.e., latent or underlying meanings):

Unlike the 15 process codes and 30 in vivo codes in the previous examples, there are now 14 themes to work with. They could be listed in the order they appear, but one initial heuristic is to group them separately by “is” and “means” statements to detect any possible patterns (discussed later):

EI IS TAKING ADVANTAGE OF UNEXPECTED OPPORTUNITY

EI IS BUYING CHEAP

EI IS SAVING A FEW DOLLARS NOW AND THEN

EI IS SETTING PRIORITIES

EI IS FINDING CHEAPER FORMS OF ENTERTAINMENT

EI IS NOTICING PERSONAL AND NATIONAL ECONOMIC TRENDS

EI IS TAKING CARE OF ONE’S OWN HEALTH

EI MEANS THINKING BEFORE YOU ACT

EI MEANS SACRIFICE

EI MEANS KNOWING YOUR FLAWS

EI MEANS LIVING AN INEXPENSIVE LIFESTYLE

EI MEANS YOU CANNOT CONTROL EVERYTHING

EI MEANS KNOWING YOUR LUCK

There are several ways to categorize the themes as preparation for analytic memo writing. The first is to arrange them in outline format with superordinate and subordinate levels, based on how the themes seem to take organizational shape and structure. Simply cutting and pasting the themes in multiple arrangements on a text editing page eventually develops a sense of order to them. For example:

A second approach is to categorize the themes into similar clusters and to develop different category labels or theoretical constructs . A theoretical construct is an abstraction that transforms the central phenomenon’s themes into broader applications but can still use “is” and “means” as prompts to capture the bigger picture at work:

Theoretical Construct 1: EI Means Knowing the Unfortunate Present

Supporting Themes:

Theoretical Construct 2: EI Is Cultivating a Small Fortune

Theoretical Construct 3: EI Means a Fortunate Future

What follows is an analytic memo generated from the cut-and-paste arrangement of themes into “is” and “means” statements, into an outline, and into theoretical constructs:

March 19, 2017 EMERGENT THEMES: FORTUNE/FORTUNATELY/UNFORTUNATELY I first reorganized the themes by listing them in two groups: “is” and “means.” The “is” statements seemed to contain positive actions and constructive strategies for economic intelligence. The “means” statements held primarily a sense of caution and restriction with a touch of negativity thrown in. The first outline, with two major themes, LIVING AN INEXPENSIVE LIFESTYLE and YOU CANNOT CONTROL EVERYTHING, also had this same tone. This reminded me of the old children’s picture book, Fortunately/Unfortunately , and the themes of “fortune” as a motif for the three theoretical constructs came to mind. Knowing the Unfortunate Present means knowing what’s (most) important and what’s (mostly) uncontrollable in one’s personal economic life. Cultivating a Small Fortune consists of those small money-saving actions that, over time, become part of one’s lifestyle. A Fortunate Future consists of heightened awareness of trends and opportunities at micro and macro levels, with the understanding that health matters can idiosyncratically affect one’s fortune. These three constructs comprise this particular individual’s EI—economic intelligence.

Again, keep in mind that the examples for coding and theming were from one small interview transcript excerpt. The number of codes and their categorization would increase, given a longer interview and/or multiple interviews to analyze. But the same basic principles apply: codes and themes relegated into patterned and categorized forms are heuristics—stimuli for good thinking through the analytic memo-writing process on how everything plausibly interrelates. Methodologists vary in the number of recommended final categories that result from analysis, ranging anywhere from three to seven, with traditional grounded theorists prescribing one central or core category from coded work.

Qualitative Data Analysis Strategy: To Assert

To assert in QDA is to put forward statements that summarize particular fieldwork and analytic observations that the researcher believes credibly represent and transcend the experiences. Educational anthropologist Frederick Erickson ( 1986 ) wrote a significant and influential chapter on qualitative methods that outlined heuristics for assertion development . Assertions are declarative statements of summative synthesis, supported by confirming evidence from the data and revised when disconfirming evidence or discrepant cases require modification of the assertions. These summative statements are generated from an interpretive review of the data corpus and then supported and illustrated through narrative vignettes—reconstructed stories from field notes, interview transcripts, or other data sources that provide a vivid profile as part of the evidentiary warrant.

Coding or theming data can certainly precede assertion development as a way of gaining intimate familiarity with the data, but Erickson’s ( 1986 ) methods are a more admittedly intuitive yet systematic heuristic for analysis. Erickson promotes analytic induction and exploration of and inferences about the data, based on an examination of the evidence and an accumulation of knowledge. The goal is not to look for “proof” to support the assertions, but to look for plausibility of inference-laden observations about the local and particular social world under investigation.

Assertion development is the writing of general statements, plus subordinate yet related ones called subassertions and a major statement called a key assertion that represents the totality of the data. One also looks for key linkages between them, meaning that the key assertion links to its related assertions, which then link to their respective subassertions. Subassertions can include particulars about any discrepant related cases or specify components of their parent assertions.

Excerpts from the interview transcript of our case study will be used to illustrate assertion development at work. By now, you should be quite familiar with the contents, so I will proceed directly to the analytic example. First, there is a series of thematically related statements the participant makes:

“Buy one package of chicken, get the second one free. Now that was a bargain. And I got some.”

“With Sweet Tomatoes I get those coupons for a few bucks off for lunch, so that really helps.”

“I don’t go to movies anymore. I rent DVDs from Netflix or Redbox or watch movies online—so much cheaper than paying over ten or twelve bucks for a movie ticket.”

Assertions can be categorized into low-level and high-level inferences . Low-level inferences address and summarize what is happening within the particulars of the case or field site—the micro . High-level inferences extend beyond the particulars to speculate on what it means in the more general social scheme of things—the meso or macro . A reasonable low-level assertion about the three statements above collectively might read, The participant finds several small ways to save money during a difficult economic period . A high-level inference that transcends the case to the meso level might read, Selected businesses provide alternatives and opportunities to buy products and services at reduced rates during a recession to maintain consumer spending.

Assertions are instantiated (i.e., supported) by concrete instances of action or participant testimony, whose patterns lead to more general description outside the specific field site. The author’s interpretive commentary can be interspersed throughout the report, but the assertions should be supported with the evidentiary warrant . A few assertions and subassertions based on the case interview transcript might read as follows (and notice how high-level assertions serve as the paragraphs’ topic sentences):

Selected businesses provide alternatives and opportunities to buy products and services at reduced rates during a recession to maintain consumer spending. Restaurants, for example, need to find ways to attract customers during difficult economic periods, when potential patrons may opt to eat inexpensively at home rather than spend more money dining out. Special offers can motivate cash-strapped clientele to patronize restaurants more frequently. An adult male dealing with such major expenses as underinsured dental care offers: “With Sweet Tomatoes I get those coupons for a few bucks off for lunch, so that really helps.” The film and video industries also seem to be suffering from a double-whammy during the current recession: less consumer spending on higher-priced entertainment, resulting in a reduced rate of movie theater attendance (recently 39 percent of the American population, according to a CNN report); coupled with a media technology and business revolution that provides consumers less costly alternatives through video rentals and Internet viewing: “I don’t go to movies anymore. I rent DVDs from Netflix or Redbox or watch movies online—so much cheaper than paying over ten or twelve bucks for a movie ticket.”

To clarify terminology, any assertion that has an if–then or predictive structure to it is called a proposition since it proposes a conditional event. For example, this assertion is also a proposition: “Special offers can motivate cash-strapped clientele to patronize restaurants more frequently.” Propositions are the building blocks of hypothesis testing in the field and for later theory construction. Research not only documents human action but also can sometimes formulate statements that predict it. This provides a transferable and generalizable quality in our findings to other comparable settings and contexts. And to clarify terminology further, all propositions are assertions, but not all assertions are propositions.

Particularizability —the search for specific and unique dimensions of action at a site and/or the specific and unique perspectives of an individual participant—is not intended to filter out trivial excess but to magnify the salient characteristics of local meaning. Although generalizable knowledge is difficult to formulate in qualitative inquiry since each naturalistic setting will contain its own unique set of social and cultural conditions, there will be some aspects of social action that are plausibly universal or “generic” across settings and perhaps even across time.

To work toward this, Erickson advocates that the interpretive researcher look for “concrete universals” by studying actions at a particular site in detail and then comparing those actions to actions at other sites that have also been studied in detail. The exhibit or display of these generalizable features is to provide a synoptic representation, or a view of the whole. What the researcher attempts to uncover is what is both particular and general at the site of interest, preferably from the perspective of the participants. It is from the detailed analysis of actions at a specific site that these universals can be concretely discerned, rather than abstractly constructed as in grounded theory.

In sum, assertion development is a qualitative data analytic strategy that relies on the researcher’s intense review of interview transcripts, field notes, documents, and other data to inductively formulate, with reasonable certainty, composite statements that credibly summarize and interpret participant actions and meanings and their possible representation of and transfer into broader social contexts and issues.

Qualitative Data Analysis Strategy: To Display

To display in QDA is to visually present the processes and dynamics of human or conceptual action represented in the data. Qualitative researchers use not only language but also illustrations to both analyze and display the phenomena and processes at work in the data. Tables, charts, matrices, flow diagrams, and other models and graphics help both you and your readers cognitively and conceptually grasp the essence and essentials of your findings. As you have seen thus far, even simple outlining of codes, categories, and themes is one visual tactic for organizing the scope of the data. Rich text, font, and format features such as italicizing, bolding, capitalizing, indenting, and bullet pointing provide simple emphasis to selected words and phrases within the longer narrative.

Think display was a phrase coined by methodologists Miles and Huberman ( 1994 ) to encourage the researcher to think visually as data were collected and analyzed. The magnitude of text can be essentialized into graphics for at-a-glance review. Bins in various shapes and lines of various thicknesses, along with arrows suggesting pathways and direction, render the study a portrait of action. Bins can include the names of codes, categories, concepts, processes, key participants, and/or groups.

As a simple example, Figure 29.1 illustrates the three categories’ interrelationship derived from process coding. It displays what could be the apex of this interaction, LIVING STRATEGICALLY, and its connections to THINKING STRATEGICALLY, which influences and affects SPENDING STRATEGICALLY.

Three categories’ interrelationship derived from process coding.

Figure 29.2 represents a slightly more complex (if not playful) model, based on the five major in vivo codes/categories generated from analysis. The graphic is used as a way of initially exploring the interrelationship and flow from one category to another. The use of different font styles, font sizes, and line and arrow thicknesses is intended to suggest the visual qualities of the participant’s language and his dilemmas—a way of heightening in vivo coding even further.

In vivo categories in rich text display

Accompanying graphics are not always necessary for a qualitative report. They can be very helpful for the researcher during the analytic stage as a heuristic for exploring how major ideas interrelate, but illustrations are generally included in published work when they will help supplement and clarify complex processes for readers. Photographs of the field setting or the participants (and only with their written permission) also provide evidentiary reality to the write-up and help your readers get a sense of being there.

Qualitative Data Analysis Strategy: To Narrate

To narrate in QDA is to create an evocative literary representation and presentation of the data in the form of creative nonfiction. All research reports are stories of one kind or another. But there is yet another approach to QDA that intentionally documents the research experience as story, in its traditional literary sense. Narrative inquiry serves to plot and story-line the participant’s experiences into what might be initially perceived as a fictional short story or novel. But the story is carefully crafted and creatively written to provide readers with an almost omniscient perspective about the participants’ worldview. The transformation of the corpus from database to creative nonfiction ranges from systematic transcript analysis to open-ended literary composition. The narrative, however, should be solidly grounded in and emerge from the data as a plausible rendering of social life.

The following is a narrative vignette based on interview transcript selections from the participant living through tough economic times:

Jack stood in front of the soft drink vending machine at work and looked almost worriedly at the selections. With both hands in his pants pockets, his fingers jingled the few coins he had inside them as he contemplated whether he could afford the purchase. Two dollars for a twenty-ounce bottle of Diet Coke. Two dollars. “I can practically get a two-liter bottle for that same price at the grocery store,” he thought. Then Jack remembered the upcoming dental surgery he needed—that would cost one thousand dollars—and the bottle of insulin and syringes he needed to buy for his diabetic, high maintenance cat—almost two hundred dollars. He sighed, took his hands out of his pockets, and walked away from the vending machine. He was skipping lunch that day anyway so he could stock up on dinner later at the cheap-but-filling all-you-can-eat Chinese buffet. He could get his Diet Coke there.

Narrative inquiry representations, like literature, vary in tone, style, and point of view. The common goal, however, is to create an evocative portrait of participants through the aesthetic power of literary form. A story does not always have to have a moral explicitly stated by its author. The reader reflects on personal meanings derived from the piece and how the specific tale relates to one’s self and the social world.

Qualitative Data Analysis Strategy: To Poeticize

To poeticize in QDA is to create an evocative literary representation and presentation of the data in poetic form. One approach to analyzing or documenting analytic findings is to strategically truncate interview transcripts, field notes, and other pertinent data into poetic structures. Like coding, poetic constructions capture the essence and essentials of data in a creative, evocative way. The elegance of the format attests to the power of carefully chosen language to represent and convey complex human experience.

In vivo codes (codes based on the actual words used by participants themselves) can provide imagery, symbols, and metaphors for rich category, theme, concept, and assertion development, in addition to evocative content for arts-based interpretations of the data. Poetic inquiry takes note of what words and phrases seem to stand out from the data corpus as rich material for reinterpretation. Using some of the participant’s own language from the interview transcript illustrated previously, a poetic reconstruction or “found poetry” might read as follows:

Scary Times

Scary times …
spending more
  (another ding in my wallet)
a couple of thousand
  (another ding in my wallet)
insurance is just worthless
  (another ding in my wallet)
pick up the tab
  (another ding in my wallet)
not putting as much into savings
  (another ding in my wallet)
It all adds up.

Think twice:
  don’t really need
  skip
Think twice, think cheap:
  coupons
  bargains
  two-for-one
  free
Think twice, think cheaper:
  stock up
  all-you-can-eat
  (cheap—and filling)
It all adds up.

Anna Deavere Smith, a verbatim theatre performer, attests that people speak in forms of “organic poetry” in everyday life. Thus, in vivo codes can provide core material for poetic representation and presentation of lived experiences, potentially transforming the routine and mundane into the epic. Some researchers also find the genre of poetry to be the most effective way to compose original work that reflects their own fieldwork experiences and autoethnographic stories.

Qualitative Data Analysis Strategy: To Compute

To compute in QDA is to employ specialized software programs for qualitative data management and analysis. The acronym for computer-assisted qualitative data analysis software is CAQDAS. Practitioners in the field hold diverse opinions about the utility of such specialized programs. Unlike statistical software, CAQDAS does not actually analyze data for you at higher conceptual levels. The packages serve primarily as a repository for your data (both textual and visual) that enables you to code them. They can also perform such functions as calculating the number of times a particular word or phrase appears in the data corpus (a particularly useful function for content analysis) and can display selected facets after coding, such as possible interrelationships. Basic software such as Microsoft Word and Excel provides utilities that can store and, with some preformatting and strategic entry, organize qualitative data to enable the researcher’s analytic review. The following Internet addresses are listed to help in exploring selected CAQDAS packages and obtaining demonstration/trial software; video tutorials are available on the companies’ websites and on YouTube:

ATLAS.ti: http://www.atlasti.com

Dedoose: http://www.dedoose.com

HyperRESEARCH: http://www.researchware.com

MAXQDA: http://www.maxqda.com

NVivo: http://www.qsrinternational.com

QDA Miner: http://www.provalisresearch.com

Quirkos: http://www.quirkos.com

Transana: http://www.transana.com

V-Note: http://www.v-note.org

Some qualitative researchers attest that the software is indispensable for qualitative data management, especially for large-scale studies. Others feel that the learning curve of most CAQDAS programs is too lengthy to be of pragmatic value, especially for small-scale studies. From my own experience, if you have an aptitude for picking up quickly on the scripts and syntax of software programs, explore one or more of the packages listed. If you are a novice to qualitative research, though, I recommend working manually or “by hand” for your first project so you can focus exclusively on the data and not on the software.
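The word- and phrase-frequency function mentioned above is one CAQDAS feature that is easy to approximate by hand. A minimal sketch (the transcript snippet is abridged and paraphrased from the case example):

```python
import re
from collections import Counter

transcript = (
    "I've always lived kind of cheap. I'm not a big spender. "
    "Insurance is just worthless, so that's another ding in my wallet. "
    "It all adds up, so I think twice about the little things."
)

# Tokenize to lowercase words and tally their frequencies.
words = re.findall(r"[a-z']+", transcript.lower())
counts = Counter(words)

# How often does a particular word or phrase appear?
print(counts["cheap"])                                # word frequency
print(transcript.lower().count("ding in my wallet"))  # phrase frequency
```

A tally like this can flag candidate in vivo codes for content analysis, but, as noted above, it tells you nothing about meaning; that interpretive work remains the researcher's.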

Qualitative Data Analysis Strategy: To Verify

To verify in QDA is to administer an audit of “quality control” to your analysis. After your data analysis and the development of key findings, you may be thinking to yourself, “Did I get it right?” “Did I learn anything new?” Reliability and validity are terms and constructs of the positivist quantitative paradigm that refer to the replicability and accuracy of measures. But in the qualitative paradigm, other constructs are more appropriate.

Credibility and trustworthiness (Lincoln & Guba, 1985 ) are two factors to consider when collecting and analyzing the data and presenting your findings. In our qualitative research projects, we must present a convincing story to our audiences that we “got it right” methodologically. In other words, the amount of time we spent in the field, the number of participants we interviewed, the analytic methods we used, the thinking processes evident to reach our conclusions, and so on should be “just right” to assure the reader that we have conducted our jobs soundly. But remember that we can never conclusively prove something; we can only, at best, convincingly suggest. Research is an act of persuasion.

Credibility in a qualitative research report can be established in several ways. First, citing the key writers of related works in your literature review is essential. Seasoned researchers will sometimes assess whether a novice has “done her homework” by reviewing the bibliography or references. You need not list everything that seminal writers have published about a topic, but their names should appear at least once as evidence that you know the field’s key figures and their work.

Credibility can also be established by specifying the particular data analysis methods you employed (e.g., “Interview transcripts were taken through two cycles of process coding, resulting in three primary categories”), through corroboration of data analysis with the participants themselves (e.g., “I asked my participants to read and respond to a draft of this report for their confirmation of accuracy and recommendations for revision”), or through your description of how data and findings were substantiated (e.g., “Data sources included interview transcripts, participant observation field notes, and participant response journals to gather multiple perspectives about the phenomenon”).

Statistician W. Edwards Deming is often credited with this cautionary advice about making a convincing argument: “Without data, you’re just another person with an opinion.” Thus, researchers can also support their findings with relevant, specific evidence by quoting participants directly and/or including field note excerpts from the data corpus. These serve both as illustrative examples for readers and as more credible testimony of what happened in the field.

Trustworthiness, a companion to credibility, is established when we inform the reader of our research processes. Some make the case by stating the duration of fieldwork (e.g., “Forty-five clock hours were spent in the field”; “The study extended over a 10-month period”). Others put forth the amounts of data they gathered (e.g., “Sixteen individuals were interviewed”; “My field notes totaled 157 pages”). Sometimes trustworthiness is established when we are up front or confessional about the analytic or ethical dilemmas we encountered (e.g., “It was difficult to watch the participant’s teaching effectiveness erode during fieldwork”; “Analysis was stalled until I recoded the entire data corpus with a new perspective”).

The bottom line is that credibility and trustworthiness are matters of researcher honesty and integrity. Anyone can write that he worked ethically, rigorously, and reflexively, but only the writer will ever know the truth. There is no shame if something goes wrong with your research. In fact, that is more than likely the rule, not the exception. Work and write transparently to achieve credibility and trustworthiness with your readers.

The length of this chapter does not permit me to expand on other QDA strategies such as to conceptualize, theorize, and write. Yet there are even more subtle thinking strategies to employ throughout the research enterprise, such as to synthesize, problematize, and create. Each researcher has his or her own ways of working, and deep reflexivity (another strategy) on your own methodology and methods as a qualitative inquirer throughout fieldwork and writing provides you with metacognitive awareness of data analysis processes and possibilities.

Data analysis is one of the most elusive practices in qualitative research, perhaps because it is a backstage, behind-the-scenes, in-your-head enterprise. It is not that there are no models to follow. It is just that each project is contextual and case specific. The unique data you collect from your unique research design must be approached with your unique analytic signature. It truly is a learning-by-doing process, so accept that and leave yourself open to discovery and insight as you carefully scrutinize the data corpus for patterns, categories, themes, concepts, assertions, propositions, and possibly new theories through strategic analysis.

Auerbach, C. F., & Silverstein, L. B. (2003). Qualitative data: An introduction to coding and analysis. New York, NY: New York University Press.

Birks, M., & Mills, J. (2015). Grounded theory: A practical guide (2nd ed.). London, England: Sage.

Boyatzis, R. E. (1998). Transforming qualitative information: Thematic analysis and code development. Thousand Oaks, CA: Sage.

Bryant, A. (2017). Grounded theory and grounded theorizing: Pragmatism in research practice. New York, NY: Oxford University Press.

Bryant, A., & Charmaz, K. (Eds.). (2019). The Sage handbook of current developments in grounded theory. London, England: Sage.

Charmaz, K. (2014). Constructing grounded theory: A practical guide through qualitative analysis (2nd ed.). London, England: Sage.

Erickson, F. (1986). Qualitative methods in research on teaching. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp. 119–161). New York, NY: Macmillan.

Galman, S. C. (2013). The good, the bad, and the data: Shane the lone ethnographer’s basic guide to qualitative data analysis. Walnut Creek, CA: Left Coast Press.

Geertz, C. (1983). Local knowledge: Further essays in interpretive anthropology. New York, NY: Basic Books.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.

Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis (2nd ed.). Thousand Oaks, CA: Sage.

Saldaña, J. (2016). The coding manual for qualitative researchers (3rd ed.). London, England: Sage.

Saldaña, J., & Omasta, M. (2018). Qualitative research: Analyzing life. Thousand Oaks, CA: Sage.

Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.

Stern, P. N., & Porr, C. J. (2011). Essentials of accessible grounded theory. Walnut Creek, CA: Left Coast Press.

Strauss, A. L. (1987). Qualitative analysis for social scientists. Cambridge, England: Cambridge University Press.

Sunstein, B. S., & Chiseri-Strater, E. (2012). FieldWorking: Reading and writing research (4th ed.). Boston, MA: Bedford/St. Martin’s.

Wertz, F. J., Charmaz, K., McMullen, L. M., Josselson, R., Anderson, R., & McSpadden, E. (2011). Five ways of doing qualitative analysis: Phenomenological psychology, grounded theory, discourse analysis, narrative research, and intuitive inquiry. New York, NY: Guilford Press.


  • Open access
  • Published: 27 May 2020

How to use and assess qualitative research methods

Loraine Busetto (ORCID: orcid.org/0000-0002-9228-7875), Wolfgang Wick & Christoph Gumbinger

Neurological Research and Practice, volume 2, article number 14 (2020)


This paper aims to provide an overview of the use and assessment of qualitative research methods in the health sciences. Qualitative research can be defined as the study of the nature of phenomena and is especially appropriate for answering questions of why something is (not) observed, assessing complex multi-component interventions, and focussing on intervention improvement. The most common methods of data collection are document study, (non-) participant observations, semi-structured interviews and focus groups. For data analysis, field-notes and audio-recordings are transcribed into protocols and transcripts, and coded using qualitative data management software. Criteria such as checklists, reflexivity, sampling strategies, piloting, co-coding, member-checking and stakeholder involvement can be used to enhance and assess the quality of the research conducted. Using qualitative in addition to quantitative designs will equip us with better tools to address a greater range of research problems, and to fill in blind spots in current neurological research and practice.

The aim of this paper is to provide an overview of qualitative research methods, including hands-on information on how they can be used, reported and assessed. This article is intended for beginning qualitative researchers in the health sciences as well as experienced quantitative researchers who wish to broaden their understanding of qualitative research.

What is qualitative research?

Qualitative research is defined as “the study of the nature of phenomena”, including “their quality, different manifestations, the context in which they appear or the perspectives from which they can be perceived”, but excluding “their range, frequency and place in an objectively determined chain of cause and effect” [ 1 ]. This formal definition can be complemented with a more pragmatic rule of thumb: qualitative research generally includes data in the form of words rather than numbers [ 2 ].

Why conduct qualitative research?

Because some research questions cannot be answered using (only) quantitative methods. For example, one Australian study addressed the issue of why patients from Aboriginal communities often present late or not at all to specialist services offered by tertiary care hospitals. Using qualitative interviews with patients and staff, it found one of the most significant access barriers to be transportation problems, including some towns and communities simply not having a bus service to the hospital [ 3 ]. A quantitative study could have measured the number of patients over time or even looked at possible explanatory factors – but only those previously known or suspected to be of relevance. To discover reasons for observed patterns, especially the invisible or surprising ones, qualitative designs are needed.

While qualitative research is common in other fields, it is still relatively underrepresented in health services research. The latter field is more traditionally rooted in the evidence-based-medicine paradigm, as seen in “research that involves testing the effectiveness of various strategies to achieve changes in clinical practice, preferably applying randomised controlled trial study designs (...)” [ 4 ]. This focus on quantitative research and specifically randomised controlled trials (RCTs) is visible in the idea of a hierarchy of research evidence, which assumes that some research designs are objectively better than others, and that choosing a “lesser” design is only acceptable when the better ones are not practically or ethically feasible [ 5 , 6 ]. Others, however, argue that an objective hierarchy does not exist, and that, instead, the research design and methods should be chosen to fit the specific research question at hand – “questions before methods” [ 2 , 7 , 8 , 9 ]. This means that even when an RCT is possible, some research problems require a different design that is better suited to addressing them. Arguing in JAMA, Berwick uses the example of rapid response teams in hospitals, which he describes as “a complex, multicomponent intervention – essentially a process of social change” susceptible to a range of different context factors including leadership or organisation history. According to him, “[in] such complex terrain, the RCT is an impoverished way to learn. Critics who use it as a truth standard in this context are incorrect” [ 8 ]. Instead of limiting oneself to RCTs, Berwick recommends embracing a wider range of methods, including qualitative ones, which for “these specific applications, (...) are not compromises in learning how to improve; they are superior” [ 8 ].

Research problems that can be approached particularly well using qualitative methods include assessing complex multi-component interventions or systems (of change), addressing questions beyond “what works”, towards “what works for whom when, how and why”, and focussing on intervention improvement rather than accreditation [ 7 , 9 , 10 , 11 , 12 ]. Using qualitative methods can also help shed light on the “softer” side of medical treatment. For example, while quantitative trials can measure the costs and benefits of neuro-oncological treatment in terms of survival rates or adverse effects, qualitative research can help provide a better understanding of patient or caregiver stress, visibility of illness or out-of-pocket expenses.

How to conduct qualitative research?

Given that qualitative research is characterised by flexibility, openness and responsivity to context, the steps of data collection and analysis are not as separate and consecutive as they tend to be in quantitative research [ 13 , 14 ]. As Fossey puts it: “sampling, data collection, analysis and interpretation are related to each other in a cyclical (iterative) manner, rather than following one after another in a stepwise approach” [ 15 ]. The researcher can make educated decisions with regard to the choice of method, how they are implemented, and to which and how many units they are applied [ 13 ]. As shown in Fig.  1 , this can involve several back-and-forth steps between data collection and analysis where new insights and experiences can lead to adaptation and expansion of the original plan. Some insights may also necessitate a revision of the research question and/or the research design as a whole. The process ends when saturation is achieved, i.e. when no relevant new information can be found (see also below: sampling and saturation). For reasons of transparency, it is essential for all decisions as well as the underlying reasoning to be well-documented.

Figure 1: Iterative research process

While it is not always explicitly addressed, qualitative methods reflect a different underlying research paradigm than quantitative research (e.g. constructivism or interpretivism as opposed to positivism). The choice of methods can be based on the respective underlying substantive theory or theoretical framework used by the researcher [ 2 ].

Data collection

The methods of qualitative data collection most commonly used in health research are document study, observations, semi-structured interviews and focus groups [ 1 , 14 , 16 , 17 ].

Document study

Document study (also called document analysis) refers to the review by the researcher of written materials [ 14 ]. These can include personal and non-personal documents such as archives, annual reports, guidelines, policy documents, diaries or letters.

Observations

Observations are particularly useful to gain insights into a certain setting and actual behaviour – as opposed to reported behaviour or opinions [ 13 ]. Qualitative observations can be either participant or non-participant in nature. In participant observations, the observer is part of the observed setting, for example a nurse working in an intensive care unit [ 18 ]. In non-participant observations, the observer is “on the outside looking in”, i.e. present in but not part of the situation, trying not to influence the setting by their presence. Observations can be planned (e.g. for 3 h during the day or night shift) or ad hoc (e.g. as soon as a stroke patient arrives at the emergency room). During the observation, the observer takes notes on everything or certain pre-determined parts of what is happening around them, for example focusing on physician-patient interactions or communication between different professional groups. Written notes can be taken during or after the observations, depending on feasibility (which is usually lower during participant observations) and acceptability (e.g. when the observer is perceived to be judging the observed). Afterwards, these field notes are transcribed into observation protocols. If more than one observer was involved, field notes are taken independently, but notes can be consolidated into one protocol after discussions. Advantages of conducting observations include minimising the distance between the researcher and the researched, the potential discovery of topics that the researcher did not realise were relevant and gaining deeper insights into the real-world dimensions of the research problem at hand [ 18 ].

Semi-structured interviews

Hijmans & Kuyper describe qualitative interviews as “an exchange with an informal character, a conversation with a goal” [ 19 ]. Interviews are used to gain insights into a person’s subjective experiences, opinions and motivations – as opposed to facts or behaviours [ 13 ]. Interviews can be distinguished by the degree to which they are structured (i.e. a questionnaire), open (e.g. free conversation or autobiographical interviews) or semi-structured [ 2 , 13 ]. Semi-structured interviews are characterized by open-ended questions and the use of an interview guide (or topic guide/list) in which the broad areas of interest, sometimes including sub-questions, are defined [ 19 ]. The pre-defined topics in the interview guide can be derived from the literature, previous research or a preliminary method of data collection, e.g. document study or observations. The topic list is usually adapted and improved at the start of the data collection process as the interviewer learns more about the field [ 20 ]. Across interviews, the focus on the different (blocks of) questions may differ, and some questions may be skipped altogether (e.g. if the interviewee is not able or willing to answer them, or out of concern for the total length of the interview) [ 20 ]. Qualitative interviews are usually not conducted in written format, as this impedes the interactive component of the method [ 20 ]. In comparison to written surveys, qualitative interviews have the advantage of being interactive and allowing unexpected topics to emerge and be taken up by the researcher. This can also help overcome a provider- or researcher-centred bias often found in written surveys, which, by nature, can only measure what is already known or expected to be of relevance to the researcher. Interviews can be audio- or video-taped, but sometimes it is only feasible or acceptable for the interviewer to take written notes [ 14 , 16 , 20 ].

Focus groups

Focus groups are group interviews to explore participants’ expertise and experiences, including explorations of how and why people behave in certain ways [ 1 ]. Focus groups usually consist of 6–8 people and are led by an experienced moderator following a topic guide or “script” [ 21 ]. They can involve an observer who takes note of the non-verbal aspects of the situation, possibly using an observation guide [ 21 ]. Depending on researchers’ and participants’ preferences, the discussions can be audio- or video-taped and transcribed afterwards [ 21 ]. Focus groups are useful for bringing together homogeneous (to a lesser extent heterogeneous) groups of participants with relevant expertise and experience on a given topic on which they can share detailed information [ 21 ]. Focus groups are a relatively easy, fast and inexpensive method to gain access to information on interactions in a given group, i.e. “the sharing and comparing” among participants [ 21 ]. Disadvantages include less control over the process and a lesser extent to which each individual may participate. Moreover, focus group moderators need experience, as do those tasked with the analysis of the resulting data. Focus groups can be less appropriate for discussing sensitive topics that participants might be reluctant to disclose in a group setting [ 13 ]. Moreover, attention must be paid to the emergence of “groupthink” as well as possible power dynamics within the group, e.g. when patients are awed or intimidated by health professionals.

Choosing the “right” method

As explained above, the school of thought underlying qualitative research assumes no objective hierarchy of evidence and methods. This means that each choice of single or combined methods has to be based on the research question that needs to be answered and a critical assessment with regard to whether or to what extent the chosen method can accomplish this – i.e. the “fit” between question and method [ 14 ]. It is necessary for these decisions to be documented when they are being made, and to be critically discussed when reporting methods and results.

Let us assume that our research aim is to examine the (clinical) processes around acute endovascular treatment (EVT), from the patient’s arrival at the emergency room to recanalization, with the aim to identify possible causes for delay and/or other causes for sub-optimal treatment outcome. As a first step, we could conduct a document study of the relevant standard operating procedures (SOPs) for this phase of care – are they up-to-date and in line with current guidelines? Do they contain any mistakes, irregularities or uncertainties that could cause delays or other problems? Regardless of the answers to these questions, the results have to be interpreted based on what they are: a written outline of what care processes in this hospital should look like. If we want to know what they actually look like in practice, we can conduct observations of the processes described in the SOPs. These results can (and should) be analysed in themselves, but also in comparison to the results of the document analysis, especially as regards relevant discrepancies. Do the SOPs outline specific tests for which no equipment can be observed or tasks to be performed by specialized nurses who are not present during the observation? It might also be possible that the written SOP is outdated, but the actual care provided is in line with current best practice. In order to find out why these discrepancies exist, it can be useful to conduct interviews. Are the physicians simply not aware of the SOPs (because their existence is limited to the hospital’s intranet) or do they actively disagree with them or does the infrastructure make it impossible to provide the care as described? Another rationale for adding interviews is that some situations (or all of their possible variations for different patient groups or the day, night or weekend shift) cannot practically or ethically be observed. 
In this case, it is possible to ask those involved to report on their actions – being aware that this is not the same as the actual observation. A senior physician’s or hospital manager’s description of certain situations might differ from a nurse’s or junior physician’s one, maybe because they intentionally misrepresent facts or maybe because different aspects of the process are visible or important to them. In some cases, it can also be relevant to consider to whom the interviewee is disclosing this information – someone they trust, someone they are otherwise not connected to, or someone they suspect or are aware of being in a potentially “dangerous” power relationship to them. Lastly, a focus group could be conducted with representatives of the relevant professional groups to explore how and why exactly they provide care around EVT. The discussion might reveal discrepancies (between SOPs and actual care or between different physicians) and motivations to the researchers as well as to the focus group members that they might not have been aware of themselves. For the focus group to deliver relevant information, attention has to be paid to its composition and conduct, for example, to make sure that all participants feel safe to disclose sensitive or potentially problematic information or that the discussion is not dominated by (senior) physicians only. The resulting combination of data collection methods is shown in Fig.  2 .

Figure 2: Possible combination of data collection methods. (Icon attributions: “Book” by Serhii Smirnov, “Interview” by Adrien Coquet, FR, “Magnifying Glass” by anggun, ID, “Business communication” by Vectors Market; all from the Noun Project.)

The combination of multiple data sources as described for this example can be referred to as “triangulation”, in which multiple measurements are carried out from different angles to achieve a more comprehensive understanding of the phenomenon under study [ 22 , 23 ].

Data analysis

To analyse the data collected through observations, interviews and focus groups, these need to be transcribed into protocols and transcripts (see Fig.  3 ). Interviews and focus groups can be transcribed verbatim, with or without annotations for behaviour (e.g. laughing, crying, pausing) and with or without phonetic transcription of dialects and filler words, depending on what is expected or known to be relevant for the analysis. In the next step, the protocols and transcripts are coded, that is, marked (or tagged, labelled) with one or more short descriptors of the content of a sentence or paragraph [ 2 , 15 , 23 ]. Jansen describes coding as “connecting the raw data with ‘theoretical’ terms” [ 20 ]. In a more practical sense, coding makes raw data sortable. This makes it possible to extract and examine all segments describing, say, a tele-neurology consultation from multiple data sources (e.g. SOPs, emergency room observations, staff and patient interviews). In a process of synthesis and abstraction, the codes are then grouped, summarised and/or categorised [ 15 , 20 ]. The end product of the coding or analysis process is a descriptive theory of the behavioural pattern under investigation [ 20 ]. The coding process is usually performed using qualitative data management software, the most common packages being NVivo, MAXQDA and ATLAS.ti. It should be noted that these are data management tools which support the analysis performed by the researcher(s) [ 14 ].

Figure 3: From data collection to data analysis. (Icon attributions: see Fig. 2, also “Speech to text” by Trevor Dsouza, “Field Notes” by Mike O’Brien, US, “Voice Record” by ProSymbols, US, “Inspection” by Made, AU, and “Cloud” by Graphic Tigers; all from the Noun Project.)

How to report qualitative research?

Protocols of qualitative research can be published separately and in advance of the study results. However, the aim is not the same as in RCT protocols, i.e. to pre-define and set in stone the research questions and primary or secondary endpoints. Rather, it is a way to describe the research methods in detail, which might not be possible in the results paper given journals’ word limits. Qualitative research papers are usually longer than their quantitative counterparts to allow for deep understanding and so-called “thick description”. In the methods section, the focus is on transparency of the methods used, including why, how and by whom they were implemented in the specific study setting, so as to enable a discussion of whether and how this may have influenced data collection, analysis and interpretation. The results section usually starts with a paragraph outlining the main findings, followed by more detailed descriptions of, for example, the commonalities, discrepancies or exceptions per category [ 20 ]. Here it is important to support the main findings with relevant quotations, which may add information, context, emphasis or real-life examples [ 20 , 23 ]. It is subject to debate in the field whether it is relevant to state the exact number or percentage of respondents supporting a certain statement (e.g. “Five interviewees expressed negative feelings towards XYZ”) [ 21 ].

How to combine qualitative with quantitative research?

Qualitative methods can be combined with other methods in multi- or mixed methods designs, which “[employ] two or more different methods […] within the same study or research program rather than confining the research to one single method” [ 24 ]. Reasons for combining methods can be diverse, including triangulation for corroboration of findings, complementarity for illustration and clarification of results, expansion to extend the breadth and range of the study, explanation of (unexpected) results generated with one method with the help of another, or offsetting the weakness of one method with the strength of another [ 1 , 17 , 24 , 25 , 26 ]. The resulting designs can be classified according to when, why and how the different quantitative and/or qualitative data strands are combined. The three most common types of mixed methods designs are the convergent parallel design, the explanatory sequential design and the exploratory sequential design. The designs with examples are shown in Fig.  4 .

Figure 4: Three common mixed methods designs

In the convergent parallel design, a qualitative study is conducted in parallel to and independently of a quantitative study, and the results of both studies are compared and combined at the stage of interpretation of results. Using the above example of EVT provision, this could entail setting up a quantitative EVT registry to measure process times and patient outcomes in parallel to conducting the qualitative research outlined above, and then comparing results. Amongst other things, this would make it possible to assess whether interview respondents’ subjective impressions of patients receiving good care match modified Rankin Scores at follow-up, or whether observed delays in care provision are exceptions or the rule when compared to door-to-needle times as documented in the registry. In the explanatory sequential design, a quantitative study is carried out first, followed by a qualitative study to help explain the results from the quantitative study. This would be an appropriate design if the registry alone had revealed relevant delays in door-to-needle times, with the qualitative study then used to understand where and why these occurred, and how they could be improved. In the exploratory design, the qualitative study is carried out first and its results help inform and build the quantitative study in the next step [ 26 ]. If the qualitative study around EVT provision had shown a high level of dissatisfaction among the staff members involved, a quantitative questionnaire investigating staff satisfaction could be set up in the next step, informed by the qualitative findings about the topics on which dissatisfaction had been expressed. Amongst other things, the questionnaire design would make it possible to widen the reach of the research to more respondents from different (types of) hospitals, regions, countries or settings, and to conduct sub-group analyses for different professional groups.

How to assess qualitative research?

A variety of assessment criteria and lists have been developed for qualitative research, ranging in their focus and comprehensiveness [ 14 , 17 , 27 ]. However, none of these has been elevated to the “gold standard” in the field. In the following, we therefore focus on a set of commonly used assessment criteria that, from a practical standpoint, a researcher can look for when assessing a qualitative research report or paper.

Assessors should check the authors’ use of and adherence to the relevant reporting checklists (e.g. Standards for Reporting Qualitative Research (SRQR)) to make sure all items that are relevant for this type of research are addressed [ 23 , 28 ]. Discussions of quantitative measures in addition to or instead of these qualitative measures can be a sign of lower quality of the research (paper). Providing and adhering to a checklist for qualitative research contributes to an important quality criterion for qualitative research, namely transparency [ 15 , 17 , 23 ].

Reflexivity

While methodological transparency and complete reporting are relevant for all types of research, some additional criteria must be taken into account for qualitative research. This includes what is called reflexivity, i.e. sensitivity to the relationship between the researcher and the researched, including how contact was established and maintained, or the background and experience of the researcher(s) involved in data collection and analysis. Depending on the research question and population to be researched, this can be limited to professional experience, but it may also include gender, age or ethnicity [ 17 , 27 ]. These details are relevant because in qualitative research, as opposed to quantitative research, the researcher as a person cannot be isolated from the research process [ 23 ]. It may influence the conversation when an interviewed patient speaks to an interviewer who is a physician, or when an interviewee is asked to discuss a gynaecological procedure with a male interviewer, and therefore the reader must be made aware of these details [ 19 ].

Sampling and saturation

The aim of qualitative sampling is for all variants of the objects of observation that are deemed relevant for the study to be present in the sample, “to see the issue and its meanings from as many angles as possible” [ 1 , 16 , 19 , 20 , 27 ], and to ensure “information-richness” [ 15 ]. An iterative sampling approach is advised, in which data collection (e.g. five interviews) is followed by data analysis, followed by more data collection to find variants that are lacking in the current sample. This process continues until no new (relevant) information can be found and further sampling becomes redundant – which is called saturation [ 1 , 15 ]. In other words: qualitative data collection finds its end point not a priori, but when the research team determines that saturation has been reached [ 29 , 30 ].
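The collect-analyse-collect cycle described above can be made concrete with a small sketch. This is purely illustrative and not part of the original article: the interview snippets are invented, and `extract_codes` is a crude keyword-matching stand-in for what would in practice be careful manual coding.

```python
# Sketch of iterative sampling until saturation: collect a batch of
# interviews, code it, and stop once a whole batch yields no new codes.

def extract_codes(interview):
    # Hypothetical stand-in for qualitative coding: a "code" here is
    # simply a keyword of interest found in the transcript.
    keywords = {"delay", "handover", "imaging", "consent", "transport"}
    return {w for w in interview.lower().split() if w in keywords}

def collect_until_saturation(interviews, batch_size=2):
    codebook = set()
    for start in range(0, len(interviews), batch_size):
        batch = interviews[start:start + batch_size]
        new_codes = set().union(*(extract_codes(i) for i in batch)) - codebook
        codebook |= new_codes
        if not new_codes:   # an entire batch added nothing new:
            break           # saturation reached, stop sampling
    return codebook

interviews = [
    "the transport to imaging caused a delay",
    "handover between teams added delay",
    "consent took long before imaging",            # adds 'consent'
    "again the handover and transport were slow",  # nothing new
    "more remarks about delay and imaging",        # batch yields nothing new
]
print(sorted(collect_until_saturation(interviews)))
# → ['consent', 'delay', 'handover', 'imaging', 'transport']
```

The stopping point is determined by the data, not fixed in advance, which mirrors why qualitative sample sizes cannot be meaningfully set a priori.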

This is also the reason why most qualitative studies use deliberate instead of random sampling strategies. This is generally referred to as “purposive sampling”, in which researchers pre-define which types of participants or cases they need to include so as to cover all variations that are expected to be of relevance, based on the literature, previous experience or theory (i.e. theoretical sampling) [ 14 , 20 ]. Other types of purposive sampling include (but are not limited to) maximum variation sampling, critical case sampling or extreme or deviant case sampling [ 2 ]. In the above EVT example, a purposive sample could include all relevant professional groups and/or all relevant stakeholders (patients, relatives) and/or all relevant times of observation (day, night and weekend shift).

Assessors of qualitative research should check whether the considerations underlying the sampling strategy were sound and whether or how researchers tried to adapt and improve their strategies in stepwise or cyclical approaches between data collection and analysis to achieve saturation [ 14 ].

Good qualitative research is iterative in nature, i.e. it goes back and forth between data collection and analysis, revising and improving the approach where necessary. One example of this is pilot interviews, where different aspects of the interview (especially the interview guide, but also, for example, the site of the interview or whether the interview can be audio-recorded) are tested with a small number of respondents, evaluated and revised [ 19 ]. In doing so, the interviewer learns which wording or types of questions work best, or what the best length is for an interview with patients who have trouble concentrating for an extended time. Of course, the same reasoning applies to observations or focus groups, which can also be piloted.

Ideally, coding should be performed by at least two researchers, especially at the beginning of the coding process when a common approach must be defined, including the establishment of a useful coding list (or tree), and when a common meaning of individual codes must be established [ 23 ]. An initial sub-set of the transcripts, or all of them, can be coded independently by the coders and then compared and consolidated in regular discussions within the research team. This ensures that codes are applied consistently to the research data.
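The compare-and-consolidate step can be pictured as follows. This is a hypothetical sketch, not the authors' procedure: the segment identifiers and code sets are invented, and in practice the comparison feeds a team discussion rather than an automated merge.

```python
# Sketch: compare two researchers' independent coding of the same
# transcript segments and flag disagreements for team discussion.
coder_a = {"seg1": {"delay"}, "seg2": {"teamwork"}, "seg3": {"delay", "stress"}}
coder_b = {"seg1": {"delay"}, "seg2": {"hierarchy"}, "seg3": {"delay"}}

to_discuss = {
    seg: (coder_a[seg], coder_b[seg])
    for seg in coder_a
    if coder_a[seg] != coder_b[seg]   # any mismatch needs consolidation
}

for seg, (a, b) in sorted(to_discuss.items()):
    print(f"{seg}: coder A {sorted(a)} vs coder B {sorted(b)}")
```

Here `seg2` (different codes) and `seg3` (one coder applied an extra code) would be brought to the next team meeting to agree on a shared meaning of the codes.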

Member checking

Member checking, also called respondent validation , refers to the practice of checking back with study respondents to see if the research is in line with their views [ 14 , 27 ]. This can happen after data collection or analysis or when first results are available [ 23 ]. For example, interviewees can be provided with (summaries of) their transcripts and asked whether they believe this to be a complete representation of their views or whether they would like to clarify or elaborate on their responses [ 17 ]. Respondents’ feedback on these issues then becomes part of the data collection and analysis [ 27 ].

Stakeholder involvement

In those niches where qualitative approaches have been able to evolve and grow, a new trend has seen the inclusion of patients and their representatives not only as study participants (i.e. “members”, see above) but as consultants to and active participants in the broader research process [ 31 , 32 , 33 ]. The underlying assumption is that patients and other stakeholders hold unique perspectives and experiences that add value beyond their own single story, making the research more relevant and beneficial to researchers, study participants and (future) patients alike [ 34 , 35 ]. Using the example of patients on or nearing dialysis, a recent scoping review found that 80% of clinical research did not address the top 10 research priorities identified by patients and caregivers [ 32 , 36 ]. In this sense, the involvement of the relevant stakeholders, especially patients and relatives, is increasingly being seen as a quality indicator in and of itself.

How not to assess qualitative research

The above overview does not include certain items that are routine in assessments of quantitative research. What follows is a non-exhaustive, non-representative, experience-based list of the quantitative criteria often applied to the assessment of qualitative research, as well as an explanation of the limited usefulness of these endeavours.

Protocol adherence

Given the openness and flexibility of qualitative research, it should not be assessed by how well it adheres to pre-determined and fixed strategies – in other words: its rigidity. Instead, the assessor should look for signs of adaptation and refinement based on lessons learned from earlier steps in the research process.

Sample size

For the reasons explained above, qualitative research does not require specific sample sizes, nor does it require that the sample size be determined a priori [ 1 , 14 , 27 , 37 , 38 , 39 ]. Sample size can only be a useful quality indicator when related to the research purpose, the chosen methodology and the composition of the sample, i.e. who was included and why.

Randomisation

While some authors argue that randomisation can be used in qualitative research, this is not commonly the case, as neither its feasibility nor its necessity or usefulness has been convincingly established for qualitative research [ 13 , 27 ]. Relevant disadvantages include the negative impact of an overly large sample size as well as the possibility (or probability) of selecting “quiet, uncooperative or inarticulate individuals” [ 17 ]. Qualitative studies do not use control groups, either.

Interrater reliability, variability and other “objectivity checks”

The concept of “interrater reliability” is sometimes used in qualitative research to assess the extent to which the coding of two co-coders overlaps. However, it is not clear what this measure tells us about the quality of the analysis [ 23 ]. This means that these scores can be included in qualitative research reports, preferably with some additional information on what the score means for the analysis, but they are not a requirement. Relatedly, it is not relevant for the quality or “objectivity” of qualitative research to separate the people who recruited the study participants from those who collected and analysed the data. Experience even shows that it might be better to have the same person or team perform all of these tasks [ 20 ]. First, when researchers introduce themselves during recruitment, this can enhance trust when the interview takes place days or weeks later with the same researcher. Second, when the audio recording is transcribed for analysis, the researcher who conducted the interviews will usually remember the interviewee and the specific interview situation during data analysis. This can provide additional context for the interpretation of the data, e.g. on whether something might have been meant as a joke [ 18 ].
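For readers who do encounter such scores, interrater reliability between two coders is commonly operationalised as Cohen's kappa: observed agreement corrected for the agreement expected by chance. The sketch below is illustrative only (the code labels are invented, and it assumes exactly one code per segment and imperfect chance agreement); it is not a method prescribed by the article.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two coders assigning one code per segment."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # observed agreement: share of segments coded identically
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # chance agreement: from each coder's marginal code frequencies
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

a = ["delay", "delay", "stress", "teamwork", "delay", "stress"]
b = ["delay", "stress", "stress", "teamwork", "delay", "delay"]
print(round(cohens_kappa(a, b), 2))  # → 0.45
```

A kappa of 0.45 would conventionally be read as moderate agreement, but, as argued above, such a number says little by itself about the quality of a qualitative analysis.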

Not being quantitative research

Being qualitative research instead of quantitative research should not be used as an assessment criterion if it is used irrespectively of the research problem at hand. Similarly, qualitative research should not be required to be combined with quantitative research per se – unless mixed methods research is judged as inherently better than single-method research. In this case, the same criterion should be applied for quantitative studies without a qualitative component.

The main take-away points of this paper are summarised in Table 1 . We aimed to show that, if conducted well, qualitative research can answer specific research questions that cannot be adequately answered using (only) quantitative designs. Seeing qualitative and quantitative methods as equal will help us become more aware and critical of the “fit” between the research problem and our chosen methods: I can conduct an RCT to determine the reasons for transportation delays of acute stroke patients – but should I? It also provides us with a greater range of tools to tackle a greater range of research problems more appropriately and successfully, filling in the blind spots on one half of the methodological spectrum to better address the whole complexity of neurological research and practice.

Availability of data and materials

Not applicable.

Abbreviations

EVT: Endovascular treatment

RCT: Randomised Controlled Trial

SOP: Standard Operating Procedure

SRQR: Standards for Reporting Qualitative Research

References

1. Philipsen, H., & Vernooij-Dassen, M. (2007). Kwalitatief onderzoek: nuttig, onmisbaar en uitdagend. [Qualitative research: useful, indispensable and challenging]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk [Qualitative research: Practical methods for medical practice] (pp. 5–12). Houten: Bohn Stafleu van Loghum.

2. Punch, K. F. (2013). Introduction to social research: Quantitative and qualitative approaches. London: Sage.

3. Kelly, J., Dwyer, J., Willis, E., & Pekarsky, B. (2014). Travelling to the city for hospital care: Access factors in country aboriginal patient journeys. Australian Journal of Rural Health, 22(3), 109–113.

4. Nilsen, P., Ståhl, C., Roback, K., & Cairney, P. (2013). Never the twain shall meet? A comparison of implementation science and policy implementation research. Implementation Science, 8(1), 1–12.

5. Howick, J., Chalmers, I., Glasziou, P., Greenhalgh, T., Heneghan, C., Liberati, A., Moschetti, I., Phillips, B., & Thornton, H. (2011). The 2011 Oxford CEBM levels of evidence (introductory document). Oxford Center for Evidence Based Medicine. https://www.cebm.net/2011/06/2011-oxford-cebm-levels-evidence-introductory-document/

6. Eakin, J. M. (2016). Educating critical qualitative health researchers in the land of the randomized controlled trial. Qualitative Inquiry, 22(2), 107–118.

7. May, A., & Mathijssen, J. (2015). Alternatieven voor RCT bij de evaluatie van effectiviteit van interventies!? Eindrapportage. [Alternatives for RCTs in the evaluation of effectiveness of interventions!? Final report].

8. Berwick, D. M. (2008). The science of improvement. Journal of the American Medical Association, 299(10), 1182–1184.

9. Christ, T. W. (2014). Scientific-based research and randomized controlled trials, the “gold” standard? Alternative paradigms and mixed methodologies. Qualitative Inquiry, 20(1), 72–80.

10. Lamont, T., Barber, N., Jd, P., Fulop, N., Garfield-Birkbeck, S., Lilford, R., Mear, L., Raine, R., & Fitzpatrick, R. (2016). New approaches to evaluating complex health and care systems. BMJ, 352:i154.

11. Drabble, S. J., & O’Cathain, A. (2015). Moving from randomized controlled trials to mixed methods intervention evaluation. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry (pp. 406–425). London: Oxford University Press.

12. Chambers, D. A., Glasgow, R. E., & Stange, K. C. (2013). The dynamic sustainability framework: Addressing the paradox of sustainment amid ongoing change. Implementation Science, 8, 117.

13. Hak, T. (2007). Waarnemingsmethoden in kwalitatief onderzoek. [Observation methods in qualitative research]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 13–25). Houten: Bohn Stafleu van Loghum.

14. Russell, C. K., & Gregory, D. M. (2003). Evaluation of qualitative research studies. Evidence Based Nursing, 6(2), 36–40.

15. Fossey, E., Harvey, C., McDermott, F., & Davidson, L. (2002). Understanding and evaluating qualitative research. Australian and New Zealand Journal of Psychiatry, 36, 717–732.

16. Yanow, D. (2000). Conducting interpretive policy analysis (Vol. 47). Thousand Oaks: Sage University Papers Series on Qualitative Research Methods.

17. Shenton, A. K. (2004). Strategies for ensuring trustworthiness in qualitative research projects. Education for Information, 22, 63–75.

18. van der Geest, S. (2006). Participeren in ziekte en zorg: meer over kwalitatief onderzoek. [Participating in illness and care: more about qualitative research]. Huisarts en Wetenschap, 49(4), 283–287.

19. Hijmans, E., & Kuyper, M. (2007). Het halfopen interview als onderzoeksmethode. [The half-open interview as research method]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 43–51). Houten: Bohn Stafleu van Loghum.

20. Jansen, H. (2007). Systematiek en toepassing van de kwalitatieve survey. [Systematics and implementation of the qualitative survey]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 27–41). Houten: Bohn Stafleu van Loghum.

21. Pv, R., & Peremans, L. (2007). Exploreren met focusgroepgesprekken: de ‘stem’ van de groep onder de loep. [Exploring with focus group conversations: the “voice” of the group under the magnifying glass]. In L. PLBJ & H. TCo (Eds.), Kwalitatief onderzoek: Praktische methoden voor de medische praktijk (pp. 53–64). Houten: Bohn Stafleu van Loghum.

22. Carter, N., Bryant-Lukosius, D., DiCenso, A., Blythe, J., & Neville, A. J. (2014). The use of triangulation in qualitative research. Oncology Nursing Forum, 41(5), 545–547.

23. Boeije, H. (2012). Analyseren in kwalitatief onderzoek: Denken en doen. [Analysis in qualitative research: Thinking and doing]. Den Haag: Boom Lemma uitgevers.

24. Hunter, A., & Brewer, J. (2015). Designing multimethod research. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford Handbook of Multimethod and Mixed Methods Research Inquiry (pp. 185–205). London: Oxford University Press.

25. Archibald, M. M., Radil, A. I., Zhang, X., & Hanson, W. E. (2015). Current mixed methods practices in qualitative research: A content analysis of leading journals. International Journal of Qualitative Methods, 14(2), 5–33.

26. Creswell, J. W., & Plano Clark, V. L. (2011). Choosing a mixed methods design. In Designing and Conducting Mixed Methods Research. Thousand Oaks: SAGE Publications.

27. Mays, N., & Pope, C. (2000). Assessing quality in qualitative research. BMJ, 320(7226), 50–52.

28. O’Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine, 89(9), 1245–1251.

29. Saunders, B., Sim, J., Kingstone, T., Baker, S., Waterfield, J., Bartlam, B., Burroughs, H., & Jinks, C. (2018). Saturation in qualitative research: Exploring its conceptualization and operationalization. Quality and Quantity, 52(4), 1893–1907.

30. Moser, A., & Korstjens, I. (2018). Series: Practical guidance to qualitative research. Part 3: Sampling, data collection and analysis. European Journal of General Practice, 24(1), 9–18.

31. Marlett, N., Shklarov, S., Marshall, D., Santana, M. J., & Wasylak, T. (2015). Building new roles and relationships in research: A model of patient engagement research. Quality of Life Research, 24(5), 1057–1067.

32. Demian, M. N., Lam, N. N., Mac-Way, F., Sapir-Pichhadze, R., & Fernandez, N. (2017). Opportunities for engaging patients in kidney research. Canadian Journal of Kidney Health and Disease, 4, 2054358117703070.

33. Noyes, J., McLaughlin, L., Morgan, K., Roberts, A., Stephens, M., Bourne, J., Houlston, M., Houlston, J., Thomas, S., Rhys, R. G., et al. (2019). Designing a co-productive study to overcome known methodological challenges in organ donation research with bereaved family members. Health Expectations, 22(4), 824–835.

34. Piil, K., Jarden, M., & Pii, K. H. (2019). Research agenda for life-threatening cancer. European Journal of Cancer Care (Engl), 28(1), e12935.

35. Hofmann, D., Ibrahim, F., Rose, D., Scott, D. L., Cope, A., Wykes, T., & Lempp, H. (2015). Expectations of new treatment in rheumatoid arthritis: Developing a patient-generated questionnaire. Health Expectations, 18(5), 995–1008.

36. Jun, M., Manns, B., Laupacis, A., Manns, L., Rehal, B., Crowe, S., & Hemmelgarn, B. R. (2015). Assessing the extent to which current clinical research is consistent with patient priorities: A scoping review using a case study in patients on or nearing dialysis. Canadian Journal of Kidney Health and Disease, 2, 35.

37. Elsie Baker, S., & Edwards, R. (2012). How many qualitative interviews is enough? National Centre for Research Methods Review Paper. http://eprints.ncrm.ac.uk/2273/4/how_many_interviews.pdf

38. Sandelowski, M. (1995). Sample size in qualitative research. Research in Nursing & Health, 18(2), 179–183.

39. Sim, J., Saunders, B., Waterfield, J., & Kingstone, T. (2018). Can sample size in qualitative research be determined a priori? International Journal of Social Research Methodology, 21(5), 619–634.


Acknowledgements

No external funding.

Author information

Authors and affiliations.

Department of Neurology, Heidelberg University Hospital, Im Neuenheimer Feld 400, 69120, Heidelberg, Germany

Loraine Busetto, Wolfgang Wick & Christoph Gumbinger

Clinical Cooperation Unit Neuro-Oncology, German Cancer Research Center, Heidelberg, Germany

Wolfgang Wick


Contributions

LB drafted the manuscript; WW and CG revised the manuscript; all authors approved the final versions.

Corresponding author

Correspondence to Loraine Busetto .

Ethics declarations

Ethics approval and consent to participate, consent for publication, competing interests.

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Busetto, L., Wick, W. & Gumbinger, C. How to use and assess qualitative research methods. Neurol. Res. Pract. 2 , 14 (2020). https://doi.org/10.1186/s42466-020-00059-z


Received : 30 January 2020

Accepted : 22 April 2020

Published : 27 May 2020

DOI : https://doi.org/10.1186/s42466-020-00059-z


Keywords:

  • Qualitative research
  • Mixed methods
  • Quality assessment

Neurological Research and Practice

ISSN: 2524-3489



5 qualitative data analysis methods

Qualitative data uncovers valuable insights that help you improve the user and customer experience. But how exactly do you measure and analyze data that isn't quantifiable?

There are different qualitative data analysis methods to help you make sense of qualitative feedback and customer insights, depending on your business goals and the type of data you've collected.

Before you choose a qualitative data analysis method for your team, you need to consider the available techniques and explore their use cases to understand how each process might help you better understand your users. 

This guide covers five qualitative analysis methods to choose from, and will help you pick the right one(s) based on your goals. 

Content analysis

Thematic analysis

Narrative analysis

Grounded theory analysis

Discourse analysis

5 qualitative data analysis methods explained

Qualitative data analysis ( QDA ) is the process of organizing, analyzing, and interpreting qualitative research data—non-numeric, conceptual information, and user feedback—to capture themes and patterns, answer research questions, and identify actions to improve your product or website.

Step 1 in the research process (after planning) is qualitative data collection. You can use behavior analytics software—like Hotjar—to capture qualitative data with context, and learn the real motivation behind user behavior, by collecting written customer feedback with Surveys or scheduling an in-depth user interview with Engage.

Use Hotjar’s tools to collect feedback, uncover behavior trends, and understand the ‘why’ behind user actions.

1. Content analysis

Content analysis is a qualitative research method that examines and quantifies the presence of certain words, subjects, and concepts in text, image, video, or audio messages. The method transforms qualitative input into quantitative data to help you make reliable conclusions about what customers think of your brand, and how you can improve their experience and opinion.

Conduct content analysis manually (which can be time-consuming) or use analysis tools like Lexalytics to reveal communication patterns, uncover differences in individual or group communication trends, and make broader connections between concepts.
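To make the "transforms qualitative input into quantitative data" idea concrete, here is a minimal manual sketch of content analysis. The survey responses and the code-to-concept mapping are invented for illustration; real studies would use a validated coding frame or a tool like those mentioned above.

```python
from collections import Counter
import re

# Hypothetical open-ended survey answers
responses = [
    "The checkout was slow and the search was confusing.",
    "Great search, but checkout felt slow.",
    "Slow loading overall; support was helpful though.",
]

# A simple manual coding frame: word of interest -> concept it signals
concepts = {"slow": "performance", "checkout": "checkout",
            "search": "search", "support": "support"}

# Count how often each concept appears across all responses
counts = Counter()
for text in responses:
    for word in re.findall(r"[a-z]+", text.lower()):
        if word in concepts:
            counts[concepts[word]] += 1

print(counts.most_common())
# → [('performance', 3), ('checkout', 2), ('search', 2), ('support', 1)]
```

The output is now quantitative ("performance complaints dominate"), which is exactly the conversion that lets teams draw frequency-based conclusions from free-text feedback.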

Benefits and challenges of using content analysis

How content analysis can help your team

Content analysis is often used by marketers and customer service specialists, helping them understand customer behavior and measure brand reputation.

For example, you may run a customer survey with open-ended questions to discover users’ concerns—in their own words—about their experience with your product. Instead of having to process hundreds of answers manually, a content analysis tool helps you analyze and group results based on the emotion expressed in texts.

Some other examples of content analysis include:

Analyzing brand mentions on social media to understand your brand's reputation

Reviewing customer feedback to evaluate (and then improve) the customer and user experience (UX)

Researching competitors’ website pages to identify their competitive advantages and value propositions

Interpreting customer interviews and survey results to determine user preferences, and setting the direction for new product or feature developments

Content analysis was a major part of our growth during my time at Hypercontext.

[It gave us] a better understanding of the [blog] topics that performed best for signing new users up. We were also able to go deeper within those blog posts to better understand the formats [that worked].

2. Thematic analysis

Thematic analysis helps you identify, categorize, analyze, and interpret patterns in qualitative study data , and can be done with tools like Dovetail and Thematic .

While content analysis and thematic analysis seem similar, they're different in concept: 

Content analysis can be applied to both qualitative and quantitative data , and focuses on identifying frequencies and recurring words and subjects

Thematic analysis can only be applied to qualitative data, and focuses on identifying patterns and themes

The benefits and drawbacks of thematic analysis

How thematic analysis can help your team

Thematic analysis can be used by pretty much anyone: from product marketers, to customer relationship managers, to UX researchers.

For example, product teams use thematic analysis to better understand user behaviors and needs and improve UX . Analyzing customer feedback lets you identify themes (e.g. poor navigation or a buggy mobile interface) highlighted by users and get actionable insight into what they really expect from the product. 
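The theme-identification step can be sketched as rolling individual feedback codes up into broader themes. The code-to-theme mapping and the feedback quotes below are hypothetical; in practice the codebook is built iteratively by researchers, not hard-coded.

```python
# Sketch: roll individual feedback codes up into broader themes.
# The code-to-theme mapping is a hypothetical, manually built codebook.
theme_of = {
    "broken_link": "navigation", "menu_confusing": "navigation",
    "app_crash": "mobile bugs", "button_unresponsive": "mobile bugs",
}

coded_feedback = [
    ("I can't find the pricing page", "menu_confusing"),
    ("App closes when I tap pay", "app_crash"),
    ("Footer links go nowhere", "broken_link"),
]

# Group the original quotes under each theme
themes = {}
for quote, code in coded_feedback:
    themes.setdefault(theme_of[code], []).append(quote)

for theme, quotes in sorted(themes.items()):
    print(f"{theme} ({len(quotes)}): {quotes}")
```

Grouping quotes under themes, rather than just counting words, is what distinguishes thematic analysis from the frequency-oriented content analysis described earlier.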

💡 Pro tip: looking for a way to expedite the data analysis process for large amounts of data you collected with a survey? Try Hotjar’s AI for Surveys : along with generating a survey based on your goal in seconds, our AI will analyze the raw data and prepare an automated summary report that presents key thematic findings, respondent quotes, and actionable steps to take, making the analysis of qualitative data a breeze.

3. Narrative analysis

Narrative analysis is a method used to interpret research participants’ stories —things like testimonials , case studies, focus groups, interviews, and other text or visual data—with tools like Delve and AI-powered ATLAS.ti .

Some formats don’t work well with narrative analysis, including heavily structured interviews and written surveys, which don’t give participants as much opportunity to tell their stories in their own words.

Benefits and challenges of narrative analysis

How narrative analysis can help your team

Narrative analysis provides product teams with valuable insight into the complexity of customers’ lives, feelings, and behaviors.

In a marketing research context, narrative analysis involves capturing and reviewing customer stories—on social media, for example—to get in-depth insight into their lives, priorities, and challenges. 

This might look like analyzing daily content shared by your audiences’ favorite influencers on Instagram, or analyzing customer reviews on sites like G2 or Capterra to gain a deep understanding of individual customer experiences. The results of this analysis also contribute to developing corresponding customer personas .

💡 Pro tip: conducting user interviews is an excellent way to collect data for narrative analysis. Though interviews can be time-intensive, there are tools out there that streamline the workload. 

Hotjar Engage automates the entire process, from recruiting to scheduling to generating the all-important interview transcripts you’ll need for the analysis phase of your research project.

4. Grounded theory analysis

Grounded theory analysis is a method of conducting qualitative research to develop theories by examining real-world data. This technique involves the creation of hypotheses and theories through qualitative data collection and evaluation, and can be performed with qualitative data analysis software tools like MAXQDA and NVivo .

Unlike other qualitative data analysis techniques, this method is inductive rather than deductive: it develops theories from data, not the other way around.

The benefits and challenges of grounded theory analysis

How grounded theory analysis can help your team

Grounded theory analysis is used by software engineers, product marketers, managers, and other specialists who deal with data sets to make informed business decisions. 

For example, product marketing teams may turn to customer surveys to understand the reasons behind high churn rates , then use grounded theory to analyze responses and develop hypotheses about why users churn, and how you can get them to stay. 

Grounded theory can also be helpful in the talent management process. For example, HR representatives may use it to develop theories about low employee engagement, and come up with solutions based on their research findings.

5. Discourse analysis

Discourse analysis is the act of researching the underlying meaning of qualitative data. It involves the observation of texts, audio, and videos to study the relationships between information and its social context.

In contrast to content analysis, this method focuses on the contextual meaning of language: discourse analysis sheds light on what audiences think of a topic, and why they feel the way they do about it.

Benefits and challenges of discourse analysis

How discourse analysis can help your team

In a business context, this method is primarily used by marketing teams. Discourse analysis helps marketers understand the norms and ideas in their market , and reveals why they play such a significant role for their customers. 

Once the origins of trends are uncovered, it’s easier to develop a company mission, create a unique tone of voice, and craft effective marketing messages.

Which qualitative data analysis method should you choose?

While the five qualitative data analysis methods we list above are all aimed at processing data and answering research questions, these techniques differ in their intent and the approaches applied.  

Choosing the right analysis method for your team isn't a matter of preference—selecting a method that fits is only possible once you define your research goals and have a clear intention. When you know what you need (and why you need it), you can identify an analysis method that aligns with your research objectives.

Gather qualitative data with Hotjar

Use Hotjar’s product experience insights in your qualitative research. Collect feedback, uncover behavior trends, and understand the ‘why’ behind user actions.

FAQs about qualitative data analysis methods

What is the qualitative data analysis approach?

The qualitative data analysis approach refers to the process of systematizing descriptive data collected through interviews, focus groups, surveys, and observations, and then interpreting it. The aim is to identify patterns and themes in textual and other non-numerical data, as opposed to numerical data.

What are qualitative data analysis methods?

Five popular qualitative data analysis methods are:

What is the process of qualitative data analysis?

The process of qualitative data analysis includes six steps:

Define your research question

Prepare the data

Choose the method of qualitative analysis

Code the data

Identify themes, patterns, and relationships

Make hypotheses and act
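The coding and theme-identification steps (4 and 5) can be sketched in a few lines of Python. Everything here is hypothetical: the codebook, its keywords, and the survey responses are invented purely for illustration, not taken from any real study.

```python
from collections import Counter

# Hypothetical codebook: keywords mapped to analyst-defined codes
CODEBOOK = {
    "price": "cost",
    "expensive": "cost",
    "slow": "performance",
    "crash": "reliability",
    "support": "service",
}

# Hypothetical open-ended survey responses
responses = [
    "The app is too expensive and support never replies.",
    "It keeps crashing, I got no support.",
    "Price is fine but the app feels slow.",
]

def code_response(text):
    """Return the set of codes whose keywords appear in the response."""
    text = text.lower()
    return {code for kw, code in CODEBOOK.items() if kw in text}

# Step 4: code the data; Step 5: tally the codes to surface candidate themes
code_counts = Counter()
for r in responses:
    code_counts.update(code_response(r))

print(code_counts.most_common())
```

In real analysis the codebook is built iteratively from the data rather than fixed up front, but the mechanics of coding and tallying follow the same shape.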



Qualitative Data – Types, Methods and Examples

Qualitative Data

Definition:

Qualitative data is a type of data that is collected and analyzed in a non-numerical form, such as words, images, or observations. It is generally used to gain an in-depth understanding of complex phenomena, such as human behavior, attitudes, and beliefs.

Types of Qualitative Data

There are various types of qualitative data that can be collected and analyzed, including:

  • Interviews : These involve in-depth, face-to-face conversations with individuals or groups to gather their perspectives, experiences, and opinions on a particular topic.
  • Focus Groups: These are group discussions where a facilitator leads a discussion on a specific topic, allowing participants to share their views and experiences.
  • Observations : These involve observing and recording the behavior and interactions of individuals or groups in a particular setting.
  • Case Studies: These involve in-depth analysis of a particular individual, group, or organization, usually over an extended period.
  • Document Analysis : This involves examining written or recorded materials, such as newspaper articles, diaries, or public records, to gain insight into a particular topic.
  • Visual Data : This involves analyzing images or videos to understand people’s experiences or perspectives on a particular topic.
  • Online Data: This involves analyzing data collected from social media platforms, forums, or online communities to understand people’s views and opinions on a particular topic.

Qualitative Data Formats

Qualitative data can be collected and presented in various formats. Some common formats include:

  • Textual data: This includes written or transcribed data from interviews, focus groups, or observations. It can be analyzed using various techniques such as thematic analysis or content analysis.
  • Audio data: This includes recordings of interviews or focus groups, which can be transcribed and analyzed using software such as NVivo.
  • Visual data: This includes photographs, videos, or drawings, which can be analyzed using techniques such as visual analysis or semiotics.
  • Mixed media data : This includes data collected in different formats, such as audio and text. This can be analyzed using mixed methods research, which combines both qualitative and quantitative research methods.
  • Field notes: These are notes taken by researchers during observations, which can include descriptions of the setting, behaviors, and interactions of participants.

Qualitative Data Analysis Methods

Qualitative data analysis refers to the process of systematically analyzing and interpreting qualitative data to identify patterns, themes, and relationships. Here are some common methods of analyzing qualitative data:

  • Thematic analysis: This involves identifying and analyzing patterns or themes within the data. It involves coding the data into themes and subthemes and organizing them into a coherent narrative.
  • Content analysis: This involves analyzing the content of the data, such as the words, phrases, or images used. It involves identifying patterns and themes in the data and examining the relationships between them.
  • Discourse analysis: This involves analyzing the language and communication used in the data, such as the meaning behind certain words or phrases. It involves examining how the language constructs and shapes social reality.
  • Grounded theory: This involves developing a theory or framework based on the data. It involves identifying patterns and themes in the data and using them to develop a theory that explains the phenomenon being studied.
  • Narrative analysis : This involves analyzing the stories and narratives present in the data. It involves examining how the stories are constructed and how they contribute to the overall understanding of the phenomenon being studied.
  • Ethnographic analysis : This involves analyzing the culture and social practices present in the data. It involves examining how the cultural and social practices contribute to the phenomenon being studied.
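As a minimal illustration of the mechanics behind content analysis (tallying the words used in a text), here is a word-frequency sketch; the interview excerpt and stopword list are invented for the example.

```python
import re
from collections import Counter

# Hypothetical interview excerpt (illustrative only)
transcript = (
    "I trust the team. Trust takes time, and time is what "
    "we never have. The team knows that."
)

# Function words to exclude from the tally
STOPWORDS = {"i", "the", "and", "is", "what", "we", "that", "a"}

# Tokenize, lowercase, and count the remaining content words
words = re.findall(r"[a-z']+", transcript.lower())
freq = Counter(w for w in words if w not in STOPWORDS)

print(freq.most_common(3))
```

Frequency counts like these are only the starting point; content analysis then examines how the recurring terms relate to each other and to the research question.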

Qualitative Data Collection Guide

Here are some steps to guide the collection of qualitative data:

  • Define the research question : Start by clearly defining the research question that you want to answer. This will guide the selection of data collection methods and help to ensure that the data collected is relevant to the research question.
  • Choose data collection methods : Select the most appropriate data collection methods based on the research question, the research design, and the resources available. Common methods include interviews, focus groups, observations, document analysis, and participatory research.
  • Develop a data collection plan : Develop a plan for data collection that outlines the specific procedures, timelines, and resources needed for each data collection method. This plan should include details such as how to recruit participants, how to conduct interviews or focus groups, and how to record and store data.
  • Obtain ethical approval : Obtain ethical approval from an institutional review board or ethics committee before beginning data collection. This is particularly important when working with human participants to ensure that their rights and interests are protected.
  • Recruit participants: Recruit participants based on the research question and the data collection methods chosen. This may involve purposive sampling, snowball sampling, or random sampling.
  • Collect data: Collect data using the chosen data collection methods. This may involve conducting interviews, facilitating focus groups, observing participants, or analyzing documents.
  • Transcribe and store data : Transcribe and store the data in a secure location. This may involve transcribing audio or video recordings, organizing field notes, or scanning documents.
  • Analyze data: Analyze the data using appropriate qualitative data analysis methods, such as thematic analysis or content analysis.
  • Interpret findings: Interpret the findings of the data analysis in the context of the research question and the relevant literature. This may involve developing new theories or frameworks, or validating existing ones.
  • Communicate results: Communicate the results of the research in a clear and concise manner, using appropriate language and visual aids where necessary. This may involve writing a report, presenting at a conference, or publishing in a peer-reviewed journal.

Qualitative Data Examples

Some examples of qualitative data in different fields are as follows:

  • Sociology : In sociology, qualitative data is used to study social phenomena such as culture, norms, and social relationships. For example, a researcher might conduct interviews with members of a community to understand their beliefs and practices.
  • Psychology : In psychology, qualitative data is used to study human behavior, emotions, and attitudes. For example, a researcher might conduct a focus group to explore how individuals with anxiety cope with their symptoms.
  • Education : In education, qualitative data is used to study learning processes and educational outcomes. For example, a researcher might conduct observations in a classroom to understand how students interact with each other and with their teacher.
  • Marketing : In marketing, qualitative data is used to understand consumer behavior and preferences. For example, a researcher might conduct in-depth interviews with customers to understand their purchasing decisions.
  • Anthropology : In anthropology, qualitative data is used to study human cultures and societies. For example, a researcher might conduct participant observation in a remote community to understand their customs and traditions.
  • Health Sciences: In health sciences, qualitative data is used to study patient experiences, beliefs, and preferences. For example, a researcher might conduct interviews with cancer patients to understand how they cope with their illness.

Application of Qualitative Data

Qualitative data is used in a variety of fields and has numerous applications. Here are some common applications of qualitative data:

  • Exploratory research: Qualitative data is often used in exploratory research to understand a new or unfamiliar topic. Researchers use qualitative data to generate hypotheses and develop a deeper understanding of the research question.
  • Evaluation: Qualitative data is often used to evaluate programs or interventions. Researchers use qualitative data to understand the impact of a program or intervention on the people who participate in it.
  • Needs assessment: Qualitative data is often used in needs assessments to understand the needs of a specific population. Researchers use qualitative data to identify the most pressing needs of the population and develop strategies to address those needs.
  • Case studies: Qualitative data is often used in case studies to understand a particular case in detail. Researchers use qualitative data to understand the context, experiences, and perspectives of the people involved in the case.
  • Market research: Qualitative data is often used in market research to understand consumer behavior and preferences. Researchers use qualitative data to gain insights into consumer attitudes, opinions, and motivations.
  • Social and cultural research : Qualitative data is often used in social and cultural research to understand social phenomena such as culture, norms, and social relationships. Researchers use qualitative data to understand the experiences, beliefs, and practices of individuals and communities.

Purpose of Qualitative Data

The purpose of qualitative data is to gain a deeper understanding of social phenomena that cannot be captured by numerical or quantitative data. Qualitative data is collected through methods such as observation, interviews, and focus groups, and it provides descriptive information that can shed light on people’s experiences, beliefs, attitudes, and behaviors.

Qualitative data serves several purposes, including:

  • Generating hypotheses: Qualitative data can be used to generate hypotheses about social phenomena that can be further tested with quantitative data.
  • Providing context : Qualitative data provides a rich and detailed context for understanding social phenomena that cannot be captured by numerical data alone.
  • Exploring complex phenomena : Qualitative data can be used to explore complex phenomena such as culture, social relationships, and the experiences of marginalized groups.
  • Evaluating programs and interventions: Qualitative data can be used to evaluate the impact of programs and interventions on the people who participate in them.
  • Enhancing understanding: Qualitative data can be used to enhance understanding of the experiences, beliefs, and attitudes of individuals and communities, which can inform policy and practice.

When to use Qualitative Data

Qualitative data is appropriate when the research question requires an in-depth understanding of complex social phenomena that cannot be captured by numerical or quantitative data.

Here are some situations when qualitative data is appropriate:

  • Exploratory research : Qualitative data is often used in exploratory research to generate hypotheses and develop a deeper understanding of a research question.
  • Understanding social phenomena : Qualitative data is appropriate when the research question requires an in-depth understanding of social phenomena such as culture, social relationships, and experiences of marginalized groups.
  • Program evaluation: Qualitative data is often used in program evaluation to understand the impact of a program on the people who participate in it.
  • Needs assessment: Qualitative data is often used in needs assessments to understand the needs of a specific population.
  • Market research: Qualitative data is often used in market research to understand consumer behavior and preferences.
  • Case studies: Qualitative data is often used in case studies to understand a particular case in detail.

Characteristics of Qualitative Data

Here are some characteristics of qualitative data:

  • Descriptive : Qualitative data provides a rich and detailed description of the social phenomena under investigation.
  • Contextual : Qualitative data is collected in the context in which the social phenomena occur, which allows for a deeper understanding of the phenomena.
  • Subjective : Qualitative data reflects the subjective experiences, beliefs, attitudes, and behaviors of the individuals and communities under investigation.
  • Flexible : Qualitative data collection methods are flexible and can be adapted to the specific needs of the research question.
  • Emergent : Qualitative data analysis is often an iterative process, where new themes and patterns emerge as the data is analyzed.
  • Interpretive : Qualitative data analysis involves interpretation of the data, which requires the researcher to be reflexive and aware of their own biases and assumptions.
  • Non-standardized: Qualitative data collection methods are often non-standardized, which means that the data is not collected in a standardized or uniform way.

Advantages of Qualitative Data

Some advantages of qualitative data are as follows:

  • Richness : Qualitative data provides a rich and detailed description of the social phenomena under investigation, allowing for a deeper understanding of the phenomena.
  • Flexibility : Qualitative data collection methods are flexible and can be adapted to the specific needs of the research question, allowing for a more nuanced exploration of social phenomena.
  • Contextualization : Qualitative data is collected in the context in which the social phenomena occur, which allows for a deeper understanding of the phenomena and their cultural and social context.
  • Subjectivity : Qualitative data reflects the subjective experiences, beliefs, attitudes, and behaviors of the individuals and communities under investigation, allowing for a more holistic understanding of the phenomena.
  • New insights : Qualitative data can generate new insights and hypotheses that can be further tested with quantitative data.
  • Participant voice : Qualitative data collection methods often involve direct participation by the individuals and communities under investigation, allowing for their voices to be heard.
  • Ethical considerations: Qualitative data collection methods often prioritize ethical considerations such as informed consent, confidentiality, and respect for the autonomy of the participants.

Limitations of Qualitative Data

Here are some limitations of qualitative data:

  • Subjectivity : Qualitative data is subjective, and the interpretation of the data depends on the researcher’s own biases, assumptions, and perspectives.
  • Small sample size: Qualitative data collection methods often involve a small sample size, which limits the generalizability of the findings.
  • Time-consuming: Qualitative data collection and analysis can be time-consuming, as it requires in-depth engagement with the data and often involves iterative processes.
  • Limited statistical analysis: Qualitative data is often not suitable for statistical analysis, which limits the ability to draw quantitative conclusions from the data.
  • Limited comparability: Qualitative data collection methods are often non-standardized, which makes it difficult to compare findings across different studies or contexts.
  • Social desirability bias : Qualitative data collection methods often rely on self-reporting by the participants, which can be influenced by social desirability bias.
  • Researcher bias: The researcher’s own biases, assumptions, and perspectives can influence the data collection and analysis, which can limit the objectivity of the findings.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer


Qualitative vs Quantitative Research Methods & Data Analysis

Saul Mcleod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul Mcleod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


What is the difference between quantitative and qualitative?

The main difference between quantitative and qualitative research is the type of data they collect and analyze.

Quantitative research collects numerical data and analyzes it using statistical methods. The aim is to produce objective, empirical data that can be measured and expressed in numerical terms. Quantitative research is often used to test hypotheses, identify patterns, and make predictions.

Qualitative research, on the other hand, collects non-numerical data such as words, images, and sounds. The focus is on exploring subjective experiences, opinions, and attitudes, often through observation and interviews.

Qualitative research aims to produce rich and detailed descriptions of the phenomenon being studied, and to uncover new insights and meanings.

Quantitative data is information about quantities, and therefore numbers; qualitative data is descriptive, and refers to phenomena that can be observed but not measured, such as language.

What Is Qualitative Research?

Qualitative research is the process of collecting, analyzing, and interpreting non-numerical data, such as language. Qualitative research can be used to understand how an individual subjectively perceives and gives meaning to their social reality.

Qualitative data is non-numerical data, such as text, video, photographs, or audio recordings. This type of data can be collected using diary accounts or in-depth interviews and analyzed using grounded theory or thematic analysis.

“Qualitative research is multimethod in focus, involving an interpretive, naturalistic approach to its subject matter. This means that qualitative researchers study things in their natural settings, attempting to make sense of, or interpret, phenomena in terms of the meanings people bring to them” (Denzin & Lincoln, 1994, p. 2).

Interest in qualitative data came about as a result of the dissatisfaction of some psychologists (e.g., Carl Rogers) with the scientific approach taken by psychologists such as the behaviorists (e.g., Skinner).

Since psychologists study people, the traditional approach to science is not seen as an appropriate way of carrying out research since it fails to capture the totality of human experience and the essence of being human.  Exploring participants’ experiences is known as a phenomenological approach (re: Humanism ).

Qualitative research is primarily concerned with meaning, subjectivity, and lived experience. The goal is to understand the quality and texture of people’s experiences, how they make sense of them, and the implications for their lives.

Qualitative research aims to understand the social reality of individuals, groups, and cultures as nearly as possible as participants feel or live it. Thus, people and groups are studied in their natural setting.

Typical qualitative research questions ask what an experience feels like, how people talk about something, how they make sense of an experience, and how events unfold for people.

Research following a qualitative approach is exploratory and seeks to explain ‘how’ and ‘why’ a particular phenomenon, or behavior, operates as it does in a particular context. It can be used to generate hypotheses and theories from the data.

Qualitative Methods

There are different types of qualitative research methods, including diary accounts, in-depth interviews , documents, focus groups , case study research , and ethnography.

The results of qualitative methods provide a deep understanding of how people perceive their social realities and in consequence, how they act within the social world.

“The researcher has several methods for collecting empirical materials, ranging from the interview to direct observation, to the analysis of artifacts, documents, and cultural records, to the use of visual materials or personal experience” (Denzin & Lincoln, 1994, p. 14).

Here are some examples of qualitative data:

Interview transcripts : Verbatim records of what participants said during an interview or focus group. They allow researchers to identify common themes and patterns, and draw conclusions based on the data. Interview transcripts can also be useful in providing direct quotes and examples to support research findings.

Observations : The researcher typically takes detailed notes on what they observe, including any contextual information, nonverbal cues, or other relevant details. The resulting observational data can be analyzed to gain insights into social phenomena, such as human behavior, social interactions, and cultural practices.

Unstructured interviews : generate qualitative data through the use of open questions.  This allows the respondent to talk in some depth, choosing their own words.  This helps the researcher develop a real sense of a person’s understanding of a situation.

Diaries or journals : Written accounts of personal experiences or reflections.

Notice that qualitative data could be much more than just words or text. Photographs, videos, sound recordings, and so on, can be considered qualitative data. Visual data can be used to understand behaviors, environments, and social interactions.

Qualitative Data Analysis

Qualitative research is endlessly creative and interpretive. The researcher does not just leave the field with mountains of empirical data and then easily write up his or her findings.

Qualitative interpretations are constructed, and various techniques can be used to make sense of the data, such as content analysis, grounded theory (Glaser & Strauss, 1967), thematic analysis (Braun & Clarke, 2006), or discourse analysis.

For example, thematic analysis is a qualitative approach that involves identifying implicit or explicit ideas within the data. Themes will often emerge once the data has been coded.

[Figure: The thematic analysis method]

Key Features

  • Events can be understood adequately only if they are seen in context. Therefore, a qualitative researcher immerses her/himself in the field, in natural surroundings. The contexts of inquiry are not contrived; they are natural. Nothing is predefined or taken for granted.
  • Qualitative researchers want those who are studied to speak for themselves, to provide their perspectives in words and other actions. Therefore, qualitative research is an interactive process in which the persons studied teach the researcher about their lives.
  • The qualitative researcher is an integral part of the data; without the active participation of the researcher, no data exists.
  • The study’s design evolves during the research and can be adjusted or changed as it progresses. For the qualitative researcher, there is no single reality. It is subjective and exists only in reference to the observer.
  • The theory is data-driven and emerges as part of the research process, evolving from the data as they are collected.

Limitations of Qualitative Research

  • Because of the time and costs involved, qualitative designs do not generally draw samples from large-scale data sets.
  • The problem of adequate validity or reliability is a major criticism. Because of the subjective nature of qualitative data and its origin in single contexts, it is difficult to apply conventional standards of reliability and validity. For example, because of the central role played by the researcher in the generation of data, it is not possible to replicate qualitative studies.
  • Also, contexts, situations, events, conditions, and interactions cannot be replicated to any extent, nor can generalizations be made to a wider context than the one studied with confidence.
  • The time required for data collection, analysis, and interpretation is lengthy. Analysis of qualitative data is difficult, and expert knowledge of an area is necessary to interpret qualitative data. Great care must be taken when doing so, for example, when looking for symptoms of mental illness.

Advantages of Qualitative Research

  • Because of close researcher involvement, the researcher gains an insider’s view of the field. This allows the researcher to find issues that are often missed (such as subtleties and complexities) by the scientific, more positivistic inquiries.
  • Qualitative descriptions can be important in suggesting possible relationships, causes, effects, and dynamic processes.
  • Qualitative analysis allows for ambiguities/contradictions in the data, which reflect social reality (Denscombe, 2010).
  • Qualitative research uses a descriptive, narrative style; this research might be of particular benefit to the practitioner as she or he could turn to qualitative reports to examine forms of knowledge that might otherwise be unavailable, thereby gaining new insight.

What Is Quantitative Research?

Quantitative research involves the process of objectively collecting and analyzing numerical data to describe, predict, or control variables of interest.

The goals of quantitative research are to test causal relationships between variables , make predictions, and generalize results to wider populations.

Quantitative researchers aim to establish general laws of behavior and phenomenon across different settings/contexts. Research is used to test a theory and ultimately support or reject it.

Quantitative Methods

Experiments typically yield quantitative data, as they are concerned with measuring things. However, other research methods, such as controlled observations and questionnaires, can produce both quantitative and qualitative information.

For example, a rating scale or closed questions on a questionnaire would generate quantitative data as these produce either numerical data or data that can be put into categories (e.g., “yes,” “no” answers).

Experimental methods limit the ways in which research participants can react to and express appropriate social behavior.

Findings are, therefore, likely to be context-bound and simply a reflection of the assumptions that the researcher brings to the investigation.

There are numerous examples of quantitative data in psychological research, including mental health. Here are a few examples:

One example is the Experience in Close Relationships Scale (ECR), a self-report questionnaire widely used to assess adult attachment styles.

The ECR provides quantitative data that can be used to assess attachment styles and predict relationship outcomes.

Neuroimaging data : Neuroimaging techniques, such as MRI and fMRI, provide quantitative data on brain structure and function.

This data can be analyzed to identify brain regions involved in specific mental processes or disorders.

Another example is the Beck Depression Inventory (BDI), a self-report questionnaire widely used to assess the severity of depressive symptoms in individuals.

The BDI consists of 21 questions, each scored on a scale of 0 to 3, with higher scores indicating more severe depressive symptoms. 
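Scoring an instrument like this is simple arithmetic: sum the 21 item scores and map the total to a severity band. The sketch below uses invented item scores, and the cut-offs shown are commonly cited BDI-II bands that should be verified against the published manual before any real use.

```python
# Hypothetical item scores for one respondent (21 items, each scored 0-3)
item_scores = [1, 0, 2, 1, 0, 1, 2, 0, 0, 1, 1,
               0, 2, 1, 0, 1, 0, 2, 1, 0, 1]

# Sanity checks: 21 items, each within the 0-3 range
assert len(item_scores) == 21
assert all(0 <= s <= 3 for s in item_scores)

total = sum(item_scores)

# Commonly cited BDI-II severity bands (assumption; check the manual)
if total <= 13:
    severity = "minimal"
elif total <= 19:
    severity = "mild"
elif total <= 28:
    severity = "moderate"
else:
    severity = "severe"

print(total, severity)
```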

Quantitative Data Analysis

Statistics help us turn quantitative data into useful information to help with decision-making. We can use statistics to summarize our data, describing patterns, relationships, and connections. Statistics can be descriptive or inferential.

Descriptive statistics help us to summarize our data. In contrast, inferential statistics are used to identify statistically significant differences between groups of data (such as intervention and control groups in a randomized control study).
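The contrast can be shown with Python's standard library: descriptive statistics summarize each group, and an inferential statistic (here, a Welch's t statistic computed by hand) compares them. The intervention and control scores are invented for illustration, and a full analysis would also derive a p-value from the t distribution.

```python
import math
import statistics as st

# Hypothetical outcome scores from two groups in a study
intervention = [12, 15, 14, 10, 13, 16]
control = [9, 11, 8, 10, 12, 9]

# Descriptive statistics: summarize each group
m1, m2 = st.mean(intervention), st.mean(control)
s1, s2 = st.stdev(intervention), st.stdev(control)

# Inferential step: Welch's t statistic for the difference in means
n1, n2 = len(intervention), len(control)
t = (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

print(round(m1, 2), round(m2, 2), round(t, 2))
```

A larger t statistic indicates a group difference that is large relative to the variability within the groups, which is what an inferential test formalizes.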

  • Quantitative researchers try to control extraneous variables by conducting their studies in the lab.
  • The research aims for objectivity (i.e., without bias) and is separated from the data.
  • The design of the study is determined before it begins.
  • For the quantitative researcher, the reality is objective, exists separately from the researcher, and can be seen by anyone.
  • Research is used to test a theory and ultimately support or reject it.

Limitations of Quantitative Research

  • Context: Quantitative experiments do not take place in natural settings. In addition, they do not allow participants to explain their choices or the meaning of the questions they may have for those participants (Carr, 1994).
  • Researcher expertise: Poor knowledge of the application of statistical analysis may negatively affect analysis and subsequent interpretation (Black, 1999).
  • Variability of data quantity: Large sample sizes are needed for more accurate analysis. Small-scale quantitative studies may be less reliable because of the low quantity of data (Denscombe, 2010). This also affects the ability to generalize study findings to wider populations.
  • Confirmation bias: The researcher might miss observing phenomena because of a focus on testing an existing theory or hypothesis rather than on generating new hypotheses.

Advantages of Quantitative Research

  • Scientific objectivity: Quantitative data can be interpreted with statistical analysis, and since statistics are based on the principles of mathematics, the quantitative approach is viewed as scientifically objective and rational (Carr, 1994; Denscombe, 2010).
  • Useful for testing and validating already constructed theories.
  • Rapid analysis: Sophisticated software removes much of the need for prolonged data analysis, especially with large volumes of data involved (Antonius, 2003).
  • Replication: Quantitative data is based on measured values and can be checked by others because numerical data is less open to ambiguities of interpretation.
  • Hypotheses can also be tested because of statistical analysis (Antonius, 2003).

Antonius, R. (2003). Interpreting quantitative data with SPSS . Sage.

Black, T. R. (1999). Doing quantitative research in the social sciences: An integrated approach to research design, measurement and statistics . Sage.

Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.

Carr, L. T. (1994). The strengths and weaknesses of quantitative and qualitative research: What method for nursing? Journal of Advanced Nursing, 20(4), 716–721.

Denscombe, M. (2010). The good research guide: For small-scale social research. McGraw-Hill.

Denzin, N. K., & Lincoln, Y. S. (1994). Handbook of qualitative research. Sage.

Glaser, B. G., Strauss, A. L., & Strutzel, E. (1968). The discovery of grounded theory: Strategies for qualitative research. Nursing Research, 17(4), 364.

Minichiello, V. (1990). In-depth interviewing: Researching people. Longman Cheshire.

Punch, K. (1998). Introduction to social research: Quantitative and qualitative approaches. Sage.

Further Information

  • Designing qualitative research
  • Methods of data collection and analysis
  • Introduction to quantitative and qualitative research
  • Checklists for improving rigour in qualitative research: a case of the tail wagging the dog?
  • Qualitative research in health care: Analysing qualitative data
  • Qualitative data analysis: the framework approach
  • Using the framework method for the analysis of qualitative data in multi-disciplinary health research
  • Content Analysis
  • Grounded Theory
  • Thematic Analysis


  • Open access
  • Published: 25 March 2024

Barriers and enabling factors for utilizing physical rehabilitation services by Afghan immigrants and refugees with disabilities in Iran: a qualitative study

  • Elaheh Amini,
  • Manal Etemadi,
  • Saeed Shahabi,
  • Cornelia Anne Barth,
  • Farzaneh Honarmandi,
  • Marzieh Karami Rad &
  • Kamran Bagheri Lankarani

BMC Public Health, volume 24, Article number: 893 (2024)


Introduction

Individuals with a migrant background often underutilize physical rehabilitation services (PRS) compared to the host population. This disparity is attributed to various barriers, including limited access to information, language barriers, illiteracy, and cultural factors. To improve PRS utilization by Afghan immigrants and refugees in Iran, it is crucial to identify these barriers and enabling factors. In response, this study explored the barriers and enabling factors for utilizing PRS among Afghan immigrants and refugees with disabilities in Iran.

This qualitative study was conducted in Iran between January and March 2023. Participants were selected through convenient and snowball sampling. Individual, semi-structured interviews were carried out both in face-to-face and online formats. Data analysis occurred concurrently with data collection, using the directed content analysis approach.

Findings from our research indicate that common barriers to PRS utilization among Afghan immigrants and refugees include insufficient insurance coverage, high service costs, expensive transportation and accommodation, limited knowledge about Iran’s health system, inadequate awareness of available supports, restricted access to PRS in remote areas, impatience among PRS providers, fear of arrest and deportation, a lack of trust in modern treatments, stringent immigration rules, high inflation rates limiting the ability to pay for PRS, and limited social support. On the other hand, several enabling factors were identified, such as strengthening insurance coverage, utilizing the capacities of charities and NGOs, providing information about available services, promoting respectful behavior by healthcare providers towards patients, facilitating cultural integration, and increasing immigrants’ awareness of available services and eligibility criteria.

The barriers and enabling factors uncovered in this study offer valuable insights into the complexities surrounding PRS utilization by Afghan immigrants and refugees with disabilities in Iran. Understanding and addressing these factors is essential for developing targeted interventions and policies that can improve access and utilization, ultimately leading to enhanced health outcomes for this vulnerable population.


The movement of people across country borders affects the host country in various ways, influencing labor markets, productivity, innovation, demographic structure, fiscal balance, and criminality [ 1 ]. Population mobility serves as a cornerstone for meeting the labor and economic demands for human capital. It also helps mitigate the social, demographic, and economic impacts of aging populations in many economically advanced nations where increased migration is required to sustain labor markets and population growth [ 2 , 3 ]. Notably, a percentage of refugees can bring valuable skills to the labor market. The IMF estimated that 21% of Syrian asylum seekers who arrived in Germany between 2013 and 2014 reported having a tertiary education [ 4 ]. Similarly, immigrants contribute significantly to healthcare occupations in the US, making up 15.6% of healthcare practitioners and technical roles and 22.4% of healthcare support positions [ 5 ].

Conflict and displacement can increase the risk of disability either directly, through war-related trauma and injuries, or indirectly, through the breakdown of health systems [ 6 ]. These risks may be especially common in situations of displacement, where varying levels of access to health care in host countries can cause and/or further exacerbate disability [ 7 ]. The World Health Organization (WHO) estimates that 2.4 billion people have health conditions that would benefit from rehabilitation services [ 8 ]. UNHCR estimated that 70.8 million people were forcibly displaced worldwide by the end of 2018 [ 9 ]. Accordingly, almost 7.65 million people with disabilities faced forced displacement [ 10 ]. Refugees and asylum seekers with disabilities are confronted with multiple and intersecting forms of discrimination and have worse health outcomes [ 11 ].

Studies estimate that one in six refugees has a physical health problem severely affecting their lives, and two-thirds experience mental health problems [ 12 ]. There is a high prevalence of non-communicable diseases (NCDs) in the refugee population, resulting in high levels of vulnerability and long-term health complications [ 13 , 14 ]. Many refugees also have pre-existing disabilities and chronic health conditions. These conditions underpin the important role of physical rehabilitation services (PRS) [ 15 ]. Immigrants and refugees with disabilities do not have equitable access to resettlement opportunities, and some countries put restrictions on the immigration of individuals with intellectual disabilities, partially due to the cost of needed educational and health services. These refugees face more marginalization and cumulative disadvantage [ 16 ]. Consequently, people from refugee backgrounds with disabilities are likely to simultaneously encounter the dual disadvantages associated with being both a refugee and a person with a disability [ 17 ].

People with a migrant background are less likely to access PRS compared to those without, possibly due to barriers such as lack of information, language problems, illiteracy, and cultural aspects [ 18 , 19 , 20 , 21 ]. Additionally, health insurance coverage has been found to be a potential predictor of immigrant health service utilization, and low income appears to be a barrier to the use of secondary care [ 22 ]. Furthermore, transport, stigma and discrimination, and financial and social isolation are among the common barriers to accessing services among immigrants’ families with disabilities [ 23 , 24 ]. Importantly, people with migrant backgrounds and foreign nationals faced more barriers to accessing care and poorer rehabilitation outcomes due to a lack of culture- or gender-sensitive treatment concepts among health care providers [ 25 ].

In Bangladesh, the lack of PRS and inclusive programs, along with a difficult and inaccessible landscape and the stigmatization of people with physical, mental, and psychological disabilities, makes it hard for people with disabilities (PwD) to exercise their basic human rights and fully participate in their communities [ 26 ]. Refugees from Syria with disabilities living in Turkey face particular challenges, including stigma, in securing income-generating opportunities [ 27 ]. Also, Norredam et al. (2010) concluded that access problems for migrants may be related to formal and informal barriers. User fees, organizational barriers, a lack of referral between services, and legal restrictions on access for specific groups are examples of formal barriers within the health system. On the other hand, language, communication, socio-cultural factors, and ‘newness’ may create informal barriers affecting migrants’ utilization patterns [ 28 ]. These studies have enhanced understanding of the challenges faced by PwD with a migrant background in accessing essential services, with a particular focus on PRS.

In order to facilitate the utilization of PRS by Afghan immigrants and refugees in Iran, the identification of barriers and enabling factors is crucial. This information can inform national and international organizations and authorities on effective strategies to ensure immigrants’ access. Notably, it is worth acknowledging that, to the best of our knowledge, there has been a notable gap in existing research regarding the utilization of PRS by Afghan immigrants and refugees with disabilities in Iran. In light of this knowledge gap, our research team initiated a qualitative study aimed at identifying the barriers and enabling factors for this vulnerable group to utilize PRS.

This qualitative study was carried out in Iran from January to March 2023, with the participation of immigrants and refugees in need of rehabilitation and also rehabilitation service providers. “Standards for Reporting Qualitative Research (SRQR)” [ 29 ] guideline and “Critical Appraisal Skills Programme (CASP) Qualitative Checklist” [ 30 ] were considered to enhance the reporting and methodological quality. The institutional review board of Shiraz University of Medical Sciences (No. 27,348) approved the study’s protocol.

Sampling and recruiting approach

Participants were selected using both convenience and snowball sampling. The interviewer (a female with a Ph.D. in physiotherapy) prepared a list of immigrants and refugees in need of PRS through communication with the centers that provide PRS to immigrants in Tehran, Isfahan, Fars, Sistan and Baluchistan, and Khorasan Razavi. She also prepared a list of PRS providers, including physiotherapists, occupational therapists, prosthetists/orthotists, audiologists, and optometrists, who served immigrants in these centers. To maximize participant diversity, the research team selected participants from different places of residence, age groups, genders, education levels, and occupations. In addition, interviewees were asked to introduce other individuals who could provide valuable information. Sampling and interviewing continued until no new findings emerged; three consecutive interviews yielding only repeated findings were taken as the indicator of saturation [ 31 ]. After receiving individuals’ initial consent to participate, an informed consent form containing general information about the research team and the research objectives was sent to them via email or instant messaging applications. The form also guaranteed participants that their identities would remain anonymous throughout the study and that they were free to withdraw at any stage.
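The stopping rule described above (continue interviewing until several consecutive interviews add nothing new) can be sketched as a small function. This is an illustrative sketch only; the interview counts below are hypothetical, not data from the study.

```python
def reached_saturation(new_codes_per_interview, window=3):
    """Return True once the last `window` interviews each produced
    zero new codes, mirroring the 'three repeated interviews' rule."""
    if len(new_codes_per_interview) < window:
        return False
    return all(n == 0 for n in new_codes_per_interview[-window:])

# Hypothetical counts of newly identified codes per successive interview:
history = [7, 5, 4, 2, 1, 0, 0, 0]
print(reached_saturation(history))  # the last three interviews added nothing new
```

In practice the judgment is qualitative rather than a simple count, but the logic of the stopping criterion is the same: sampling ends only after a run of interviews that repeat what is already coded.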

Data collection

The first author conducted individual, semi-structured interviews in both face-to-face and online settings. An effort was made to conduct the interviews in a calm environment without the presence of a third party. Before initiating each interview, the interviewer again explained the general information and project objectives to the interviewee, and the meeting started only after obtaining her or his consent. During the interview sessions, an interview guide with open questions, prepared with the participation of all team members, was used (Table 1). The questions in this guide were revised for greater clarity based on feedback from the initial interviews. To facilitate analysis of the collected qualitative data, the interviewer took notes during the sessions in addition to recording them. At the end of each interview, the recording was transcribed and saved in Microsoft Word.

Data analysis

Data analysis was carried out concurrently with data collection, using the directed content analysis approach [ 32 ]. The components of the Immigrant Health Service Utilization (IHSU) Framework (Fig. 1), which was developed to describe the particular health service utilization situation of immigrants [ 22 ], were treated as pre-defined themes. This framework explains disparities in healthcare utilization among immigrants in terms of need for healthcare, resources (enabling factors), predisposing factors, and macro-structural/contextual factors, at both the immigrant-specific and general levels. Three members of the research team participated in the data analysis. The transcribed texts were repeatedly reviewed, and the identified meaning units were coded to develop the codebook. The resulting codes were then synthesized and assigned inductively to the four main components of the adopted framework: (1) need for health care; (2) resources; (3) predisposing factors; and (4) macro-structural/contextual factors. As shown in Fig. 1, each of these components can be divided into general factors and immigrant-specific factors. Any differences of opinion among the authors were resolved through discussion and the involvement of the expert author. Data analysis was conducted using MAXQDA software (VERBI GmbH, Berlin, Germany).

Figure 1. An analytical framework for immigrant health service utilization (adapted from Yang, P. Q., & Hwang, S. H. (2016). Explaining immigrant health service utilization: A theoretical framework. Sage Open, 6(2).)
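The deductive step of directed content analysis, grouping coded meaning units under the framework's pre-defined components, can be illustrated with a short script. The codebook and quotes below are hypothetical examples for illustration, not the study's actual codebook; in the study this step was done collaboratively in MAXQDA.

```python
from collections import defaultdict

# The four pre-defined themes of the IHSU framework (Yang & Hwang, 2016).
COMPONENTS = [
    "need for health care",
    "resources",
    "predisposing factors",
    "macro-structural/contextual factors",
]

# Hypothetical codebook: inductively derived code -> framework component.
codebook = {
    "war-related injury": "need for health care",
    "insufficient insurance coverage": "resources",
    "low health literacy": "predisposing factors",
    "high inflation": "macro-structural/contextual factors",
}

def assign_codes(coded_units, codebook):
    """Group (meaning unit, code) pairs under their framework component;
    codes missing from the codebook are flagged for team discussion."""
    themes = defaultdict(list)
    for unit, code in coded_units:
        component = codebook.get(code)
        if component is None:
            raise KeyError(f"Code {code!r} not in codebook; revisit with the team")
        themes[component].append(unit)
    return dict(themes)

# Hypothetical coded meaning units drawn from interview transcripts:
coded_units = [
    ("quote about mine explosions increasing disability", "war-related injury"),
    ("quote about lacking insurance", "insufficient insurance coverage"),
]
themes = assign_codes(coded_units, codebook)
print(sorted(themes))
```

The design point the sketch captures is that directed content analysis works from a pre-existing theory downwards: the component categories are fixed in advance, and only the codes beneath them emerge from the data.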

Rigor and trustworthiness strategies

To certify the rigor and trustworthiness of qualitative studies, a number of strategies are applied to enhance the authenticity, credibility, transferability, confirmability, and dependability of the findings [ 33 ]. In the current study, the research team considered several strategies to ensure rigor and trustworthiness: (a) inserting direct codes from almost all of the participants (authenticity); (b) immersing the authors in the study for the long term and member-checking by relevant experts (credibility); (c) selecting the participants with the highest diversity (transferability); (d) checking the final findings by participants (confirmability); and (e) involving authors with different scientific and executive backgrounds in the data analysis process (dependability).

Ethical considerations

The ethical committee of Shiraz University of Medical Sciences approved the study (IR.SUMS.REC.1401.674). Participants were informed that their involvement in the study was voluntary and that they could withdraw at any time. Informed consent was obtained from participants before each interview session.

In total, 28 individuals participated in this study: 19 service users and nine service providers. The characteristics of the participants are shown in Table 2. Below, participants’ views on the factors driving increased demand for PRS among immigrants and refugees with disabilities are presented first, followed by the common barriers and enabling factors in utilizing PRS (Tables 3 and 4).

Need for physical rehabilitation services

The disastrous social and economic situation in Afghanistan has led to a significant population of Afghans without access to timely and quality health services. Therefore, many of them are affected by conditions and injuries that lead to permanent physical disabilities.

“The inefficiency of the healthcare system in Afghanistan has led to the persistence of certain diseases like cerebral palsy, resulting in long-term physical disability.” [SP6] .

A significant share of the participants stated that decades of protracted conflict in Afghanistan have led to a high prevalence of physical disabilities.

“Due to bombings, mine explosions, killings, and other recent civil wars in Afghanistan, the number of cases with physical disability has considerably increased.” [S01] .

In addition, due to the high rate of consanguineous marriages, malnutrition, and unsafe childbirth in Afghanistan, cases of disability among children are significant.

“In children, due to consanguineous marriages, improper nutrition during the mother’s pregnancy, unsafe delivery, and other such cases, it brings disorders to babies and harms them.” [SP3] .

Another possible factor in increasing the need for PRS among Afghan immigrants is accidents and injuries on the way to Iran.

“Many immigrants with disabilities are injured along the way due to hardships and accidents, many of whom require PRS.” [SP5] .

Most Afghan immigrants and refugees in Iran are engaged in construction and heavy work, which has led to a high prevalence of musculoskeletal disorders among them.

“Work injuries (including electrocution, burns, and amputation with a device during work) are among the factors that create disability among immigrants.” [SP3] .

Barriers in utilizing physical rehabilitation services

In almost all the interviews, the participants mentioned the challenges related to financial resources as one of the most important factors in the lack of sufficient benefit from PRS in Iran. In this regard, inadequacy of insurance coverage, high costs of services, and high costs of transportation and accommodation were the main financial barriers.

“Due to the lack of insurance, the need that our child has had since childhood, and the increase in prices, it is difficult to receive services, and sometimes there is a gap.” [S02] .

Immigrants live geographically scattered and therefore lack a social network that would allow those with disabilities to exchange their experiences.

“Many immigrants live in remote and marginalized areas. Therefore, many PwD do not know each other and cannot benefit from each other’s experiences.” [SP2] .

Another common challenge to benefiting from PRS among immigrants with disabilities in Iran was the lack of access to accurate and user-friendly information about treatment and rehabilitation processes. The participants believed that the lack of knowledge about the structure of Iran’s health system, immigrant legislation, and support, especially for immigrants with disabilities, had affected access to services.

“We don’t have insurance. We don’t know where to go to get services!” [S12] . “Because we don’t know much about the laws related to immigrants in Iran, it takes a long time to be able to access the services.” [S15] .

In addition, since many immigrants with disabilities do not have sufficient knowledge of the effects of physical rehabilitation interventions, they are not very willing to receive and adhere to these services.

“Many clients do not believe in PRS. They prefer to receive medical treatment rather than rehabilitative interventions.” [SP5] .

Although most Afghan immigrants share a common language with Iranians, some speak other languages, such as Pashto, which makes communication with the medical staff difficult. One of the PRS providers mentioned this issue:

“A limited number, especially those who are Pashto, are facing language problems.” [SP4] .

During the interview sessions, the participants repeatedly mentioned the unavailability of PRS in remote areas and the lack of experienced human resources there. Participants also mentioned the low quality of PRS in some remote areas.

“PRS in Iran are mainly located in the centers of provinces and densely populated cities. Therefore, it is very difficult to access these services in remote areas. In addition, the quality of service is not very acceptable in deprived areas!” [SP1] .

This situation has led to late referrals, and follow-up of patients’ treatment and rehabilitation is often unsatisfactory. One of the participants stated:

“The lack of PRS in our city has made us go to get these services on time. Even when we receive these services in other cities, it is difficult to follow our treatment process.” [S15] .

Predisposing factors

The participants believed that gender, as a predisposing factor, affects the utilization of PRS by immigrants with disabilities.

“Many Afghan women are reluctant to be evaluated and rehabilitated by a male therapist.” [SP2] .

Moreover, the financial ability of Afghan families in Iran to pay for PRS has declined significantly as a result of their large household sizes.

“We Afghans often have large families. This issue has caused us to not have enough financial ability to receive services due to our limited income.” [S7] .

Regarding socioeconomic status, a major proportion of individuals talked about the high rate of poverty and low education level among immigrants, especially disabled immigrants.

“Many of our clients are at risk of poverty. Well, it is clear that they cannot apply for PRS.”[SP1] . “The level of education among most immigrants is very low. This factor has caused them to be reluctant to receive these services.”[SP3] .

The disruption of the immigrants’ rehabilitation process was greatly attributed to their low health literacy and lack of awareness regarding the role and responsibilities of rehabilitation professions.

“Many clients expect us to treat them like physicians. They expect to receive medicine, injections, and such interventions.”[SP2] .

Throughout the study, participants pointed out that immigrant-specific risk factors that can affect PRS use by disabled immigrants include: (1) immigration status (not being able to stay in accommodation centers, fearing arrest and deportation, and not being able to use public facilities); (2) assimilation (cultural mismatch); and (3) immigrant ethnic culture (not trusting modern treatments).

“Many immigrants who live in the country illegally are not allowed to attend the residence centers. This issue has caused many of them not to come to a city like Mashhad to receive services.” [SP1] .

Macro-structural/contextual factors

Macro-structural/contextual factors include general factors (government policy, healthcare system, and social, economic, and political conditions) and immigrant-specific factors (context of emigration, assimilation, and health service utilization in the homeland).

Regarding government policy, a number of participants believed that there was no comprehensive policy for providing PRS in Iran.

“In Iran, various institutions provide PRS. It is not easy to benefit from these services, even for Iranians. In fact, there is no documented and detailed plan for the effective provision of these services in Iran.”[SP2] .

In addition, some immigrants stated that although there are various laws related to the importance of their utilization of health services in Iran, these laws are strict and not very facilitating.

“Although according to the law, we can receive health services, when receiving services, we have to present our residence documents, which makes many Afghans refuse to receive services.” [S14] .

The participants pointed out that because there is not enough information about the available PRS and how to benefit from them, many immigrants in need cannot benefit from these services.

“We don’t know much about the services available and how we can benefit from them.” [S4] .

There are also various challenges in Iran’s health system that make it difficult for immigrants to benefit from PRS.

“Rehabilitation problems not only for immigrants but also for Iranians have not been among the priorities of health policymakers.” [SP2] .

Additionally, the private sector in Iran provides a sizable portion of PRS. This, along with the limited insurance coverage of PRS, has made out-of-pocket payments very high when receiving services.

“Many PRS in Iran are provided by the private sector, and since the insurance coverage of these services is not significant in Iran, the recipient is forced to pay almost all costs out of pocket. This issue is difficult for immigrants who do not have much income.” [SP2] .

On the other hand, receiving PRS, especially in government and charity centers, is associated with long waiting lists.

“Sometimes the waiting lists are so long that we give up getting services.” [S6] .

The high workload of physical rehabilitation professions was another challenge that was mentioned during the interview sessions. One of the participants said:

“They [service providers] are very busy, and we cannot ask them questions, and we have to wait in line.” [S7] .

Many participants believed that Iran’s economic situation, especially high inflation, has greatly affected immigrants’ use of health services, especially PRS.

“Everything is becoming more expensive in Iran. This has caused us to no longer have the capacity to receive these services.” [S1] .

In addition, there is no significant social support for immigrants and refugees to benefit from health services, including PRS, in Iran.

“There are not many support conditions here for people like me.” [S3] .

Finally, the infrastructure of urban environments affects the mobility of potential rehabilitation users.

“Because I am in a wheelchair, it is very difficult to get around. It is difficult to cross the street, even if it is not far.” [S3] .

Enabling factors in utilizing physical rehabilitation services

The participants believed that strengthening financial resources could significantly improve their use of PRS. In this regard, several enabling factors were proposed during the interviews, including strengthening insurance coverage, increasing public funding, using charities’ and NGOs’ financial resources, using the capacity of employers, and creating an insurance fund for immigrants.

“Even immigrants who have a residence permit and insurance have to pay most of the costs themselves when receiving services. It is necessary to increase the insurance coverage to receive more services.” [S16] . “The financial capacity of non-governmental organizations and charities can be used to finance the PRS needed by immigrants.” [SP2] .

One of the PRS providers stated that even if the necessary financial resources are available, it is necessary to allocate these resources in a targeted manner in order to avoid their waste.

“Usually, in many countries, even when domestic and foreign financial resources are provided to provide health services for groups such as immigrants, they are not used effectively. Therefore, it is necessary to consider a targeted process in the allocation of designated resources.” [SP3] .

The existence of social support from both Iranians and immigrant networks can significantly facilitate the benefits of PRS for immigrants with disabilities.

“The neighbors (Iranians) guided us and said that centers like the Red Cross can help you.” [S7] .

Regarding informational resources, participants believed that providing information about available services and holding educational campaigns about PRS could increase the use of these services.

“There is a need to provide accurate information about the available services.” [SP2] .

Afghan interpreters can also be used for groups that are not fluent in Persian in order to facilitate communication with the physical rehabilitation staff.

“For immigrants who cannot speak Iranian Farsi, Afghans who have a longer residence history in Iran can be used to facilitate communication with service providers.” [SP2] .

The participants stated that various factors can facilitate the access of immigrants with disabilities to PRS, including the use of mobile rehabilitation teams and improving access to existing rehabilitation centers.

“Because many of these groups [disabled immigrants] live in remote areas and slums, mobile rehabilitation teams can be created to provide them with essential services.” [SP1] .

In addition, providers’ behavior should give disabled immigrants more motivation to seek services. The participants believed that respectful behavior from providers, as well as allocating enough time for examination and rehabilitation, was very effective in helping them benefit from these services.

“The providers treat us very well and spend enough time with us; this is very pleasant for us.” [S3] .

In relation to socio-economic status, one of the providers stated that having an educated family can partially facilitate the use of PRS, especially for children.

“Immigrants with more educated families are more likely to apply for services.” [SP3] .

Also, users’ trust in PRS providers enhances uptake of services as well as adherence to the treatment and rehabilitation process.

“People who trust these services and providers tend to be more adherent to prescribed interventions.” [SP2] .

According to the findings, not considering the residence status of immigrants can increase their benefits from services. One of the service providers stated:

“I don’t think the residence status of immigrants should be asked when providing services. Because anyway, a significant share of them are undocumented.” [SP5] .

Furthermore, it is necessary to facilitate and speed up the registration process for newly arrived immigrants.

“Our [immigrants] registration process in Iran takes a lot of time. I think it should facilitate our registration process.” [S12] .

Regarding assimilation, the participants stated that it is necessary to provide the necessary platform for cultural integration for immigrants.

“Although Afghans and Iranians are two neighboring countries, there are still cultural differences that need to be addressed for the cultural integration of immigrants.” [SP2] .

Many Afghan immigrants do not have much faith or trust in modern medical interventions. Therefore, after becoming disabled, they turn to local interventions, which worsen their health condition. In response, it is necessary to increase their level of awareness and trust in these services by using educational packages.

“Many immigrants do not pay attention to the interventions provided by us. Most of them try to treat and rehabilitate using their religious and local beliefs.” [SP7] .

During the interview sessions, participants mentioned several enabling factors related to government policy. They believed that providing PRS to disabled immigrants requires knowing the level of need for these services among immigrants.

“A comprehensive needs assessment should be done at the country level so that policy- and decision-makers are fully aware of the demand for these services among refugees and immigrants.” [SP1] .

In addition, information about existing laws and regulations related to the benefits of immigrants and refugees to health services, including PRS, should be provided accurately and continuously.

“A mechanism should be provided so that this group [immigrants] has enough knowledge about the existing laws to access services.” [SP3] .

The interviewees stated that providing job skills training courses for immigrants can improve their use of health services, including PRS, while strengthening their economic status. Efforts should also be made to attract international financial resources for providing health services to this group, while keeping health service provision separate from politics.

“Annually, significant financial resources are allocated by international organizations to improve the health of immigrants and asylum seekers worldwide. I think we should also try to attract these financial resources.” [SP4] .

Clarifying the role of each stakeholder in providing PRS to immigrants, as well as promoting interdepartmental collaboration, were among the other suggestions presented in this study to improve immigrants’ access to PRS.

Regarding the healthcare system, participants, especially PRS providers, believed that a specific service package should be determined for immigrants and that major investment should be focused on services with preventive effects. In addition, the referral of disabled immigrants to other levels of the health system should be facilitated.

“It should be possible to create a referral system so that needy immigrants can easily access other levels of the health system to follow up on their rehabilitation process.” [SP7] .

Other suggestions provided by the participants in this category include the following: training the practitioners regarding the health needs of immigrants; community-based education to improve health literacy; increasing immigrants’ awareness of available services and eligibility criteria; and investment in modern technology in delivering PRS. Also, one of the providers believed that in order to provide quality services, it was necessary to have a strict evaluation and monitoring system for PRS.

“In order to provide adequate quality services, they must be continuously evaluated and monitored.” [SP3] .

During the interviews, it was stated that the participation of charities and non-governmental organizations in providing PRS can be significantly effective.

“There are many domestic and international charities and NGOs that are interested in providing such services to vulnerable groups, such as immigrants.” [S4] .

Efforts to stabilize the employment of immigrants, including the certification of health service providers, were proposed to strengthen immigrants’ economic status; participants believed such conditions would increase their use of PRS.

“Until immigrants’ employment and income status is resolved, they will be reluctant to use PRS, which is costly.” [SP3] .

Also, it was stated that increasing the awareness and sensitivity of society at large towards the health status of immigrants and refugees can facilitate the use of services by immigrants.

“In my opinion, the general society and especially policymakers should be sensitive to the health status of immigrants and its effect on other members of society so that these services can be provided for them.” [SP5] .

The findings shed light on several key factors affecting access to PRS, including immigrants’ chronic health problems, a lack of financial and social resources, and inadequate insurance coverage. Predisposing factors such as gender, cultural inaccessibility, and large household size, which strains households’ ability to pay, as well as immigrant-specific factors such as immigration status, assimilation, and ethnic culture, also played a substantial role in disabled immigrants’ use of PRS. The need for healthcare was identified as one of several factors shaping the use of PRS: many disabled immigrants faced preexisting and emerging health conditions, making them among those most in need of rehabilitation services. Chronic, long-term conditions, as well as new and emerging diseases, are common among refugees; in one sample of adult refugees in the United States, just over half (51.1%) had at least one chronic NCD [ 34 ].

Barriers in utilizing PRS among immigrants and refugees with disabilities

Immigrants are among the most in need of rehabilitation care, yet they are often the least able to access and use culturally adapted services due to social, economic, and political barriers [ 35 ]. Weak communication with healthcare providers, difficulties with recording refugees’ health data, traditional and cultural aspects of healthcare-seeking behavior, and language barriers have been reported as the main barriers to accessing health services in Iran [ 36 ]. Furthermore, the lack of financial and social resources proved to be a major barrier, as disabled immigrants encountered high costs of services, transportation, and accommodation. Insurance coverage was inadequate, and for some nonexistent, leading to substantial out-of-pocket expenses, particularly in the private sector, where a significant portion of PRS in Iran is provided. Other studies have also suggested that cost is the principal barrier to care-seeking for refugees, and an inequitable healthcare policy in terms of insurance coverage was cited as a major barrier to accessing care [ 37 ]. Poor health literacy and a lack of awareness of one’s right to healthcare; language and cultural differences; protection issues resulting from a lack of legal status; and an inability to afford healthcare due to inadequate livelihoods are among the barriers to accessing health services for immigrants in Malaysia [ 38 ].

Shortage of infrastructures and geographical barriers

Additionally, the unavailability of PRS in remote areas and a shortage of experienced human resources in these regions further limited access to PRS for immigrants with disabilities, even though most of them reside in marginalized and deprived regions.

Cultural barriers

Predisposing factors, such as gender, cultural inaccessibility, and high poverty rates among immigrants, were also identified as significant barriers. Cultural mismatches and a lack of trust in modern treatments among immigrant ethnic cultures were also noted as influential factors impeding PRS utilization. Settlement in suburban areas with limited public transportation and a lack of linguistically, culturally, and gender-appropriate services negatively affected access to and use of healthcare services for refugees, as reported in several studies [ 39 , 40 , 41 ]. The lack of cultural adaptation, such as insufficient information, as well as the location of the rehabilitation center and language barriers, affected the accessibility of the services to immigrant families in Norway [ 42 ]. Barriers to establishing social networks and utilizing available disability services were reported among fathers with disabled children [ 43 ].

Political barriers

The specific immigration status of disabled immigrants contributed to their inability to access public facilities and their fear of arrest and deportation, which discouraged them from seeking PRS. Physical and other impairments often make it more difficult for individuals to access safety and relief opportunities. However, persons with disabilities (PwD) have been ignored for far too long [ 44 ].

Macro-structural level barriers

The lack of comprehensive policies for providing PRS, either for Iranians or immigrants in Iran, posed a substantial barrier. Health policymakers did not prioritize PRS, which resulted in a lack of knowledge about the services that were available and how to access them. The lack of knowledge about the structure of Iran’s health system, the laws related to immigrants, and the existing support for immigrants, especially immigrants with disabilities, has kept service utilization by immigrants low. Complications in navigating the healthcare system have been identified as a barrier in other countries [ 45 ], and health system barriers include external resource constraints, costs to the individual, discrimination, and high bureaucratic requirements [ 46 ].

Government and charity centers faced long waiting lists for PRS, hindering timely access. Moreover, the high workload of physical rehabilitation professionals further strained the availability of services. Additionally, high inflation and unemployment rates pose significant challenges for immigrants and refugees with disabilities seeking PRS in Iran. These barriers are exacerbated in the cases of immigrants and refugees who are undocumented, which leads to their lack of legal access to healthcare services. A study that specifically targeted undocumented migrants revealed that 62% of them had unmet health needs, while 53% had major difficulties accessing health services. Key obstacles included cost-related issues and extended waiting lists [ 47 ].

Enabling factors in utilizing PRS among immigrants and refugees with disabilities

Despite the barriers, several enabling factors were reported by participants to improve PRS utilization.

Financial support policies

Strengthening financial resources, insurance coverage, and public funding were proposed to reduce the financial burden on immigrants and refugees with disabilities seeking PRS. Addressing income inequality, improving access to social and developmental services, and improving the cultural sensitivity of services have been recommended in other studies [ 24 ]. Leveraging resources from charities, NGOs, and employers was also suggested to enhance accessibility.

Macro-structural level interventions

Improving information dissemination about available services and conducting educational campaigns were identified as potential enabling factors. Facilitating the referral process to other levels of the health system, training practitioners to understand immigrants’ and refugees’ health needs, promoting community-based education, and investing in modern technology were also recommended strategies. Tofani et al. have mentioned that removing barriers to rehabilitation and assistive technology for refugees with disabilities should focus on health literacy and the empowerment of migrants, data collection on health, disability, and assistive technology, and the organization of community-based rehabilitation programs [ 11 ]. A US-based study recommends an expanded pool of medical interpreters, peer navigators, innovative health information technologies, and greater collaboration and information sharing between service systems to address barriers affecting disabled and chronically ill refugees [ 48 ].

Providing job skills training courses and improving economic status were suggested to empower immigrants and refugees with disabilities to access PRS. Separating the provision of health services from politics could attract international financial resources to cater to the healthcare needs of immigrants and refugees. Collaboration among departments and investment in services with preventive effects were deemed beneficial. Additionally, the participation of charities and NGOs in providing PRS and efforts toward the stable employment of immigrants and refugees were considered valuable steps to enhance utilization. Host countries should assess the mental health of newcomers alongside physical evaluations, grant humanitarian migrants access to regular health care, and ensure they are able to use it [ 49 ].

Improving geographical access

Using mobile rehabilitation teams and improving access to existing centers could address the problem of unavailability in remote areas. Another study has shown that mobile rehabilitation teams and telerehabilitation could address this problem for immigrants and refugees [ 50 ].

Cultural interventions

Moreover, improving the behavior of service providers and allocating sufficient time to each patient were considered crucial to promoting utilization. Addressing predisposing factors, such as facilitating the registration process for newly arrived immigrants and refugees and promoting cultural integration, was proposed to improve PRS utilization. Building trust in modern PRS through educational packages and increasing their literacy level were also highlighted as important enabling factors. Healthcare providers should help clients become more aware of the resources available to them in the hospital and in the community. More time should be allotted when working with immigrant and refugee families to build rapport [ 35 ].

Limitations

This qualitative study is subject to certain limitations. Firstly, the study samples were drawn exclusively from the five provinces in Iran with the highest immigrant populations, which may limit the generalizability of the findings to a broader national context. Secondly, the interviews were conducted with providers and individuals who had visited rehabilitation centers of NGOs and national societies. However, it is important to acknowledge that a significant proportion of immigrants and refugees in need of PRS may not access these centers due to various factors, which were not explored within the scope of this study. This limited our ability to capture the perspectives of those who face barriers preventing them from seeking PRS. Consequently, the study’s findings may not fully represent the experiences and challenges of this underserved population. In response, future studies should investigate the barriers to accessing these services among all immigrants and refugees, not just the group that visits rehabilitation centers. In addition, quantitative studies are needed to monitor the real demands for receiving different types of PRS among immigrants and refugees in Iran.

Our study shed light on the barriers to utilizing needed PRS among Afghan immigrants and refugees with disabilities and showed that multiple cultural, financial, social, and political barriers make it difficult for this group to utilize PRS. Recognizing the multifaceted nature of these challenges and opportunities is imperative. By proactively addressing these factors, stakeholders can devise targeted interventions and formulate policies that foster equitable access and greater utilization. The potential impact of such measures extends beyond numerical improvements; it lies in tangible enhancements to the health outcomes of this vulnerable and underserved population. Nevertheless, the complexity of these experiences and the dynamic nature of this field underscore the ongoing importance of research and collaboration to continually refine and adapt interventions, ensuring a responsive and inclusive healthcare landscape for all. Given the high rate of disability among refugees, international and national authorities should place refugees’ access to PRS on the agenda to ensure that high-quality, affordable services are well defined and made known to this community.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Koczan Z, Peri G, Pinat M, Rozhkov D. The impact of international migration on inclusive growth: a review. IMF Working Paper No. 2021/088; 2021. Available at SSRN: https://ssrn.com/abstract=4026261

Gushulak BD, Weekers J, MacPherson DW. Migrants and emerging public health issues in a globalized world: threats, risks and challenges, an evidence-based framework. Emerg Health Threats J. 2009;2(1):7091.

Elsby MW, Smith J, Wadsworth J. Population growth, immigration and labour market dynamics. 2021.

Buchan J, Campbell J, Dhillon I, Charlesworth A. Labour market change and the international mobility of health workers. Health Foundation Working Paper No. 5; 2019.

Brennan N, Langdon N, Bryce M, Burns L, Humphries N, Knapton A, Gale T. Drivers and barriers of international migration of doctors to and from the United Kingdom: a scoping review. Hum Resour Health. 2023;21(1):11.

Shahabi S, Skempes D, Pardhan S, Jalali M, Mojgani P, Lankarani KB. Nine years of war and internal conflicts in Syria: a call for physical rehabilitation services. Disabil Soc. 2021;36(3):508–12.

Boggs D, Atijosan-Ayodele O, Yonso H, Scherer N, O’Fallon T, Deniz G, Volkan S, Örücü A, Pivato I, Beck AH. Musculoskeletal impairment among Syrian refugees living in Sultanbeyli, Turkey: prevalence, cause, diagnosis and need for related services and assistive products. Confl Health. 2021;15:1–14.

World Health Organization. Rehabilitation [fact sheet]. Geneva: World Health Organization; 2023. Available from: https://www.who.int/news-room/fact-sheets/detail/rehabilitation

United Nations High Commissioner for Refugees. Global forced displacement tops 70 million. 2019. Available from: https://www.unhcr.org/news/stories/global-forced-displacement-tops-70-million

UN. Thematic study on the rights of persons with disabilities under article 11 of the Convention on the Rights of Persons with Disabilities, on situations of risk and humanitarian emergencies. A/HRC/31/30; 2015.

Tofani M, Iorio S, Berardi A, Galeoto G, Conte A, Fabbrini G, Valente D, Marceca M. Disability, Rehabilitation, and Assistive Technologies for Refugees and Asylum Seekers in Italy: policies and challenges. Societies. 2023;13(3):63.

Burnett A, Peel M. Health needs of asylum seekers and refugees. BMJ. 2001;322(7285):544–7.

Al-Oraibi A, Hassan O, Chattopadhyay K, Nellums L. The prevalence of non-communicable diseases among Syrian refugees in Syria’s neighbouring host countries: a systematic review and meta-analysis. Public Health. 2022;205:139–49.

Divkolaye NSH, Burkle FM Jr. The enduring health challenges of Afghan immigrants and refugees in Iran: a systematic review. PLoS Curr. 2011;9.

Khan F, Amatya B. Refugee health and rehabilitation: challenges and response. J Rehabil Med. 2017;49(5):378–84.

Oner O, Kahilogullari AK, Acarlar B, Malaj A, Alatas E. Psychosocial and cultural needs of children with intellectual disability and their families among the Syrian refugee population in Turkey. J Intellect Disabil Res. 2020;64(8):644–56.

King J, Edwards N, Correa-Velez I, Hair S, Fordyce M. Disadvantage and disability: experiences of people from refugee backgrounds with disability living in Australia. Disabil Global South. 2016;3(1):843–64.

Schröder CC, Breckenkamp J, du Prel JB. Medical rehabilitation of older employees with migrant background in Germany: does the utilization meet the needs? PLoS ONE. 2022;17(2):e0263643.

Kim KM, Hwang SK. Being a ‘good’ mother: immigrant mothers of disabled children. Int Social Work. 2019;62(4):1198–212.

Khanlou N, Haque N, Sheehan S, Jones G. It is an issue of not knowing where to go: service providers’ perspectives on challenges in accessing social support and services by immigrant mothers of children with disabilities. J Immigr Minor Health. 2015;17(6):1840–7.

Zimba O, Gasparyan AY. Refugee health: a global and multidisciplinary challenge. J Korean Med Sci. 2023;38(6).

Yang PQ, Hwang SH. Explaining immigrant health service utilization: a theoretical framework. Sage Open. 2016;6(2):2158244016648137.

Dew A, Lenette C, Wells R, Higgins M, McMahon T, Coello M, Momartin S, Raman S, Bibby H, Smith L, et al. ‘In the beginning it was difficult but things got easier’: service use experiences of family members of people with disability from Iraqi and Syrian refugee backgrounds. J Policy Pract Intellect Disabil. 2023;20(1):33–44.

Khanlou N, Mustafa N, Vazquez LM, Haque N, Yoshida K. Stressors and barriers to services for immigrant fathers raising children with Developmental Disabilities. Int J Mental Health Addict. 2015;13(6):659–74.

Dyck M, Breckenkamp J, Wicherski J, Schröder CC, du Prel J-B, Razum O. Utilisation of medical rehabilitation services by persons of working age with a migrant background, in comparison to non-migrants: a scoping review. Public Health Rev. 2020;41(1):17.

Haar RJ, Wang K, Venters H, Salonen S, Patel R, Nelson T, Mishori R, Parmar PK. Documentation of human rights abuses among Rohingya refugees from Myanmar. Confl Health. 2019;13(1).

Polack S, Scherer N, Yonso H, Volkan S, Pivato I, Shaikhani A, Boggs D, Beck AH, Atijosan-Ayodele O, Deniz G. Disability among Syrian refugees living in Sultanbeyli, Istanbul: results from a population-based survey. PLoS ONE. 2021;16(11):e0259249.

Norredam M, Nielsen SS, Krasnik A. Migrants’ utilization of somatic healthcare services in Europe—a systematic review. Eur J Pub Health. 2010;20(5):555–63.

O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: a synthesis of recommendations. Acad Med. 2014;89(9):1245–51.

Long HA, French DP, Brooks JM. Optimising the value of the critical appraisal skills programme (CASP) tool for quality appraisal in qualitative evidence synthesis. Res Methods Med Health Sci. 2020;1(1):31–42.

Saunders B, Sim J, Kingstone T, Baker S, Waterfield J, Bartlam B, Burroughs H, Jinks C. Saturation in qualitative research: exploring its conceptualization and operationalization. Qual Quant. 2018;52:1893–907.

Assarroudi A, Heshmati Nabavi F, Armat MR, Ebadi A, Vaismoradi M. Directed qualitative content analysis: the description and elaboration of its underpinning methods and data analysis process. J Res Nurs. 2018;23(1):42–55.

Kyngäs H, Kääriäinen M, Elo S. The trustworthiness of content analysis. In: The application of content analysis in nursing science research. Springer; 2020. pp. 41–8.

Yun K, Hebrank K, Graber LK, Sullivan M-C, Chen I, Gupta J. High prevalence of chronic non-communicable conditions among adult refugees: implications for practice and policy. J Community Health. 2012;37:1110–8.

Lindsay S, King G, Klassen AF, Esses V, Stachel M. Working with immigrant families raising a child with a disability: challenges and recommendations for healthcare and community service providers. Disabil Rehabil. 2012;34(23):2007–17.

Rahimitabar P, Kraemer A, Bozorgmehr K, Ebrahimi F, Takian A. Health condition of Afghan refugees residing in Iran in comparison to Germany: a systematic review of empirical studies. Int J Equity Health. 2023;22(1).

Mohammadi S, Carlbom A, Taheripanah R, Essén B. Experiences of inequitable care among Afghan mothers surviving near-miss morbidity in Tehran, Iran: a qualitative interview study. Int J Equity Health. 2017;16(1).

Chuah FLH, Tan ST, Yeo J, Legido-Quigley H. The health needs and access barriers among refugees and asylum-seekers in Malaysia: a qualitative study. Int J Equity Health. 2018;17(1):1–15.

Guruge S, Sidani S, Illesinghe V, Younes R, Bukhari H, Altenberg J, Rashid M, Fredericks S. Healthcare needs and health service utilization by Syrian refugee women in Toronto. Confl Health. 2018;12(1):1–9.

Dadras O, Dadras F, Taghizade Z, Seyedalinaghi S, Ono-Kihara M, Kihara M, Nakayama T. Barriers and associated factors for adequate antenatal care among Afghan women in Iran: findings from a community-based survey. BMC Pregnancy Childbirth. 2020;20(1).

Arfa S, Solvang PK, Berg B, Jahnsen R. Participation in a rehabilitation program based on adapted physical activities in Norway: a qualitative study of experiences of immigrant parents and their children with disabilities. Disabil Rehabil. 2022;44(9):1642–9.

Alsharaydeh E, Alqudah M, Lee R, Chan S. Challenges, coping and resilience in caring for children with disability among immigrant parents: a mixed methods study. J Adv Nurs. 2023;79(6):2360–77.

Smith-Khan L, Crock M. ‘The highest attainable standard’: the right to health for refugees with disabilities. Societies. 2019;9(2):33.

Asif Z, Kienzler H. Structural barriers to refugee, asylum seeker and undocumented migrant healthcare access. Perceptions of doctors of the World caseworkers in the UK. SSM-Mental Health. 2022;2:100088.

Hacker K, Anies M, Folb BL, Zallman L. Barriers to health care for undocumented immigrants: a literature review. Risk Manag Healthc Policy. 2015:175–83.

Schottland-Cox J, Hartman J. Physical therapists needed: the Refugee Crisis in Greece and our ethical responsibility to Respond. Phys Ther. 2019;99(12):1583–6.

Mirza M, Luna R, Mathews B, Hasnain R, Hebert E, Niebauer A, Mishra UD. Barriers to healthcare access among refugees with disabilities and chronic health conditions resettled in the US Midwest. J Immigr Minor Health. 2014;16:733–42.

Matlin SA, Depoux A, Schütte S, Flahault A, Saso L. Migrants’ and refugees’ health: towards an agenda of solutions. Public Health Rev. 2018;39(1):27.

Castillo YA, Cartwright J. Telerehabilitation in Rural Areas: A Qualitative Investigation of Pre-service Rehabilitation Professionals’ Perspectives. sgrjarc (2):6–13.

Acknowledgements

This paper and the research behind it would not have been possible without the exceptional support of all the contributors who agreed to donate their time and ideas and take part in this study.

This research was supported by the Shiraz University of Medical Sciences, Shiraz, Iran (No: 27348).

Author information

Authors and affiliations

The International Committee of the Red Cross, Tehran Delegation, Tehran, Iran

Elaheh Amini

The National Institute for Health and Care Research Applied Research Collaboration West (NIHR ARC West) at University Hospitals Bristol and Weston NHS Foundation Trust, Bristol, UK

Manal Etemadi

Population Health Sciences, Bristol Medical School, University of Bristol, Bristol, UK

Health Policy Research Center, Institute of Health, Shiraz University of Medical Sciences, Shiraz, Iran

Saeed Shahabi, Farzaneh Honarmandi, Marzieh Karami Rad & Kamran Bagheri Lankarani

Centre for Primary Care and Public Health (Unisanté), University of Lausanne, Lausanne, Switzerland

Cornelia Anne Barth

Student Research Committee, Shiraz University of Medical Sciences, Shiraz, Iran

Marzieh Karami Rad

Contributions

S.SH, E.A, and K.B.L contributed to the conception and design of the study. E.A. conducted the interviews, and S.SH was co-moderator. S.SH and M.E conducted most of the analysis, which K.B.L and M.K.R discussed regularly. S.SH, E.A and M.E wrote the initial draft, and K.B.L, F.H, B.C.A, and M.K.R contributed to manuscript revisions. All authors read and confirmed the final manuscript.

Corresponding author

Correspondence to Saeed Shahabi .

Ethics declarations

Ethics approval and consent to participate

The Research Ethics Committee of Shiraz University of Medical Sciences provided ethical approval for this study (IR.SUMS.REC.1401.674). All methods were performed in accordance with the relevant guidelines and regulations, including the Declaration of Helsinki. Informed consent was obtained from all participants before the interview sessions.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Amini, E., Etemadi, M., Shahabi, S. et al. Barriers and enabling factors for utilizing physical rehabilitation services by Afghan immigrants and refugees with disabilities in Iran: a qualitative study. BMC Public Health 24 , 893 (2024). https://doi.org/10.1186/s12889-024-18374-4

Received : 22 August 2023

Accepted : 17 March 2024

Published : 25 March 2024

DOI : https://doi.org/10.1186/s12889-024-18374-4

Keywords

  • Rehabilitation
  • Qualitative study

Original Research Article

Exploring primary care physicians’ challenges in using home blood pressure monitoring to manage hypertension in Singapore: a qualitative study

  • 1 SingHealth Polyclinics, Singapore, Singapore
  • 2 SingHealth Duke-NUS Family Medicine Academic Clinical Programme, Singapore, Singapore

Objective: Hypertension guidelines recommend using home blood pressure (HBP) to diagnose, treat and monitor hypertension. This study aimed to explore the challenges primary care physicians (PCPs) face in using HBP to manage patients with hypertension.

Method: A qualitative study was conducted in 2022 at five primary care clinics in Singapore. An experienced qualitative researcher conducted individual in-depth interviews with 17 PCPs using a semi-structured interview guide. PCPs were purposively recruited based on their clinical roles and seniority until data saturation. The interviews were audio-recorded, transcribed verbatim and managed using NVivo qualitative data management software. Analysis was performed using thematic analysis.

Results: PCPs identified variations in patients’ HBP monitoring practices and inconsistencies in recording them. Access to HBP records relied on patients bringing their records to the clinic visit. A lack of seamless transfer of HBP records to the electronic medical record (EMR) resulted in inconsistent documentation and additional workload for PCPs. PCPs struggled to interpret the HBP readings, especially when there were BP fluctuations; this made treatment decisions difficult.

Conclusion: Despite strong recommendations to use HBP to inform hypertension management, PCPs still faced challenges accessing and interpreting HBP readings; this makes clinical decision-making difficult. Future research should explore effective ways to enhance patient self-efficacy in HBP monitoring and support healthcare providers in documenting and interpreting HBP.

Introduction

Treatment of hypertension relies heavily on accurate BP measurement. However, office BP alone is inadequate to assess BP control accurately due to BP variability and the existence of various hypertension phenotypes such as white-coat uncontrolled hypertension and masked hypertension ( 1 – 3 ). Increasingly, out-of-office BP measurements, including ambulatory BP and home BP (HBP), are being recommended in the diagnosis and monitoring of hypertension ( 4 ). Additionally, high ambulatory BP and HBP have been associated with increased cardiovascular morbidity and mortality independent of office BP measurements ( 5 , 6 ). HBP monitoring is superior to ambulatory BP monitoring in cost, accessibility and usability ( 7 , 8 ). Globally, HBP device ownership varies between 30 and 70% ( 9 – 15 ). In a local study of Singaporean physicians, a population known for embracing technology, more than 80% recommended HBP monitoring to their patients and utilized these measurements to monitor the effects of anti-hypertensive therapy and make informed clinical decisions ( 16 ).

However, despite the increasing reliance on using HBP to diagnose, monitor and treat hypertension, studies have reported that PCPs face challenges in using HBP to manage patients with hypertension ( 17 , 18 ). These included inconsistency in HBP measurement, high out-of-pocket costs of home BP devices, and time needed to instruct patients on home BP monitoring procedures. A more recent study among American PCPs reported difficulty accessing patients’ HBP records and a lack of workflow to support patients using HBP devices ( 19 ). A local cross-sectional survey of 60 physicians (30 PCPs, 20 cardiologists and 10 nephrologists) identified patient inertia, poor patient adherence, short medical consultation time, and poor patient access to a BP machine as the barriers to implementing HBP monitoring in both hospital and primary care settings ( 16 ).

In primary care, where the majority of patients with hypertension are managed, the importance of HBP to inform prompt and accurate clinical decisions is even more pertinent, given the time constraints. Therefore, this study aimed to explore the challenges faced by PCPs when using HBP to manage patients with hypertension in their daily clinical practice. The findings of this study will help in the design of effective interventions in supporting healthcare providers to make informed decisions based on HBP. This study is part of a larger study to explore the challenges faced by doctors, nurses and pharmacists in managing patients with hypertension in a primary care setting.

Materials and methods

Study design.

A qualitative methodology using the descriptive-interpretive approach ( 20 ) was adopted to explore the challenges in managing patients with hypertension, with a particular focus on HBP.

Individual in-depth interviews (IDIs) of PCPs were conducted across five public primary care clinics located in the southern and eastern regions of Singapore (Bukit Merah, Marine Parade, Eunos, Sengkang and Pasir Ris). These primary care clinics provided hypertension care for a multi-ethnic Asian population; in 2022, there were over 200,000 patient visits for hypertension, with 850 attendances daily (based on the institution’s electronic medical record (EMR) system and business database).

Period of study

The field work was conducted between April 2022 and March 2023.

Research team

The research team comprised one female (ASM) and two male (PO, CJN) PCPs who worked in the institution where the study was conducted; they managed patients with hypertension in their routine clinical practice. All researchers are trained in qualitative research.

Study population and recruitment

The target study population was PCPs actively practicing in an urban public primary care setting. The eligible participants were PCPs aged 21 years and older who had been actively involved in managing patients with hypertension for the past 6 months. An email invitation containing the study purpose was sent by the senior author (CJN) to the PCPs to solicit participant interest. They were informed that participation was voluntary. Those who expressed interest were approached by a research coordinator to schedule a face-to-face interview and obtain informed consent. PCPs were recruited purposively based on their seniority and roles, with an attempt to achieve maximal variation; recruitment continued until no new themes emerged from the field notes and analysis (data saturation).

Study instruments

Participant demographic data collection form.

Participant demographic data collection forms were used to collect information about PCPs’ age, gender, clinical experience, qualification and designation.

Topic guide

The researcher used a semi-structured topic guide to guide the IDIs. The topic guide was developed based on the Theory of Planned Behavior ( 21 ), a literature review and discussion with the team members. The questions covered PCPs’ experiences, barriers and facilitators, and their needs when managing patients with hypertension in general and specifically on HBP monitoring. The topic guide was pilot-tested and iteratively modified based on data that emerged during both the pilot and subsequent interviews.

Data collection

After obtaining written consent, the participant completed the demographic data collection form, followed by the individual IDI. The interviews were conducted face-to-face in a private room in the PCP’s practicing clinic. CJN conducted all the interviews using the interview topic guide. Each interview lasted 45–60 min and all were audio-recorded. All participants were reimbursed with a grocery voucher of SGD20 (estimated USD15) to compensate for their time and effort.

Data analysis

All interviews were audio-recorded, transcribed verbatim and checked for accuracy. Three researchers (ASM, PO, CJN) read and reread three transcripts independently and repeatedly to familiarize themselves with the data. Each researcher coded the transcripts line-by-line to generate a list of initial codes (open coding). The research team met regularly to discuss the codes; any coding discrepancies were resolved via consensus. Codes with similar content were grouped into categories; these categories were further rearranged to create an initial coding frame (axial coding). Based on the coding frame, the rest of the transcripts were divided among the three researchers (ASM, PO, CJN) and analysed individually using NVivo© (QSR International Pty Ltd., Australia) qualitative data management software. The researchers performed constant comparison of the content of each code; the codes were reviewed, revised, refined and re-named through discussion and consensus. Finally, a report of the themes generated, including the challenges faced by PCPs when using HBP to manage patients, was written with representative quotes to illustrate the key findings. No new challenges emerged after 12 interviews; a further five PCPs were interviewed to ensure the data had reached saturation ( 22 , 23 ).
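The saturation logic described above (stop recruiting once successive interviews contribute no codes that earlier interviews have not already yielded, then interview a few more participants to confirm) can be sketched as a small check. This is an illustrative sketch with hypothetical codes, not the study’s actual coding frame:

```python
# Illustrative sketch (hypothetical codes): the data-saturation stopping
# rule -- track how many previously unseen codes each new interview adds.

def new_codes_per_interview(coded_interviews):
    """For each interview (a set of codes), count codes unseen so far."""
    seen, new_counts = set(), []
    for codes in coded_interviews:
        fresh = codes - seen          # codes not yielded by any earlier interview
        new_counts.append(len(fresh))
        seen |= codes                 # accumulate everything seen so far
    return new_counts

# Hypothetical coded interviews, in recruitment order
interviews = [
    {"adherence", "device accuracy"},
    {"adherence", "documentation"},
    {"EMR transfer", "documentation"},
    {"interpretation"},
    {"adherence", "interpretation"},  # contributes nothing new: saturation candidate
]

print(new_codes_per_interview(interviews))  # [2, 1, 1, 1, 0]
```

A run of zeros at the tail of this list corresponds to the study’s criterion of “no new themes emerged”; the confirmatory interviews simply extend that run.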

For the data collection, a senior researcher conducted all the interviews; this helped to maintain the quality and consistency of the interviews. During the analysis, the research team met regularly to share, discuss and ‘challenge’ their interpretations of the data. The team adopted an open approach during the meetings and constantly reflected on and debated potential biases they might carry due to their backgrounds as PCPs. The demographic data collection forms, audio recordings, transcripts, field notes, coding frame and codes were maintained in secure archives to establish a clear audit trail.

Results

A total of 19 PCPs were approached, and 17 agreed and participated in the IDIs (response rate 89%). They were aged between 29 and 80 years and had between 8 months and 40 years of clinical experience in primary care. The details of their demographic characteristics are presented in Table 1 .


Table 1 . Characteristics of participating primary care physicians ( n  = 17).

PCPs faced five main challenges when using HBP to manage patients with hypertension in their daily practice: variations in patients’ HBP monitoring practices, inconsistent HBP recordings, reliance on patients to access the HBP records, the laborious process of transferring patients’ HBP records to the EMR, and difficulty in interpreting HBP records ( Figure 1 ).


Figure 1 . Challenges faced by primary care physicians on managing patients based on home blood pressure.

Variations in patients’ HBP monitoring practices

PCPs reported variation in patient adherence to HBP monitoring and in how patients measured their BP. PCPs encountered patients who declined to monitor their HBP despite owning a BP set, which they attributed to a lack of patient motivation.

“it’s very varied practice, some say they own a set but they do not use it because nobody told them to use it or they do not feel the need to use it or they use it occasionally when they do not feel well.” P08 (47-year-old, Consultant, 17 years of practice).
“Some of them (patients) will just dismiss and say, “I really do not have the time, I’m too tired.” For these patients, I feel that they do not really have the motivation...” P07 (36-year-old, Associate Consultant, 11 years of practice).
“I was told that he has a machine, but just too lazy, not motivated to check. So, this time he admitted, “Actually, I’m not even (checking), do not know where the machine is actually.” P17 (32-year-old, Family Physician, 4 years of practice).

PCPs were also concerned that patients might not be measuring the HBP correctly or the HBP monitoring devices might not be calibrated; this made them doubt the accuracy of the HBP readings.

“… because a lot of times if they do it wrongly or the machine is faulty then of course you are going to get very erroneous numbers. Sometimes, the artery marker is placed on the other side that is one thing. Sometimes it could be the position of their arm whether it’s at the heart level or not.” P14 (35-year-old, Family Physician, 8 years of practice).

Inconsistency in HBP documentation by patients

PCPs also reported a wide variation in how patients document their HBP readings. PCPs struggled to make treatment decisions when patients did not document their BP or bring their BP records during their clinic visit.

“And they may or may not, in fact more of them do not record their blood pressure. So, that is a challenge because when they come in and their blood pressure is high and you ask them, “Oh, what is your blood pressure?” Some will say, “Oh, at home it’s normal” and you ask them, “What is that number?” and they say, “I do not know but it’s normal.” So, how do you know it’s normal when you do not have a number, right?” P08 (47-year-old, Consultant, 17 years of practice).
“A lot of times they thought that it’s (recording and bringing BP records) a hassle. Some will feel like it’s not going to change anything. “Anyway, I am only here to collect medicine,” but they do not understand the point that the BP readings, they are actually so valuable because they (make) me decide what to do with their medication.” P11 (34-year-old, Family Physician, 3 years of practice).

Reliance on patients to access the HBP records

PCPs relied on their patients to bring their HBP records to the clinic. Despite the availability of an institution-based digital HBP diary (Health Buddy), the adoption of these digital tools was low, especially among older, less tech-savvy patients.

“In general, we got a fairly healthy home BP ownership rate. But the problem lies more with patients forgetting to bring the home BP readings.” P16 (40-year-old, Consultant, 12 years of practice).
“… I would say the take up rate or the use of these digital blood pressure diaries is still very low. Our population, most patients with hypertension are probably in the middle to elderly age group. Many of them are not that tech-savvy.” P08 (47-year-old, Consultant, 17 years of practice).

Laborious process of transferring patient HBP records to electronic medical record (EMR)

PCPs had to manually key in patients’ HBP records into the EMR during the consultation. They had limited time to review the stacks of HBP records brought by patients and to document them in the EMR. Without proper documentation, PCPs could not assess patients’ “BP trend” over time.

“Even for the digital BP platform itself, the information goes to a separate platform but it’s not linked to the EMR, which means you need to open up another system. That is an inconvenience for the doctors,” P16 (40-year-old, Consultant, 12 years of practice).
“Sometimes my patient’s hypertension is not very well-controlled. Then, I will need to flip back (his old BP records). Some of them, bring the whole stack, then I will flip back the past few months to see where it went wrong. But some patients will just throw (the past BP records) away. There is no good way to store this information which the patient has spent so much effort providing you with,” P07 (36-year-old, Associate Consultant, 11 years of practice).

Difficulty in interpreting home blood pressure

Some PCPs struggled to interpret the HBP readings, particularly when the readings fluctuated significantly or were borderline. PCPs were hesitant to use these BP measurements to make treatment decisions.

“And then, they bring along a set of readings that looks high, sometimes low, sometimes you just do not know what to do.” P07 (36-year-old, Associate Consultant, 11 years of practice).
“…I mean for those HBP readings that were a bit borderline, you cannot say with confidence whether they have hypertension or not. I mean there are some which are clear-cut but those like borderline, you cannot say for sure. I do not like to just put a label on the person if the evidence is not very strong.” P15 (36-year-old, Family Physician, 6 years of practice).

There was wide variation in how PCPs interpreted the HBP readings; most interpreted the HBP by ‘eyeballing’ the readings. PCPs recognized this practice gap and attributed it to the lack of guidelines on HBP monitoring and interpretation.

“And I would circle every reading that is above the clinical target. And after I’ve done that, I will look at the sheet of paper and ask myself, ‘Do I see more circled readings or non-circled readings?’ This is a very crude, rough way of doing it. Of course, the best way is to actually get the average but what is the practical way to do this very quickly in a 10-min consultation?” P08 (47-year-old, Consultant, 17 years of practice).
“Ya. I guess okay looking at the home readings… it’s like I have my own internal thing, I guess. If maybe less than 20% of the readings are too high then I’ll consider it as okay overall.” P12 (37-year-old, Consultant, 9 years of practice).
“Sometimes they will come with a paper. And then I would like review the readings to see the rough range of the readings, whether it is morning or at night and whether there is a dip in the nocturnal blood pressure or not. And then look at the majority average of the readings and see whether they are optimal or suboptimal. I do not think we have a guide on how it should be reviewed or assessed. So it might be doctor-specific.” P05 (29-year-old, Family Physician, 3 years of practice).
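The two heuristics the PCPs describe, counting the proportion of readings above target versus averaging the diary, can disagree on the very same set of readings. The following is a minimal illustrative sketch, assuming a 135/85 mmHg target and entirely hypothetical readings (none of these values come from the study):

```python
# Illustrative sketch (hypothetical data): two ways of reading a home BP
# diary -- the proportion-above-target "eyeballing" heuristic vs. the average.

HBP_TARGET = (135, 85)  # assumed systolic/diastolic target in mmHg

def proportion_above_target(readings, target=HBP_TARGET):
    """Fraction of readings at or above the systolic or diastolic target."""
    above = [r for r in readings if r[0] >= target[0] or r[1] >= target[1]]
    return len(above) / len(readings)

def average_reading(readings):
    """Mean systolic and diastolic pressure across the diary."""
    n = len(readings)
    sys_avg = sum(r[0] for r in readings) / n
    dia_avg = sum(r[1] for r in readings) / n
    return round(sys_avg, 1), round(dia_avg, 1)

# A hypothetical week of (systolic, diastolic) readings
diary = [(128, 78), (142, 88), (131, 80), (125, 76), (138, 86)]

print(proportion_above_target(diary))  # 0.4
print(average_reading(diary))          # (132.8, 81.6)
```

Here 40% of individual readings meet or exceed the target even though the average sits below it, so the “less than 20% high means okay” rule and the averaging approach reach opposite conclusions, which is exactly the kind of ambiguity the PCPs described.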

Discussion

Our study highlights the practical challenges PCPs encountered when using HBP to manage patients with hypertension. While most of the identified challenges related to patient behavior, including variations in HBP measurement and documentation, the study also surfaced important gaps in PCP competency and in the systems for transferring and integrating HBP data. The PCPs in this study did not perceive cost and access to HBP monitoring devices as major challenges in using HBP to manage patients with hypertension.

Consistent with our study, inconsistencies in HBP measurement and recording have been reported as barriers to using these records to manage patients with hypertension ( 24 , 25 ). In a survey among 643 hypertensive patients, more than two-thirds (71%) reported incorrect home BP measuring techniques ( 18 ). In a qualitative study, PCPs believed that patients failed to follow correct protocols for rest and body positioning during HBP measurements ( 26 ). A local study by Setia et al. ( 16 ) found that PCPs perceived patient inertia and poor patient compliance as the most common barriers to implementing out-of-office BP monitoring. In addition, Al-Rousan et al. ( 27 ) reported patients’ willingness to monitor HBP, albeit with concerns about their ability to do so. While patient education by clinicians and nurses can improve HBP technique ( 25 , 28 ), the time needed to instruct patients on HBP measuring protocols may not be feasible or available in busy clinical settings ( 18 ). Remote patient education through videos and online platforms could help improve HBP practices and techniques while working within the existing time constraints ( 29 ).

The lack of integration of HBP records into the EMR limited PCPs’ access to patients’ records. A systematic review of 12 studies identified similar challenges, including manually transferring patient-reported HBP data from multiple sources and devices and comparing them with office BP ( 30 ). This process was also considered laborious in busy primary care clinics ( 30 ). While integrating HBP into clinical care is an important goal, achieving integration is complex ( 31 , 32 ). Teo et al. ( 32 ) reported local PCPs’ and patients’ appreciation of the convenience of an integrated platform where HBP was automatically captured and transferred to a dashboard for PCPs to review and manage patients’ HBP promptly; patients were reassured they were being monitored by the care team and PCPs perceived their time was better utilized. Nevertheless, the participants highlighted challenges with the usability of the equipment, the management portal and data access ( 32 ). Thus, while integrating HBP records into the EMR can improve the workflow for PCPs and improve patient clinical outcomes, the perspectives of PCPs and patients need to be incorporated into the design of such a system to enhance its long-term adoption ( 33 ).

Similar to our study findings, studies by Teo et al., Setia et al., and Fletcher et al. also reported uncertainty among clinicians in interpreting HBP readings, particularly in patients with borderline BP and BP variability ( 16 , 30 , 32 ). Global hypertension guidelines recommend therapeutic HBP targets below the threshold used to diagnose hypertension. However, variation in these diagnostic thresholds results in variation in therapeutic HBP targets. While the American College of Cardiology/American Heart Association 2017 guidelines specify a daytime (awake) average ≥130 mmHg systolic or ≥80 mmHg diastolic BP ( 4 ), the European Society of Cardiology/European Society of Hypertension, the International Society of Hypertension and the National Institute for Health and Care Excellence define hypertension as an average HBP of ≥135 mmHg systolic or ≥85 mmHg diastolic BP ( 24 , 34 , 35 ). Discrepancies also exist in HBP interpretation across the guidelines, including the required number and timing of readings, averaging methods, and controversy over the omission of first readings ( 36 , 37 ). The lack of updated local clinical guidelines and of PCP awareness of existing global guidelines creates practice variations in hypertension management among local PCPs ( 16 ). Updated local practice guidelines and frequent refreshers can help standardize clinicians’ management practices and keep them abreast of the current best clinical evidence ( 38 ).
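To make the practical consequence of these divergent thresholds concrete, the sketch below classifies the same average HBP under each set of cut-offs cited above; the example average is hypothetical:

```python
# Illustrative sketch: the same average home BP classified under the two
# diagnostic thresholds discussed above. The example average is hypothetical.

GUIDELINE_THRESHOLDS = {
    "ACC/AHA 2017": (130, 80),        # average HBP >= 130/80 mmHg
    "ESC/ESH, ISH, NICE": (135, 85),  # average HBP >= 135/85 mmHg
}

def classify(avg_systolic, avg_diastolic, threshold):
    """Label an average HBP relative to a (systolic, diastolic) cut-off."""
    sys_t, dia_t = threshold
    if avg_systolic >= sys_t or avg_diastolic >= dia_t:
        return "hypertensive"
    return "normal"

avg_hbp = (132, 82)  # a hypothetical borderline average
results = {name: classify(*avg_hbp, t) for name, t in GUIDELINE_THRESHOLDS.items()}

print(results)  # {'ACC/AHA 2017': 'hypertensive', 'ESC/ESH, ISH, NICE': 'normal'}
```

A borderline average of 132/82 mmHg is labelled hypertensive under one guideline and normal under the other, which illustrates why borderline readings left the PCPs in this study hesitant to act.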

PCPs in our study attributed challenges in HBP interpretation to insufficient time to calculate the average BP, which can result in clinical inertia and suboptimal BP control ( 30 ). A lack of clinical decision-support systems can also make it difficult to interpret HBP data in a timely and accurate manner. Manually going through patients’ HBP records is time-consuming and may not be practical in a busy primary care setting ( 30 ). Clinical decision support integrated into the clinician workflow could facilitate data interpretation and prompt decision-making ( 39 ).

Unlike previous studies on HBP monitoring, the PCPs in this study did not perceive out-of-pocket cost and access to HBP monitors as major challenges ( 13 , 14 ). This could be due to the Singapore government’s active promotion of using technology for health management ( 40 ). Additionally, various government and philanthropic programs actively help patients with chronic conditions like hypertension afford and utilize digital health tools. Locally, a philanthropy-sponsored program offers self-monitoring devices, including HBP devices, to “needy” patients at half the original price ( 41 ). Since its launch in 2020, more than 400 patients per year have benefited from it ( 41 ). Thus, introducing reimbursement and financial assistance can enhance the use of HBP among patients, particularly those from lower socio-economic strata.

Limitations

The study has several limitations. Firstly, the PCPs in this study were from a single primary care cluster in Singapore. To improve the transferability of the study findings, primary care clinics serving a diverse patient population were included, and PCPs with differing clinical roles and levels of experience were recruited. Future studies may be conducted in the private primary care sector in Singapore; this would uncover challenges unique to the PCPs and patients in that sector. Secondly, this study focused on the challenges faced by PCPs and did not capture the challenges patients face in performing HBP monitoring. We are currently embarking on a study to explore patients’ experiences and challenges in HBP monitoring in the primary care setting.

Conclusion

Despite international guidelines recommending HBP to inform the diagnosis, treatment and monitoring of hypertension, PCPs face significant challenges in accessing adequate and accurately measured HBP readings; they also struggle to document and interpret them consistently in their busy clinical practice. Future studies should identify effective ways to use HBP to support patient self-care and facilitate improved care for patients with hypertension by developing practical guidelines and clinical decision-support systems for HBP monitoring.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving humans were approved by SingHealth Centralised Institutional Review Board (CIRB No: 2022/2517). The studies were conducted in accordance with the local legislation and institutional requirements. The participants provided their written informed consent to participate in this study.

Author contributions

AM: Conceptualization, Formal analysis, Investigation, Writing – original draft, Writing – review & editing, Data curation. PO: Conceptualization, Formal analysis, Investigation, Writing – review & editing, Data curation. CN: Conceptualization, Formal analysis, Funding acquisition, Investigation, Methodology, Supervision, Writing – review & editing, Data curation.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. The study is supported by SingHealth AM General Fund (14/FY2021/G2/01-A167), SingHealth Polyclinics and Duke-NUS Medical School.

Acknowledgments

The authors would like to express their gratitude to the SingHealth Polyclinics and primary care physicians who participated in the study.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmed.2024.1343387/full#supplementary-material

Abbreviations

HBP, Home blood pressure; PCP, Primary care physician; IDI, In-depth interview; EMR, Electronic Medical Record.

1. Setia, S, Subramaniam, K, Tay, JC, and Teo, BW. Hypertension and blood pressure variability management practices among physicians in Singapore. Vasc Health Risk Manag . (2017) 13:275–85. doi: 10.2147/VHRM.S138694


2. Pioli, MR, Ritter, AMV, de Faria, AP, and Modolo, R. White coat syndrome and its variations: differences and clinical impact. Integr Blood Press Control . (2018) 11:73–9. doi: 10.2147/IBPC.S152761

3. Carretero, OA, and Oparil, S. Essential hypertension. Part I: definition and etiology. Circulation . (2000) 101:329–35. doi: 10.1161/01.CIR.101.3.329


4. Cepeda, M, Pham, P, and Shimbo, D. Status of ambulatory blood pressure monitoring and home blood pressure monitoring for the diagnosis and management of hypertension in the US: an up-to-date review. Hypertens Res . (2023) 46:620–9. doi: 10.1038/s41440-022-01137-2

5. Piper, MA, Evans, CV, Burda, BU, Margolis, KL, O’Connor, E, and Whitlock, EP. Diagnostic and predictive accuracy of blood pressure screening methods with consideration of rescreening intervals: a systematic review for the U.S. preventive services task force. Ann Intern Med . (2015) 162:192–204. doi: 10.7326/M14-1539

6. Ward, AM, Takahashi, O, Stevens, R, and Heneghan, C. Home measurement of blood pressure and cardiovascular disease: systematic review and meta-analysis of prospective studies. J Hypertens . (2012) 30:449–56. doi: 10.1097/HJH.0b013e32834e4aed

7. Parati, G, Omboni, S, and Bilo, G. Why is out-of-office blood pressure measurement needed? Hypertension . (2009) 54:181–7. doi: 10.1161/HYPERTENSIONAHA.108.122853

8. Arrieta, A, Woods, JR, Qiao, N, and Jay, SJ. Cost-benefit analysis of home blood pressure monitoring in hypertension diagnosis and treatment: an insurer perspective. Hypertension . (2014) 64:891–6. doi: 10.1161/HYPERTENSIONAHA.114.03780

9. Devaraj, NK, Ching, SM, Tan, SL, Ramzdan, SN, Chia, YC, and Devaraj, NK. A14814 ownership of home blood pressure devices among patients with hypertension in primary care. J Hypertens . (2018) 36:e172. doi: 10.1097/01.hjh.0000548699.65798.18

10. McManus, RJ, Wood, S, Bray, EP, Glasziou, P, Hayen, A, Heneghan, C, et al. Self-monitoring in hypertension: a web-based survey of primary care physicians. J Hum Hypertens . (2014) 28:123–7. doi: 10.1038/jhh.2013.54

11. Bancej, CM, Campbell, N, McKay, DW, Nichol, M, Walker, RL, and Kaczorowski, J. Home blood pressure monitoring among Canadian adults with hypertension: results from the 2009 survey on living with chronic diseases in Canada. Can J Cardiol . (2010) 26:e152–7. doi: 10.1016/S0828-282X(10)70382-2

12. Cuspidi, C, Meani, S, Lonati, L, Fusi, V, Magnaghi, G, Garavelli, G, et al. Prevalence of home blood pressure measurement among selected hypertensive patients: results of a multicenter survey from six hospital outpatient hypertension clinics in Italy. Blood Press . (2005) 14:251–6. doi: 10.1080/08037050500210765

13. Zahid, H, Amin, A, Amin, E, Waheed, S, Asad, A, Faheem, A, et al. Prevalence and predictors of use of home sphygmomanometers among hypertensive patients. Cureus . (2017) 9:e1155. doi: 10.7759/cureus.1155

14. Anbarasan, T, Rogers, A, Rorie, DA, Grieve, K, Robert, JW, Flynn, WV, et al. Factors influencing home blood pressure monitor ownership in a large clinical trial. J Hum Hypertens . (2022) 36:325–32. doi: 10.1038/s41371-021-00511-w

15. Baral-Grant, S, Haque, MS, Nouwen, A, Greenfield, SM, and Mcmanus, RJ. Self-monitoring of blood pressure in hypertension: a UK primary care survey. Int J Hypertens . (2012) 2012:1–4. doi: 10.1155/2012/582068

16. Setia, S, Subramaniam, K, Teo, BW, and Tay, JC. Ambulatory and home blood pressure monitoring: gaps between clinical guidelines and clinical practice in Singapore. Int J Gen Med . (2017) 10:189–97. doi: 10.2147/IJGM.S138789

17. Shimbo, D, Abdalla, M, Falzon, L, Townsend, RR, and Muntner, P. Role of ambulatory and home blood pressure monitoring in clinical practice: a narrative review. Ann Intern Med . (2015) 163:691–700. doi: 10.7326/M15-1270

18. Kronish, IM, Kent, S, Moise, N, Shimbo, D, Safford, MM, Kynerd, RE, et al. Barriers to conducting ambulatory and home blood pressure monitoring during hypertension screening in the United States. J Am Soc Hypertens . (2017) 11:573–80. doi: 10.1016/j.jash.2017.06.012

19. Gondi, S, Ellis, S, Gupta, M, Ellerbeck, E, Richter, K, Burns, J, et al. Physician perceived barriers and facilitators for self-measured blood pressure monitoring-a qualitative study. PLoS One . (2021) 16:e0255578. doi: 10.1371/journal.pone.0255578

20. Elliott, R, and Timulak, L. Descriptive and interpretive approaches to qualitative research In: J Miles and P Gilbert, editors. A Handbook of Research Methods for Clinical and Health Psychology . Oxford: Oxford Academic (2005)


21. Ajzen, I . The theory of planned behavior. Organ Behav Hum Decis Process . (1991) 50:179–211. doi: 10.1016/0749-5978(91)90020-T

22. Guest, G, Namey, E, and Chen, M. A simple method to assess and report thematic saturation in qualitative research. PLoS One . (2020) 15:e0232076–17. doi: 10.1371/journal.pone.0232076

23. Hennink, MM, Kaiser, BN, and Marconi, VC. Code saturation versus meaning saturation: how many interviews are enough? Qual Health Res . (2017) 27:591–608. doi: 10.1177/1049732316665344

24. Stergiou, GS, Palatini, P, Parati, G, O’Brien, E, Januszewicz, A, Lurbe, E, et al. 2021 European Society of Hypertension practice guidelines for office and out-of-office blood pressure measurement. J Hypertens . (2021) 39:1293–302. doi: 10.1097/HJH.0000000000002843

25. Mancusi, C, Bisogni, V, Maloberti, A, Manzi, MV, Visco, V, Biolcati, M, et al. Accuracy of home blood pressure measurement: the ACCURAPRESS study - a proposal of young Investigator Group of the Italian hypertension society (Società Italiana dell’Ipertensione arteriosa). Blood Press . (2022) 31:297–304. doi: 10.1080/08037051.2022.2137461

26. Hsu, C, Hansell, L, Ehrlich, K, Munson, S, Anderson, M, Margolis, KL, et al. Primary care physician beliefs and practices regarding blood pressure measurement: results from BP-CHECK qualitative interviews. BMC Prim Care . (2023) 24:30–10. doi: 10.1186/s12875-022-01950-1

27. Al-Rousan, T, Pesantes, MA, Dadabhai, S, Kandula, NR, Huffman, MD, Miranda, JJ, et al. Patients’ perceptions of self-management of high blood pressure in three low-and middle-income countries: findings from the BPMONITOR study. Glob Heal Epidemiol Genomics . (2020) 5:e4. doi: 10.1017/gheg.2020.5

28. Fu, SN, Dao, MC, Wong, CKH, and Cheung, BMY. Knowledge and practice of home blood pressure monitoring 6 months after the risk and assessment management programme: does health literacy matter? Postgrad Med J . (2022) 98:610–6. doi: 10.1136/postgradmedj-2020-139329

29. Muldoon, MF, Einhorn, J, Yabes, JG, Burton, D, Irizarry, T, Basse, J, et al. Randomized feasibility trial of a digital intervention for hypertension self-management. J Hum Hypertens . (2022) 36:718–25. doi: 10.1038/s41371-021-00574-9

30. Fletcher, BR, Hinton, L, Hartmann-Boyce, J, Roberts, NW, Bobrovitz, N, and McManus, RJ. Self-monitoring blood pressure in hypertension, patient and provider perspectives: a systematic review and thematic synthesis. Patient Educ Couns . (2016) 99:210–9. doi: 10.1016/j.pec.2015.08.026

31. Koopman, RJ, Wakefield, BJ, Johanning, JL, Keplinger, LE, Kruse, RL, Bomar, M, et al. Implementing home blood glucose and blood pressure telemonitoring in primary care practices for patients with diabetes: lessons learned. Telemed J E Health . (2014) 20:253–60. doi: 10.1089/tmj.2013.0188

32. Teo, SH, Chew, EAL, Ng, DWL, Tang, WE, Koh, GCH, and Teo, VHY. Implementation and use of technology-enabled blood pressure monitoring and teleconsultation in Singapore’s primary care: a qualitative evaluation using the socio-technical systems approach. BMC Prim Care . (2023) 24:71–15. doi: 10.1186/s12875-023-02014-8

33. Rodriguez, S, Hwang, K, and Wang, J. Connecting home-based self-monitoring of blood pressure data into electronic health records for hypertension care: a qualitative inquiry with primary care providers. JMIR Form Res . (2019) 3:e10388. doi: 10.2196/10388

34. Unger, T, Borghi, C, Charchar, F, Khan, NA, Poulter, NR, Prabhakaran, D, et al. 2020 International Society of Hypertension global hypertension practice guidelines. Hypertension . (2020) 75:1334–57. doi: 10.1161/HYPERTENSIONAHA.120.15026

35. NICE . Hypertension in adults: diagnosis and management NICE guideline(2019), 1–41. Available at: www.nice.org.uk/guidance/ng136

36. Parati, G, Stergiou, GS, Bilo, G, Kollias, A, Pengo, M, Ochoa, JE, et al. Home blood pressure monitoring: methodology, clinical relevance and practical application: a 2021 position paper by the working group on blood pressure monitoring and cardiovascular variability of the European Society of Hypertension. J Hypertens . (2021) 39:1742–67. doi: 10.1097/HJH.0000000000002922

37. Kario, K . Home blood pressure monitoring: current status and new developments. Am J Hypertens . (2021) 34:783–94. doi: 10.1093/ajh/hpab017

38. Lewinski, AA, Jazowski, SA, Goldstein, KM, Whitney, C, Bosworth, HB, and Zullig, LL. Intensifying approaches to address clinical inertia among cardiovascular disease risk factors: a narrative review. Patient Educ Couns . (2022) 105:3381–8. doi: 10.1016/j.pec.2022.08.005

39. Koopman, RJ, Canfield, SM, Belden, JL, Wegier, P, Shaffer, VA, Valentine, KD, et al. Home blood pressure data visualization for the management of hypertension: designing for patient and physician information needs. BMC Med Inform Decis Mak . (2020) 20. doi: 10.1186/s12911-020-01194-y

40. Smart Nation Singapore , National Steps Challenge™ & Healthy 365 App (2022). Available at: https://www.smartnation.gov.sg/initiatives/health/national-steps-challenge

41. SingHealth Group , Empowering patients to help themselves, (2023); Available at: https://www.singhealth.com.sg/rhs/news/giving-philanthropy/empowering-patients-to-help-themselves

Keywords: home blood pressure, hypertension management, primary care, family medicine, self-monitored, uncontrolled hypertension

Citation: Moosa AS, Oka P and Ng CJ (2024) Exploring primary care physicians’ challenges in using home blood pressure monitoring to manage hypertension in Singapore: a qualitative study. Front. Med . 11:1343387. doi: 10.3389/fmed.2024.1343387

Received: 23 November 2023; Accepted: 13 March 2024; Published: 25 March 2024.

Copyright © 2024 Moosa, Oka and Ng. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Aminath Shiwaza Moosa, [email protected]


  • Open access
  • Published: 19 March 2024

TacticAI: an AI assistant for football tactics

  • Zhe Wang   ORCID: orcid.org/0000-0002-0748-5376 1   na1 ,
  • Petar Veličković   ORCID: orcid.org/0000-0002-2820-4692 1   na1 ,
  • Daniel Hennes   ORCID: orcid.org/0000-0002-3646-5286 1   na1 ,
  • Nenad Tomašev   ORCID: orcid.org/0000-0003-1624-0220 1 ,
  • Laurel Prince 1 ,
  • Michael Kaisers 1 ,
  • Yoram Bachrach 1 ,
  • Romuald Elie 1 ,
  • Li Kevin Wenliang 1 ,
  • Federico Piccinini 1 ,
  • William Spearman 2 ,
  • Ian Graham 3 ,
  • Jerome Connor 1 ,
  • Yi Yang 1 ,
  • Adrià Recasens 1 ,
  • Mina Khan 1 ,
  • Nathalie Beauguerlange 1 ,
  • Pablo Sprechmann 1 ,
  • Pol Moreno 1 ,
  • Nicolas Heess   ORCID: orcid.org/0000-0001-7876-9256 1 ,
  • Michael Bowling   ORCID: orcid.org/0000-0003-2960-8418 4 ,
  • Demis Hassabis 1 &
  • Karl Tuyls   ORCID: orcid.org/0000-0001-7929-1944 5  

Nature Communications volume 15, Article number: 1906 (2024)


Subjects: Computational science, Information technology

Identifying key patterns of tactics implemented by rival teams, and developing effective responses, lies at the heart of modern football. However, doing so algorithmically remains an open research challenge. To address this unmet need, we propose TacticAI, an AI football tactics assistant developed and evaluated in close collaboration with domain experts from Liverpool FC. We focus on analysing corner kicks, as they offer coaches the most direct opportunities for interventions and improvements. TacticAI incorporates both a predictive and a generative component, allowing the coaches to effectively sample and explore alternative player setups for each corner kick routine and to select those with the highest predicted likelihood of success. We validate TacticAI on a number of relevant benchmark tasks: predicting receivers and shot attempts and recommending player position adjustments. The utility of TacticAI is validated by a qualitative study conducted with football domain experts at Liverpool FC. We show that TacticAI’s model suggestions are not only indistinguishable from real tactics, but also favoured over existing tactics 90% of the time, and that TacticAI offers an effective corner kick retrieval system. TacticAI achieves these results despite the limited availability of gold-standard data, achieving data efficiency through geometric deep learning.


Introduction

Association football, or simply football or soccer, is a widely popular and highly professionalised sport in which two teams compete to score goals against each other. As each team fields up to 11 active players at all times and play unfolds on a very large pitch (also known as a soccer field), scoring goals tends to require a significant degree of strategic team-play. Under the rules codified in the Laws of the Game 1 , this competition has nurtured an evolution of nuanced strategies and tactics, culminating in modern professional football leagues. In today's play, data-driven insights are a key driver in determining the optimal player setups for each game and developing counter-tactics to maximise the chances of success 2 .

When competing at the highest level the margins are incredibly tight, and it is increasingly important to be able to capitalise on any opportunity for creating an advantage on the pitch. To that end, top-tier clubs employ diverse teams of coaches, analysts and experts, tasked with studying and devising (counter-)tactics before each game. Several recent methods attempt to improve tactical coaching and player decision-making through artificial intelligence (AI) tools, using a wide variety of data types from videos to tracking sensors and applying diverse algorithms ranging from simple logistic regression to elaborate neural network architectures. Such methods have been employed to help predict shot events from videos 3 , forecast off-screen movement from spatio-temporal data 4 , determine whether a match is in-play or interrupted 5 , or identify player actions 6 .

The execution of agreed-upon plans by players on the pitch is highly dynamic and imperfect, depending on numerous factors including player fitness and fatigue, variations in player movement and positioning, weather, the state of the pitch, and the reaction of the opposing team. In contrast, set pieces provide an opportunity to exert more control on the outcome, as the brief interruption in play allows the players to reposition according to one of the practiced and pre-agreed patterns, and make a deliberate attempt towards the goal. Examples of such set pieces include free kicks, corner kicks, goal kicks, throw-ins, and penalties 2 .

Among set pieces, corner kicks are of particular importance, as an improvement in corner kick execution may substantially modify game outcomes, and they lend themselves to principled, tactical and detailed analysis. This is because corner kicks tend to occur frequently in football matches (with ~10 corners on average taking place in each match 7 ), they are taken from a fixed, rigid position, and they offer an immediate opportunity for scoring a goal—no other set piece simultaneously satisfies all of the above. In practice, corner kick routines are determined well ahead of each match, taking into account the strengths and weaknesses of the opposing team and their typical tactical deployment. It is for this reason that we focus on corner kick analysis in particular, and propose TacticAI, an AI football assistant for supporting the human expert with set piece analysis, and the development and improvement of corner kick routines.

TacticAI is rooted in learning efficient representations of corner kick tactics from raw, spatio-temporal player tracking data. It makes efficient use of this data by representing each corner kick situation as a graph, a natural representation for modelling relationships between players (Fig.  1 A, Table  2 ), as these relationships may be of higher importance than the absolute distances between players on the pitch 8 . Such a graph input is a natural candidate for graph machine learning models 9 , which we employ within TacticAI to obtain high-dimensional latent player representations. In the Supplementary Discussion section, we carefully contrast TacticAI against prior art in the area.

figure 1

A How corner kick situations are converted to a graph representation. Each player is treated as a node in a graph, with node, edge and graph features extracted as detailed in the main text. Then, a graph neural network operates over this graph by performing message passing; each node’s representation is updated using the messages sent to it from its neighbouring nodes. B How TacticAI processes a given corner kick. To ensure that TacticAI’s answers are robust in the face of horizontal or vertical reflections, all possible combinations of reflections are applied to the input corner, and these four views are then fed to the core TacticAI model, where they are able to interact with each other to compute the final player representations—each internal blue arrow corresponds to a single message passing layer from ( A ). Once player representations are computed, they can be used to predict the corner’s receiver, whether a shot has been taken, as well as assistive adjustments to player positions and velocities, which increase or decrease the probability of a shot being taken.

Uniquely, TacticAI takes advantage of geometric deep learning 10 to explicitly produce player representations that respect several symmetries of the football pitch (Fig.  1 B). As an illustrative example, we can usually safely assume that under a horizontal or vertical reflection of the pitch state, the game situation is equivalent. Geometric deep learning ensures that TacticAI’s player representations will be identically computed under such reflections, such that this symmetry does not have to be learnt from data. This proves to be a valuable addition, as high-quality tracking data is often limited—with only a few hundred matches played each year in every league. We provide an in-depth overview of how we employ geometric deep learning in TacticAI in the “Methods” section.
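One simple way to see why reflection symmetry need not be learnt from data is frame averaging: run a model over all four D2-reflected views of the pitch and pool the outputs. (The paper's D2 group convolutions are a richer mechanism in which the views interact inside every layer; the numpy sketch below, with a hypothetical `toy` scoring function, only pools at the output, but it yields the same invariance guarantee.)

```python
import numpy as np

def d2_views(xy):
    """The four D2 reflections of centred pitch coordinates:
    identity, horizontal flip, vertical flip, and both."""
    return [xy * np.array(s, dtype=float)
            for s in ([1, 1], [-1, 1], [1, -1], [-1, -1])]

def invariant_predict(model, xy):
    """Frame-average a per-player scoring model over the four reflected
    views, so the pooled output cannot depend on pitch orientation."""
    return np.mean([model(v) for v in d2_views(xy)], axis=0)

# A toy per-player "model" that is NOT reflection-invariant on its own:
toy = lambda xy: np.maximum(xy[:, 0], 0.0) + xy[:, 1]

players = np.random.default_rng(0).uniform(-1, 1, size=(22, 2))
flipped = players * np.array([-1.0, 1.0])  # horizontally reflected pitch
assert np.allclose(invariant_predict(toy, players),
                   invariant_predict(toy, flipped))
```

Because every reflected pitch produces the same set of four views, the averaged prediction is identical for the original and reflected situations, which is exactly the property TacticAI builds into its player representations.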

From these representations, TacticAI is then able to answer various predictive questions about the outcomes of a corner—for example, which player is most likely to make first contact with the ball, or whether a shot will take place. TacticAI can also be used as a retrieval system—for mining similar corner kick situations based on the similarity of player representations—and a generative recommendation system, suggesting adjustments to player positions and velocities to maximise or minimise the estimated shot probability. Through several experiments within a case study with domain expert coaches and analysts from Liverpool FC, the results of which we present in the next section, we obtain clear statistical evidence that TacticAI readily provides useful, realistic and accurate tactical suggestions.

To demonstrate the diverse qualities of our approach, we design TacticAI with three distinct predictive and generative components: receiver prediction, shot prediction, and tactic recommendation through guided generation, which also correspond to the benchmark tasks for quantitatively evaluating TacticAI. In addition to providing accurate quantitative insights for corner kick analysis with its predictive components, the interplay between TacticAI’s predictive and generative components allows coaches to sample alternative player setups for each routine of interest, and directly evaluate the possible outcomes of such alternatives.

We will first describe our quantitative analysis, which demonstrates that TacticAI’s predictive components are accurate at predicting corner kick receivers and shot situations on held-out test corners and that the proposed player adjustments do not strongly deviate from ground-truth situations. However, such an analysis only gives an indirect insight into how useful TacticAI would be once deployed. We tackle this question of utility head-on and conduct a comprehensive case study in collaboration with our partners at Liverpool FC—where we directly ask human expert raters to judge the utility of TacticAI’s predictions and player adjustments. The following sections expand on the specific results and analysis we have performed.

In what follows, we describe TacticAI's components at the minimal level necessary to understand our evaluation, deferring detailed descriptions to the "Methods" section. Note that all error bars reported in this research are standard deviations.

Benchmarking TacticAI

We evaluate the three components of TacticAI on a relevant benchmark dataset of corner kicks. Our dataset consists of 7176 corner kicks from the 2020 to 2021 Premier League seasons, which we randomly shuffle and split into a training (80%) and a test set (20%). As previously mentioned, TacticAI operates on graphs. Accordingly, we represent each corner kick situation as a graph, where each node corresponds to a player. The features associated with each node encode the movements (velocities and positions) and simple profiles (heights and weights) of on-pitch players at the timestamp when the corresponding corner kick was taken by the attacking kicker (see the "Methods" section); no information about ball movement is encoded. The graphs are fully connected; that is, for every pair of players, we include the edge connecting them in the graph. Each of these edges encodes a binary feature, indicating whether the two players are on opposing teams or not. For each task, we generated the relevant dataset of node/edge/graph features and corresponding labels (Tables  1 and 2 , see the "Methods" section). The components were then trained separately with their corresponding corner kick graphs. In particular, we only employ a minimal set of features to construct the corner kick graphs, encoding neither the movements of the ball nor explicit distances between players. We used a consistent training-test split for all benchmark tasks, as this made it possible to benchmark not only the individual components but also their interactions.
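The graph construction described above can be sketched directly from the text: one node per player carrying position, velocity, height and weight; all ordered player pairs as edges; and a single binary edge feature marking opposing teams. (Function and argument names below are illustrative; the paper does not publish its data schema.)

```python
import numpy as np

def corner_to_graph(pos, vel, height, weight, team):
    """Build one fully connected corner kick graph as described in the
    paper: node features = (position, velocity, height, weight); edges =
    every ordered pair of distinct players; edge feature = 1 if the two
    endpoints are on opposing teams, else 0."""
    n = len(team)
    nodes = np.column_stack([pos, vel, height, weight])       # (n, 6)
    src, dst = np.where(~np.eye(n, dtype=bool))               # all ordered pairs
    edge_feat = (team[src] != team[dst]).astype(np.float32)   # 1 = opponents
    return nodes, np.stack([src, dst]), edge_feat

rng = np.random.default_rng(1)
n = 22  # both teams on the pitch
nodes, edges, ef = corner_to_graph(
    pos=rng.uniform(0, 1, (n, 2)), vel=rng.normal(0, 0.1, (n, 2)),
    height=rng.normal(1.83, 0.06, n), weight=rng.normal(76, 6, n),
    team=np.array([0] * 11 + [1] * 11))
assert nodes.shape == (22, 6)      # 6 features per player
assert edges.shape == (2, 462)     # 22 * 21 directed edges
assert ef.sum() == 2 * 11 * 11     # 242 directed opponent edges
```

Note that, as in the paper, neither ball movement nor pairwise distances appear anywhere in the features; relational structure is carried entirely by the edges.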

Accurate receiver and shot prediction through geometric deep learning

One of TacticAI’s key predictive models forecasts the receiver out of the 22 on-pitch players. The receiver is defined as the first player touching the ball after the corner is taken. In our evaluation, all methods used the same set of features (see the “Receiver prediction” entry in Table  1 and the “Methods” section). We leveraged the receiver prediction task to benchmark several different TacticAI base models. Our best-performing model—achieving 0.782 ± 0.039 in top-3 test accuracy after 50,000 training steps—was a deep graph attention network 11 , 12 , leveraging geometric deep learning 10 through the use of D 2 group convolutions 13 . We supplement this result with a detailed ablation study, verifying that both our choice of base architecture and group convolution yielded significant improvements in the receiver prediction task (Supplementary Table  2 , see the subsection “Ablation study” in the “Methods” section). Considering that corner kick receiver prediction is a highly challenging task with many factors that are unseen by our model—including fatigue and fitness levels, and actual ball trajectory—we consider TacticAI’s top-3 accuracy to reflect a high level of predictive power, and keep the base TacticAI architecture fixed for subsequent studies. In addition to this quantitative evaluation with the evaluation dataset, we also evaluate the performance of TacticAI’s receiver prediction component in a case study with human raters. Please see the “Case study with expert raters” section for more details.

For shot prediction, we observe that reusing the base TacticAI architecture to directly predict shot events, i.e., directly modelling the probability $\mathbb{P}(\text{shot}\mid\text{corner})$, proved challenging, yielding a test F1 score of only 0.52 ± 0.03 for a GATv2 base model. Note that here we use the F1 score, the harmonic mean of precision and recall, as it is commonly used in binary classification problems over imbalanced datasets, such as shot prediction. However, given that we already have a potent receiver predictor, we decided to use its output to give us additional insight into whether or not a shot had been taken. Hence, we opted to decompose the probability of taking a shot as

$$\mathbb{P}(\text{shot}\mid\text{corner}) = \sum_{\text{receiver}} \mathbb{P}(\text{shot}\mid\text{receiver},\text{corner})\,\mathbb{P}(\text{receiver}\mid\text{corner}), \qquad (1)$$

where $\mathbb{P}(\text{receiver}\mid\text{corner})$ are the probabilities computed by TacticAI's receiver prediction system, and $\mathbb{P}(\text{shot}\mid\text{receiver},\text{corner})$ models the conditional shot probability after a specific player makes first contact with the ball. This was implemented by providing an additional global feature to indicate the receiver in the corresponding corner kick (Table  1 ), while the architecture otherwise remained the same as that of receiver prediction (Supplementary Fig.  2 , see the "Methods" section). At training time, we feed the ground-truth receiver as input to the model; at inference time, we attempt every possible receiver, weighing their contributions using the probabilities given by TacticAI's receiver predictor, as per Eq. ( 1 ). This two-phased approach yielded a final test F1 score of 0.68 ± 0.04 for shot prediction, which encodes significantly more signal than the unconditional shot predictor, especially considering the many unobservables associated with predicting shot events. Just as for receiver prediction, this performance can be further improved using geometric deep learning; a conditional GATv2 shot predictor with D2 group convolutions achieves an F1 score of 0.71 ± 0.01.
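The inference-time marginalisation over receivers is a few lines of arithmetic. The sketch below (toy probabilities, not the paper's data) shows the combination of the two predictors' outputs:

```python
import numpy as np

def shot_probability(p_receiver, p_shot_given_receiver):
    """Combine the receiver predictor and the conditional shot predictor
    as in Eq. (1): P(shot | corner) = sum over receivers r of
    P(shot | r, corner) * P(r | corner)."""
    p_receiver = np.asarray(p_receiver, dtype=float)
    assert np.isclose(p_receiver.sum(), 1.0)  # a proper distribution
    return float(np.dot(p_receiver, p_shot_given_receiver))

# Toy numbers for 4 candidate receivers (illustrative only):
p_r = [0.5, 0.3, 0.15, 0.05]           # receiver predictor output
p_s_given_r = [0.4, 0.1, 0.2, 0.05]    # conditional shot predictor output
print(shot_probability(p_r, p_s_given_r))  # 0.2625
```

Because the receiver distribution weighs each conditional term, players the model considers unlikely to reach the ball contribute little to the final shot probability.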

Moreover, we also observe that, even just through predicting the receivers, without explicitly classifying any other salient features of corners, TacticAI learned generalisable representations of the data. Specifically, team setups with similar tactical patterns tend to cluster together in TacticAI’s latent space (Fig.  2 ). However, no clear clusters are observed in the raw input space (Supplementary Fig.  1 ). This indicates that TacticAI can be leveraged as a useful corner kick retrieval system, and we will present our evaluation of this hypothesis in the “Case study with expert raters” section.
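The retrieval use-case follows directly from these latent representations: average a team's player embeddings into a team embedding, then return the nearest stored corners. A minimal sketch with random placeholder embeddings (the paper does not specify the distance metric; Euclidean is an assumption here):

```python
import numpy as np

def team_embedding(player_latents):
    """Per the paper, a team's embedding is the mean of its players'
    latent representations."""
    return np.mean(player_latents, axis=0)

def retrieve_similar(query, corpus, k=3):
    """Nearest-neighbour retrieval of stored corner embeddings by
    Euclidean distance in latent space (metric is an assumption)."""
    d = np.linalg.norm(corpus - query, axis=1)
    return np.argsort(d)[:k]

rng = np.random.default_rng(2)
corpus = rng.normal(size=(100, 16))            # 100 stored corner embeddings
query = corpus[42] + rng.normal(0, 1e-3, 16)   # near-duplicate of corner 42
assert retrieve_similar(query, corpus)[0] == 42
```

The paper's point is that neighbours in this learnt space share tactical patterns (e.g. out-swing routines) even when raw positions differ, which a distance in the raw input space fails to capture.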

figure 2

We visualise the latent representations of attacking and defending teams in 1024 corner kicks using t -SNE. A latent team embedding in one corner kick sample is the mean of the latent player representations on the same attacking ( A – C ) or defending ( D ) team. Given the reference corner kick sample ( A ), we retrieve another corner kick sample ( B ) with respect to the closest distance of their representations in the latent space. We observe that ( A ) and ( B ) are both out-swing corner kicks and share similar patterns of their attacking tactics, which are highlighted with rectangles having the same colours, although they bear differences with respect to the absolute positions and velocities of the players. All the while, the latent representation of an in-swing attack ( C ) is distant from both ( A ) and ( B ) in the latent space. The red arrows are only used to demonstrate the difference between in- and out-swing corner kicks, not the actual ball trajectories.

Lastly, it is worth emphasising that the utility of the shot predictor likely does not come from forecasting whether a shot event will occur, a challenging problem with many imponderables, but from analysing the difference in predicted shot probability across multiple corners. Indeed, in the following section, we show how TacticAI's generative tactic refinements can directly influence the predicted shot probabilities, which then correspond to highly favourable evaluations by our expert raters in the "Case study with expert raters" section.

Controlled tactic refinement using class-conditional generative models

Equipped with components that are able to potently relate corner kicks with their various outcomes (e.g. receivers and shot events), we can explore the use of TacticAI to suggest adjustments of tactics, in order to amplify or reduce the likelihood of certain outcomes.

Specifically, we aim to produce adjustments to the movements of players on one of the two teams, including their positions and velocities, which would maximise or minimise the probability of a shot event, conditioned on the initial corner setup, consisting of the movements of players on both teams and their heights and weights. Although in real-world scenarios both teams may react simultaneously to each other's movements, in this study we focus on moderate adjustments to player movements, which help to detect players that are not responding to a tactic properly. For this reason, we simplify the tactic refinement process by generating adjustments for only one team while keeping the other fixed. We train a model for this task with an auto-encoding objective: we feed the ground-truth shot outcome (a binary indicator) as an additional graph-level feature to TacticAI's model (Table  1 ), and then have it learn to reconstruct a probability distribution over the input player coordinates (Fig.  1 B, also see the "Methods" section). As a consequence, our tactic adjustment system does not depend on the previously discussed shot predictor, although we can use the shot predictor to evaluate whether the adjustments make a measurable difference in shot probability.

This autoencoder-based generative model is an individual component that separates from TacticAI’s predictive systems. All three systems share the encoder architecture (without sharing parameters), but use different decoders (see the “Methods” section). At inference time, we can instead feed in a desired shot outcome for the given corner setup, and then sample new positions and velocities for players on one team using this probability distribution. This setup, in principle, allows for flexible downstream use, as human coaches can optimise corner kick setups through generating adjustments conditioned on the specific outcomes of their interest—e.g., increasing shot probability for the attacking team, decreasing it for the defending team (Fig.  3 ) or amplifying the chance that a particular striker receives the ball.
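The conditioning interface can be made concrete with a small stub: the desired shot outcome enters as an extra feature alongside the latent player codes, and the decoder emits a distribution over adjusted positions from which candidates are sampled. (Everything below is an untrained, linear stand-in invented for illustration; TacticAI's actual decoder is a trained graph network.)

```python
import numpy as np

rng = np.random.default_rng(3)
# Random weights standing in for a trained decoder (illustrative only):
W = rng.normal(0, 0.05, (8 + 1, 2))

def decode(latent, shot_flag):
    """Map latent player codes plus the desired shot outcome to a
    mean/std over adjusted (x, y) positions. The shot flag is appended
    as a feature, mirroring the graph-level conditioning in the paper."""
    flag_col = np.full((len(latent), 1), float(shot_flag))
    cond = np.concatenate([latent, flag_col], axis=1)   # (n, 9)
    mean = cond @ W                                     # (n, 2)
    std = np.full_like(mean, 0.05)
    return mean, std

def sample_adjustments(latent, shot_flag, n_samples=5):
    """Draw candidate position adjustments conditioned on the desired
    outcome (1 = encourage a shot, 0 = suppress one)."""
    mean, std = decode(latent, shot_flag)
    return mean + std * rng.normal(size=(n_samples, *mean.shape))

latents = rng.normal(size=(11, 8))  # one team's player latents
candidates = sample_adjustments(latents, shot_flag=0)
assert candidates.shape == (5, 11, 2)  # 5 alternative defensive setups
```

Sampling several candidates per conditioning flag is what lets coaches browse multiple alternative setups for the same corner, as described above.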

figure 3

TacticAI makes it possible for human coaches to redesign corner kick tactics in ways that help maximise the probability of a positive outcome for either the attacking or the defending team by identifying key players, as well as by providing temporally coordinated tactic recommendations that take all players into consideration. As demonstrated in the present example ( A ), for a corner kick in which there was a shot attempt in reality ( B ), TacticAI can generate a tactically-adjusted setting in which the shot probability has been reduced, by adjusting the positioning of the defenders ( D ). The suggested defender positions result in reduced receiver probability for attacking players 2–5 (see bottom row), while the receiver probability of Attacker 1, who is distant from the goalpost, has been increased ( C ). The model is capable of generating multiple such scenarios. Coaches can inspect the different options visually and additionally consult TacticAI’s quantitative analysis of the presented tactics.

We first evaluate the generated adjustments quantitatively, by verifying that they are indistinguishable from the original corner kick distribution using a classifier. To do this, we synthesised a dataset consisting of 200 corner kick samples and their corresponding conditionally generated adjustments. Specifically, for corners without a shot event, we generated adjustments for the attacking team by setting the shot event feature to 1, and vice-versa for the defending team when a shot event did happen. We found that the real and generated samples were not distinguishable by an MLP classifier, with an F 1 score of 0.53 ± 0.05, indicating random chance level accuracy. This result indicates that the adjustments produced by TacticAI are likely similar enough to real corner kicks that the MLP is unable to tell them apart. Note that, in spite of this similarity, TacticAI recommends player-level adjustments that are not negligible—in the following section we will illustrate several salient examples of this. To more realistically validate the practical indistinguishability of TacticAI’s adjustments from realistic corners, we also evaluated the realism of the adjustments in a case study with human experts, which we will present in the following section.

In addition, we leveraged our TacticAI shot predictor to estimate whether the proposed adjustments were effective. We did this by analysing 100 corner kick samples in which threatening shots occurred, and then, for each sample, generated one defensive refinement through setting the shot event feature to 0. We observed that the average shot probability significantly decreased, from 0.75 ± 0.14 for ground-truth corners to 0.69 ± 0.16 for adjustments ( z  = 2.62,  p  < 0.001). This observation was consistent when testing for attacking team refinements (shot probability increased from 0.18 ± 0.16 to 0.31 ± 0.26 ( z  = −4.46,  p  < 0.001)). Moving beyond this result, we also asked human raters to assess the utility of TacticAI’s proposed adjustments within our case study, which we detail next.

Case study with expert raters

Although quantitative evaluation with well-defined benchmark datasets was critical for the technical development of TacticAI, the ultimate test of TacticAI as a football tactic assistant is its practical downstream utility being recognised by professionals in the industry. To this end, we evaluated TacticAI through a case study with our partners at Liverpool FC (LFC). Specifically, we invited a group of five football experts: three data scientists, one video analyst, and one coaching assistant. Each of them completed four tasks in the case study, which evaluated the utility of TacticAI’s components from several perspectives; these include (1) the realism of TacticAI’s generated adjustments, (2) the plausibility of TacticAI’s receiver predictions, (3) effectiveness of TacticAI’s embeddings for retrieving similar corners, and (4) usefulness of TacticAI’s recommended adjustments. We provide an overview of our study’s results here and refer the interested reader to Supplementary Figs.  3 – 5 and the  Supplementary Methods for additional details.

We first simultaneously evaluated the realism of the adjusted corner kicks generated by TacticAI, and the plausibility of its receiver predictions. Going through a collection of 50 corner kick samples, we first asked the raters to classify whether a given sample was real or generated by TacticAI, and then they were asked to identify the most likely receivers in the corner kick sample (Supplementary Fig.  3 ).

On the task of classifying real and generated samples, we first found that the raters' average F1 score was only 0.60 ± 0.04, with individual scores of 0.54, 0.64, 0.65, 0.62 and 0.56 for raters A through E respectively, indicating that the raters were, in many situations, unable to distinguish TacticAI's adjustments from real corners.

The previous evaluation focused on analysing realism detection performance across raters. We also conduct a study that analyses realism detection across samples. Specifically, we assigned ratings for each sample—assigning +1 to a sample if it was identified as real by a human rater, and 0 otherwise—and computed the average rating for each sample across the five raters. Importantly, by studying the distribution of ratings, we found that there was no significant difference between the average ratings assigned to real and generated corners ( z  = −0.34,  p  > 0.05) (Fig.  4 A). Hence, the real and generated samples were assigned statistically indistinguishable average ratings by human raters.

figure 4

In task 1, we tested the statistical difference between the real corner kick samples and the synthetic ones generated by TacticAI from two aspects: ( A.1 ) the distributions of their assigned ratings, and ( A.2 ) the corresponding histograms of the rating values. Analogously, in task 2 (receiver prediction), ( B.1 ) we track the distributions of the top-3 accuracy of receiver prediction using those samples, and ( B.2 ) the corresponding histogram of the mean rating per sample. No statistical difference in the mean was observed in either cases (( A.1 ) ( z  = −0.34,  p  > 0.05), and ( B.1 ) ( z  = 0.97,  p  > 0.05)). Additionally, we observed a statistically significant difference between the ratings of different raters on receiver prediction, with three clear clusters emerging ( C ). Specifically, Raters A and E had similar ratings ( z  = 0.66,  p  > 0.05), and Raters B and D also rated in similar ways ( z  = −1.84,  p  > 0.05), while Rater C responded differently from all other raters. This suggests a good level of variety of the human raters with respect to their perceptions of corner kicks. In task 3—identifying similar corners retrieved in terms of salient strategic setups—there were no significant differences among the distributions of the ratings by different raters ( D ), suggesting a high level of agreement on the usefulness of TacticAI’s capability of retrieving similar corners ( F 1,4  = 1.01,  p  > 0.1). Finally, in task 4, we compared the ratings of TacticAI’s strategic refinements across the human raters ( E ) and found that the raters also agreed on the general effectiveness of the refinements recommended by TacticAI ( F 1,4  = 0.45,  p  > 0.05). Note that the violin plots used in B.1 and C – E model a continuous probability distribution and hence assign nonzero probabilities to values outside of the allowed ranges. We only label y -axis ticks for the possible set of ratings.

For the task of identifying receivers, we rated TacticAI’s predictions with respect to a rater as +1 if at least one of the receivers identified by the rater appeared in TacticAI’s top-3 predictions, and 0 otherwise. The average top-3 accuracy among the human raters was 0.79 ± 0.18; specifically, 0.81 ± 0.17 for the real samples, and 0.77 ± 0.21 for the generated ones. These scores closely line up with the accuracy of TacticAI in predicting receivers for held-out test corners, validating our quantitative study. Further, after averaging the ratings for receiver prediction sample-wise, we found no statistically significant difference between the average ratings of predicting receivers over the real and generated samples ( z  = 0.97,  p  > 0.05) (Fig.  4 B). This indicates that TacticAI was equally performant in predicting the receivers of real corners and TacticAI-generated adjustments, and hence may be leveraged for this purpose even in simulated scenarios.

There is a notably high variance in the average receiver prediction rating of TacticAI. We hypothesise that this is because different raters may choose to focus on different salient features when evaluating the likely receivers (or even the number of likely receivers). We set out to validate this hypothesis by testing the pair-wise similarity of the predictions by the human raters through running a one-way analysis of variance (ANOVA), followed by a Tukey test. We found that the distributions of the five raters' predictions were significantly different ( F 1,4  = 14.46,  p  < 0.001), forming three clusters (Fig.  4 C). This result indicates that different human raters—as suggested by their various titles at LFC—may often use very different leads when suggesting plausible receivers. The fact that TacticAI manages to retain a high top-3 accuracy in such a setting suggests that it was able to capture the salient patterns of corner kick strategies, which broadly align with human raters' preferences. We will further test this hypothesis in the third task—identifying similar corners.

For the third task, we asked the human raters to judge 50 pairs of corners for their similarity. Each pair consisted of a reference corner and a retrieved corner, where the retrieved corner was chosen either as the nearest-neighbour of the reference in terms of their TacticAI latent space representations, or—as a feature-level heuristic—in terms of the cosine similarities of their raw features (Supplementary Fig.  4 ) in our corner kick dataset. We scored the raters' judgement of a pair as +1 if they considered the corners presented in the case to be usefully similar; otherwise, the pair was scored with 0. We first computed, for each rater, the recall with which they have judged a baseline- or TacticAI-retrieved pair as usefully similar—see description of Task 3 in the  Supplementary Methods . For TacticAI retrievals, the average recall across all raters was 0.59 ± 0.09, and for the baseline system, the recall was 0.36 ± 0.10. Secondly, we assessed the statistical difference between the results of the two methods by averaging the ratings for each reference–retrieval pair, finding that the average rating of TacticAI retrievals was significantly higher than the average rating of baseline method retrievals ( z  = 2.34,  p  < 0.05). These two results suggest that TacticAI significantly outperforms the feature-space baseline as a method for mining similar corners. This indicates that TacticAI is able to extract salient features from corners that are not trivial to extract from the input data alone, reinforcing it as a potent tool for discovering opposing team tactics from available data. Finally, we observed that this task exhibited a high level of inter-rater agreement for TacticAI-retrieved pairs ( F 1,4  = 1.01,  p  > 0.1) (Fig.  4 D), suggesting that human raters were largely in agreement with respect to their assessment of TacticAI's performance.

Finally, we evaluated TacticAI's player adjustment recommendations for their practical utility. Specifically, each rater was given 50 tactical refinements together with the corresponding real corner kick setups—see Supplementary Fig.  5 , and the "Case study design" section in the  Supplementary Methods . The raters were then asked to rate each refinement as saliently improving the tactics (+1), saliently making them worse (−1), or offering no salient differences (0). We calculated the average rating assigned by each of the raters (giving us a value in the range [−1, 1] for each rater). The average of these values across all five raters was 0.7 ± 0.1. Further, for 45 of the 50 situations (90%), the human raters found TacticAI's suggestion to be favourable on average (by majority voting). Both of these results indicate that TacticAI's recommendations are salient and useful to a downstream football club practitioner, and we set out to validate this with statistical tests.

We performed statistical significance testing of the observed positive ratings. First, for each of the 50 situations, we averaged its ratings across all five raters and then ran a t -test to assess whether the mean rating was significantly larger than zero. Indeed, the statistical test indicated that the tactical adjustments recommended by TacticAI were constructive overall (\(t_{49}^{\mathrm{avg}}=9.20,\ p < 0.001\)). Secondly, we verified that each of the five raters individually found TacticAI's recommendations to be constructive, running a t -test on each of their ratings individually. For all of the five raters, their average ratings were found to be above zero with statistical significance (\(t_{49}^{A}=5.84,\ p^{A} < 0.001\); \(t_{49}^{B}=7.88,\ p^{B} < 0.001\); \(t_{49}^{C}=7.00,\ p^{C} < 0.001\); \(t_{49}^{D}=6.04,\ p^{D} < 0.001\); \(t_{49}^{E}=7.30,\ p^{E} < 0.001\)). In addition, their ratings also shared a high level of inter-agreement ( F 1,4  = 0.45,  p  > 0.05) (Fig.  4 E), suggesting a level of practical usefulness that is generally recognised by human experts, even though they represent different backgrounds.
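As a concrete illustration of the test above, the following is a minimal sketch of a one-sample t-test on per-situation ratings averaged across raters. The rating values here are hypothetical, not the study's data, and the t-statistic is computed from first principles rather than with a statistics library.

```python
import math

def one_sample_t(ratings, mu0=0.0):
    """One-sample t-test statistic: is the mean rating significantly above mu0?

    Returns the t-statistic with len(ratings) - 1 degrees of freedom.
    """
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)  # sample variance
    return (mean - mu0) / math.sqrt(var / n)

# Hypothetical per-situation ratings, each averaged across five raters
# (each rater scores a refinement in {-1, 0, +1}, so averages lie in [-1, 1]).
ratings = [1.0, 0.8, 0.6, 1.0, 0.4, 0.8, 0.6, 1.0, 0.2, 0.8]
t_stat = one_sample_t(ratings)
```

With 49 degrees of freedom as in the paper, a t-statistic of 9.20 is far beyond the p < 0.001 critical value, which is what makes the overall positive rating statistically convincing.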

Taking all of these results together, we find TacticAI to possess strong components for prediction, retrieval, and tactical adjustments on corner kicks. To illustrate the kinds of salient recommendations by TacticAI, in Fig.  5 we present four examples with a high degree of inter-rater agreement.

figure 5

These examples are selected from our case study with human experts, to illustrate the breadth of tactical adjustments that TacticAI suggests to teams defending a corner. The density of the yellow circles corresponds to the number of times the corresponding change was recognised as constructive by human experts. Instead of optimising the movement of one specific player, TacticAI can recommend improvements for multiple players in one generation step, by suggesting better positions to block the opposing players, or better orientations to track them more efficiently. Some specific comments from expert raters follow. In A , according to raters, TacticAI suggests more favourable positions for several defenders, and improved tracking runs for several others—further, the goalkeeper is positioned more deeply, which is also beneficial. In B , TacticAI suggests that the defenders furthest away from the corner make improved covering runs, which was unanimously deemed useful, with several other defenders also positioned more favourably. In C , TacticAI recommends improved covering runs for a central group of defenders in the penalty box, which was unanimously considered salient by our raters. And in D , TacticAI suggests substantially better tracking runs for two central defenders, along with a better positioning for two other defenders in the goal area.

We have demonstrated an AI assistant for football tactics and provided statistical evidence of its efficacy through a comprehensive case study with expert human raters from Liverpool FC. First, TacticAI is able to accurately predict the first receiver after a corner kick is taken as well as the probability of a shot as the direct result of the corner. Second, TacticAI has been shown to produce plausible tactical variations that improve outcomes in a salient way, while being indistinguishable from real scenarios by domain experts. And finally, the system’s latent player representations are a powerful means to retrieve similar set-piece tactics, allowing coaches to analyse relevant tactics and counter-tactics that have been successful in the past.

The broader scope of strategy modelling in football has previously been addressed from various individual angles, such as pass prediction 14 , 15 , 16 , shot prediction 3 or corner kick tactical classification 7 . However, to the best of our knowledge, our work stands out by combining and evaluating predictive and generative modelling of corner kicks for tactic development. It also stands out in its application of geometric deep learning, which incorporates various symmetries of the football pitch for improved data efficiency. Our method incorporates minimal domain knowledge and does not rely on intricate feature engineering—though its factorised design naturally allows for more intricate feature engineering approaches when such features are available.

Our methodology requires the position and velocity estimates of all players at the time of execution of the corner and subsequent events. Here, we derive these from high-quality tracking and event data, with data availability from tracking providers limited to top leagues. Player tracking based on broadcast video would increase the reach and training data substantially, but would also likely result in noisier model inputs. While the attention mechanism of GATs would allow us to perform introspection of the most salient factors contributing to the model outcome, our method does not explicitly model exogenous (aleatoric) uncertainty, which would be valuable context for the football analyst.

While the empirical study of our method’s efficacy has been focused on corner kicks in association football, it readily generalises to other set pieces (such as throw-ins, which similarly benefit from similarity retrieval, pass and/or shot prediction) and other team sports with suspended play situations. The learned representations and overall framing of TacticAI also lay the ground for future research to integrate a natural language interface that enables domain-grounded conversations with the assistant, with the aim to retrieve particular situations of interest, make predictions for a given tactical variant, compare and contrast, and guide through an interactive process to derive tactical suggestions. It is thus our belief that TacticAI lays the groundwork for the next-generation AI assistant for football.

We devised TacticAI as a geometric deep learning pipeline, further expanded in this section. We process labelled spatio-temporal football data into graph representations, and train and evaluate on benchmarking tasks cast as classification or regression. These steps are presented in sequence, followed by details on the employed computational architecture.

Raw corner kick data

The raw dataset consisted of 9693 corner kicks collected from the 2020–21, 2021–22, and 2022–23 (up to January 2023) Premier League seasons. The dataset was provided by Liverpool FC and comprises four separate data sources, described below.

Our primary data source is spatio-temporal trajectory frames (tracking data), which tracked all on-pitch players and the ball, for each match, at 25 frames per second. In addition to player positions, their velocities are derived from position data through filtering. For each corner kick, we only used the frame in which the kick was taken as input information.

Secondly, we also leverage event stream data, which annotated the events or actions (e.g., passes, shots and goals) that have occurred in the corresponding tracking frames.

Thirdly, the line-up data for the corresponding games, which recorded the players’ profiles, including their heights, weights and roles, is also used.

Lastly, we have access to miscellaneous game data, which contains the game days, stadium information, and pitch length and width in meters.

Graph representation and construction

We assumed that we were provided with an input graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) with a set of nodes \(\mathcal{V}\) and edges \(\mathcal{E}\subseteq \mathcal{V}\times \mathcal{V}\). Within the context of football games, we took \(\mathcal{V}\) to be the set of 22 players currently on the pitch for both teams, and we set \(\mathcal{E}=\mathcal{V}\times \mathcal{V}\); that is, we assumed all pairs of players have the potential to interact. Further analyses, leveraging more specific choices of \(\mathcal{E}\), would be an interesting avenue for future work.

Additionally, we assume that the graph is appropriately featurised. Specifically, we provide a node feature matrix, \(\mathbf{X}\in \mathbb{R}^{|\mathcal{V}|\times k}\), an edge feature tensor, \(\mathbf{E}\in \mathbb{R}^{|\mathcal{V}|\times |\mathcal{V}|\times l}\), and a graph feature vector, \(\mathbf{g}\in \mathbb{R}^{m}\). The appropriate entries of these objects provide us with the input features for each node, edge, and graph. For example, \(\mathbf{x}_u\in \mathbb{R}^{k}\) would provide attributes of an individual player \(u\in \mathcal{V}\), such as position, height and weight, and \(\mathbf{e}_{uv}\in \mathbb{R}^{l}\) would provide the attributes of a particular pair of players \((u, v)\in \mathcal{E}\), such as their distance, and whether they belong to the same team. The graph feature vector, g , can be used to store global attributes of interest to the corner kick, such as the game time, current score, or ball position. For a simplified visualisation of how a graph neural network would process such an input, refer to Fig.  1 A.
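To make the featurisation concrete, the following numpy sketch builds X, E and g for a single corner kick. The feature dimensions and contents (positions, velocities, height, weight; pairwise distance and same-team flag) are hypothetical placeholders chosen to match the description above, not the exact feature set used by TacticAI.

```python
import numpy as np

rng = np.random.default_rng(0)
n_players, k, l, m = 22, 6, 2, 3  # hypothetical feature sizes

# Node features: e.g. (x, y, vx, vy, height, weight) per player.
X = rng.normal(size=(n_players, k))

# Edge features: e.g. pairwise distance and a same-team indicator.
pos = X[:, :2]
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
team = np.repeat([0, 1], 11)  # first eleven attack, last eleven defend
same_team = (team[:, None] == team[None, :]).astype(float)
E = np.stack([dist, same_team], axis=-1)  # shape (22, 22, l)

# Global features: e.g. game time, score difference, ball position flag.
g = np.array([0.5, 1.0, 0.0])
```

Any per-player attribute goes into a row of X, any pairwise attribute into a slice of E, and anything shared by the whole situation into g, which is exactly the split the model equations below rely on.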

To construct the input graphs, we first aligned the four data sources with respect to their game IDs and timestamps and filtered out 2517 invalid corner kicks, for which the alignment failed due to missing data, e.g., missing tracking frames or event labels. This filtering yielded 7176 valid corner kicks for training and evaluation. We summarised the exact information that was used to construct the input graphs in Table  2 . In particular, other than player heights (measured in centimeters (cm)) and weights (measured in kilograms (kg)), the players were anonymous in the model. For the cases in which the player profiles were missing, we set their heights and weights to 180 cm and 75 kg, respectively, as defaults. In total, we had 385 such occurrences out of a total of 213,246 (= 22 × 9693) during data preprocessing. We downscaled the heights and weights by a factor of 100. Moreover, for each corner kick, we zero-centred the positions of on-pitch players and normalised them onto a 10 m × 10 m pitch, and their velocities were re-scaled accordingly. For the cases in which the pitch dimensions were missing, we used a standard pitch dimension of 110 m × 63 m as default.
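The zero-centring and rescaling step described above can be sketched as follows. This is a minimal reading of the text—per-axis scaling onto a 10 m × 10 m pitch with velocities rescaled by the same factors—and the exact normalisation used in TacticAI's pipeline may differ in detail.

```python
import numpy as np

def normalise_corner(positions, velocities, pitch_len=110.0, pitch_wid=63.0):
    """Zero-centre player positions and rescale them onto a 10 m x 10 m pitch.

    Velocities are rescaled with the same per-axis factors. Pitch dimensions
    default to 110 m x 63 m when missing, as described in the text.
    """
    scale = np.array([10.0 / pitch_len, 10.0 / pitch_wid])
    centred = positions - positions.mean(axis=0, keepdims=True)
    return centred * scale, velocities * scale

# Two hypothetical players, in pitch coordinates (metres).
pos = np.array([[10.0, 5.0], [104.0, 60.0]])
vel = np.array([[1.0, 0.0], [0.0, 2.0]])
npos, nvel = normalise_corner(pos, vel)
```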

We summarised the grouping of the features in Table  1 . The actual features used in different benchmark tasks may differ, and we will describe this in more detail in the next section. To focus on modelling the high-level tactics played by the attacking and defending teams, other than a binary indicator for ball possession—which is 1 for the corner kick taker and 0 for all other players—no information about ball movement, neither positions nor velocities, was used to construct the input graphs. Additionally, we do not have access to the players' vertical movement; therefore, only information on the two-dimensional movements of each player is provided in the data. We do, however, acknowledge that such information, when available, would be interesting to consider in a corner kick outcome predictor, considering the prevalence of aerial battles in corners.

Benchmark tasks construction

TacticAI consists of three predictive and generative models, which also correspond to three benchmark tasks implemented in this study: (1) receiver prediction, (2) threatening shot prediction, and (3) guided generation of team positions and velocities (Table  1 ). The graphs of all the benchmark tasks used the same feature space of nodes and edges, differing only in the global features.

For all three tasks, our models first transform the node features to a latent node feature matrix, \(\mathbf{H}=f_{\mathcal{G}}(\mathbf{X}, \mathbf{E}, \mathbf{g})\), from which we could answer queries: either about individual players—in which case we learned a relevant classifier or regressor over the h u vectors (the rows of H )—or about the occurrence of a global event (e.g. shot taken)—in which case we classified or regressed over the aggregated player vectors, ∑ u h u . In both cases, the classifiers were trained using stochastic gradient descent over an appropriately chosen loss function, such as categorical cross-entropy for classifiers, and mean squared error for regressors.
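The two query types above (per-player versus global) can be sketched with plain numpy. Here H stands in for an encoder output, and the linear classifier weights are hypothetical untrained placeholders; the point is only the shape of the computation: node queries read individual rows of H, graph queries read the sum of all rows.

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.normal(size=(22, 4))  # latent node features from the encoder

# Node-level query (e.g. receiver): a linear classifier over each row of H.
w_node = rng.normal(size=(4,))
logits = H @ w_node                    # one logit per player
probs = np.exp(logits - logits.max())
probs /= probs.sum()                   # softmax over the 22 players

# Graph-level query (e.g. shot taken): classify the aggregated player vectors.
h_graph = H.sum(axis=0)
w_graph = rng.normal(size=(4,))
shot_logit = float(h_graph @ w_graph)
```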

For different tasks, we extracted the corresponding ground-truth labels from either the event stream data or the tracking data. Specifically, (1) We modelled receiver prediction as a node classification task and labelled the first player to touch the ball after the corner was taken as the target node. This player could be either an attacking or defensive player. (2) Shot prediction was modelled as graph classification. In particular, we considered a next-ball-touch action by the attacking team as a shot if it was a direct corner, a goal, an aerial, hit on the goalposts, a shot attempt saved by the goalkeeper, or missing target. This yielded 1736 corners labelled as a shot being taken, and 5440 corners labelled as a shot not being taken. (3) For guided generation of player position and velocities, no additional label was needed, as this model relied on a self-supervised reconstruction objective.

The entire dataset was split into training and evaluation sets with an 80:20 ratio through random sampling, and the same splits were used for all tasks.

Graph neural networks

The central model of TacticAI is the graph neural network (GNN) 9 , which computes latent representations on a graph by repeatedly combining them within each node's neighbourhood. Here we define a node's neighbourhood, \(\mathcal{N}_u\), as the set of all first-order neighbours of node u , that is, \(\mathcal{N}_u=\{v \mid (v, u)\in \mathcal{E}\}\). A single GNN layer then transforms the node features by passing messages between neighbouring nodes 17 , following the notation of related work 10 , and the implementation of the CLRS-30 benchmark baselines 18 :

where \(\psi :\mathbb{R}^{k}\times \mathbb{R}^{k}\times \mathbb{R}^{l}\times \mathbb{R}^{m}\to \mathbb{R}^{k'}\) and \(\phi :\mathbb{R}^{k}\times \mathbb{R}^{k'}\to \mathbb{R}^{k'}\) are two learnable functions (e.g. multilayer perceptrons), \(\mathbf{h}_u^{(t)}\) are the features of node u after t GNN layers, and ⨁ is any permutation-invariant aggregator, such as sum, max, or average. By definition, we set \(\mathbf{h}_u^{(0)}=\mathbf{x}_u\), and iterate Eq. ( 2 ) for T steps, where T is a hyperparameter. Then, we let \(\mathbf{H}=f_{\mathcal{G}}(\mathbf{X}, \mathbf{E}, \mathbf{g})=\mathbf{H}^{(T)}\) be the final node embeddings coming out of the GNN.
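A single message-passing layer of this form can be sketched directly from the definitions. The toy ψ and ϕ below are hypothetical stand-ins (in practice both are learnable MLPs); what the sketch shows is the structure of Eq. (2): each node aggregates ψ-messages from its neighbours with a permutation-invariant sum, then updates via ϕ.

```python
import numpy as np

def gnn_layer(H, E, g, psi, phi, neighbours):
    """One message-passing layer:
    h'_u = phi(h_u, sum_{v in N(u)} psi(h_u, h_v, e_vu, g))."""
    H_new = []
    for u in range(H.shape[0]):
        msgs = [psi(H[u], H[v], E[v, u], g) for v in neighbours[u]]
        H_new.append(phi(H[u], np.sum(msgs, axis=0)))  # sum as the aggregator
    return np.stack(H_new)

# Toy instantiation on a fully connected 4-node graph.
rng = np.random.default_rng(0)
n, k, l, m = 4, 3, 2, 1
H0 = rng.normal(size=(n, k))
E = rng.normal(size=(n, n, l))
g = rng.normal(size=(m,))
psi = lambda hu, hv, e, gl: np.tanh(np.concatenate([hu, hv, e, gl]))[:k]
phi = lambda hu, msg: hu + msg  # residual-style update
nbrs = {u: [v for v in range(n) if v != u] for u in range(n)}
H1 = gnn_layer(H0, E, g, psi, phi, nbrs)
```

Iterating this layer T times and reading off the final node matrix gives exactly the \(\mathbf{H}^{(T)}\) described above.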

It is well known that Eq. ( 2 ) is remarkably general; it can be used to express popular models such as Transformers 19 as a special case, and it has been argued that all discrete deep learning models can be expressed in this form 20 , 21 . This makes GNNs a perfect framework for benchmarking various approaches to modelling player–player interactions in the context of football.

Different choices of ψ , ϕ and ⨁ yield different architectures. In our case, we utilise a message function that factorises into an attentional mechanism, \(a:\mathbb{R}^{k}\times \mathbb{R}^{k}\times \mathbb{R}^{l}\times \mathbb{R}^{m}\to \mathbb{R}\):

yielding the graph attention network (GAT) architecture 12 . In our work, specifically, we use a two-layer multilayer perceptron for the attentional mechanism, as proposed by GATv2 11 :

where \(\mathbf{W}_1, \mathbf{W}_2\in \mathbb{R}^{k\times h}\), \(\mathbf{W}_e\in \mathbb{R}^{l\times h}\), \(\mathbf{W}_g\in \mathbb{R}^{m\times h}\) and \(\mathbf{a}\in \mathbb{R}^{h}\) are the learnable parameters of the attentional mechanism, and LeakyReLU is the leaky rectified linear activation function. This mechanism computes coefficients of interaction (a single scalar value) for each pair of connected nodes ( u ,  v ), which are then normalised across all neighbours of u using the softmax function.
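The GATv2-style scoring described above can be sketched as follows, under the assumption of a single attention head and randomly initialised (untrained) parameters: score each pair with a two-layer MLP, a·LeakyReLU(W1 h_u + W2 h_v + We e_uv + Wg g), then softmax-normalise across each node's neighbours.

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0, x, alpha * x)

def gatv2_attention(H, E, g, W1, W2, We, Wg, a):
    """Attention coefficients, normalised over each node's neighbours."""
    n = H.shape[0]
    s = np.empty((n, n))
    for u in range(n):
        for v in range(n):
            z = H[u] @ W1 + H[v] @ W2 + E[u, v] @ We + g @ Wg
            s[u, v] = leaky_relu(z) @ a  # single scalar per pair
    # Softmax across all neighbours of u (here: a fully connected graph).
    s = np.exp(s - s.max(axis=1, keepdims=True))
    return s / s.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
n, k, l, m, h = 5, 4, 2, 3, 8
H = rng.normal(size=(n, k))
E = rng.normal(size=(n, n, l))
g = rng.normal(size=(m,))
alpha = gatv2_attention(H, E, g,
                        rng.normal(size=(k, h)), rng.normal(size=(k, h)),
                        rng.normal(size=(l, h)), rng.normal(size=(m, h)),
                        rng.normal(size=(h,)))
```

Each row of `alpha` sums to one: the weights with which node u combines its neighbours' messages.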

Through early-stage experimentation, we have ascertained that GATs are capable of matching the performance of more generic choices of ψ (such as the MPNN 17 ) while being more scalable. Hence, we focus our study on the GAT model in this work. More details can be found in the "Ablation study" subsection.

Geometric deep learning

In spite of the power of Eq. ( 2 ), using it in its full generality is often prone to overfitting, given the large number of parameters contained in ψ and ϕ . This problem is exacerbated in the football analytics domain, where gold-standard data is generally very scarce—for example, in the English Premier League, only a few hundred games are played every season.

In order to tackle this issue, we can exploit the immense regularity of data arising from football games. Strategically equivalent game states are also called transpositions, and symmetries such as arriving at the same chess position through different move sequences have been exploited computationally since the 1960s 22 . Similarly, game rotations and reflections may yield equivalent strategic situations 23 . Using the blueprint of geometric deep learning (GDL) 10 , we can design specialised GNN architectures that exploit this regularity.

That is, geometric deep learning is a generic methodology for deriving mathematical constraints on neural networks, such that they will behave predictably when inputs are transformed in certain ways. In several important cases, these constraints can be resolved exactly, directly informing neural network architecture design. For a comprehensive example of point clouds under 3D rotational symmetry, see Fuchs et al. 24 .

To elucidate several aspects of the GDL framework on a high level, let us assume that there exists a group of input data transformations (symmetries), \({\mathfrak{G}}\), under which the ground-truth label remains unchanged. Specifically, if we let y ( X ,  E ,  g ) be the label given to the graph featurised with X ,  E ,  g , then for every transformation \({\mathfrak{g}}\in {\mathfrak{G}}\) , the following property holds:

This condition is also referred to as \({\mathfrak{G}}\)-invariance. Here, by \({\mathfrak{g}}(\mathbf{X})\) we denote the result of transforming X by \({\mathfrak{g}}\)—a concept also known as a group action. More generally, it is a function of the form \({\mathfrak{G}}\times \mathcal{S}\to \mathcal{S}\) for some state set \(\mathcal{S}\). Note that a single group element, \({\mathfrak{g}}\in {\mathfrak{G}}\), can easily produce different actions on different \(\mathcal{S}\)—in this case, \(\mathcal{S}\) could be \(\mathbb{R}^{|\mathcal{V}|\times k}\) ( X ), \(\mathbb{R}^{|\mathcal{V}|\times |\mathcal{V}|\times l}\) ( E ) and \(\mathbb{R}^{m}\) ( g ).

It is worth noting that GNNs may also be derived using a GDL perspective if we set the symmetry group \({\mathfrak{G}}\) to \(S_{|\mathcal{V}|}\), the permutation group of \(|\mathcal{V}|\) objects. Owing to the design of Eq. ( 2 ), its outputs will not be dependent on the exact permutation of nodes in the input graph.

Frame averaging

A simple mechanism to enforce \({\mathfrak{G}}\)-invariance, given any predictor \(f_{\mathcal{G}}(\mathbf{X}, \mathbf{E}, \mathbf{g})\), performs frame averaging across all \({\mathfrak{G}}\)-transformed inputs:

This ensures that all \({\mathfrak{G}}\) -transformed versions of a particular input (also known as that input’s orbit) will have exactly the same output, satisfying Eq. ( 5 ). A variant of this approach has also been applied in the AlphaGo architecture 25 to encode symmetries of a Go board.

In our specific implementation, we set \({\mathfrak{G}}=D_2=\{\mathrm{id}, \leftrightarrow, \updownarrow, \leftrightarrow\updownarrow\}\), the dihedral group of order four. Exploiting D 2 -invariance allows us to encode quadrant symmetries. Each element of the D 2 group encodes the presence of vertical or horizontal reflections of the input football pitch. Under these transformations, the pitch is assumed completely symmetric, and hence many predictions, such as which player receives the corner kick, or takes a shot from it, can be safely assumed unchanged. As an example of how to compute transformed features in Eq. ( 6 ), ↔( X ) horizontally reflects all positional features of players in X (e.g. the coordinates of the player), and negates the x -axis component of their velocity.
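The D2 frame-averaging recipe can be sketched in a few lines: represent each group element as a pair of per-axis sign flips acting on positions and velocities, evaluate the predictor on all four views, and average. The predictor f below is a deliberately reflection-sensitive toy function, chosen to demonstrate that averaging over the orbit makes the output identical for a pitch and its mirror image.

```python
import numpy as np

# The four D2 elements act on (x, y) coordinates and velocities by sign flips:
# identity, horizontal flip, vertical flip, and both.
D2 = [np.array([1.0, 1.0]), np.array([-1.0, 1.0]),
      np.array([1.0, -1.0]), np.array([-1.0, -1.0])]

def frame_average(f, pos, vel):
    """Average a predictor's outputs over all four reflected views."""
    return np.mean([f(pos * s, vel * s) for s in D2], axis=0)

# A toy predictor that is NOT reflection-invariant on its own.
f = lambda p, v: np.sum(p[:, 0] ** 3 + v[:, 1])

rng = np.random.default_rng(0)
pos, vel = rng.normal(size=(22, 2)), rng.normal(size=(22, 2))
flip = np.array([-1.0, 1.0])  # horizontally mirrored input
y = frame_average(f, pos, vel)
y_flip = frame_average(f, pos * flip, vel * flip)
```

Because reflecting the input merely permutes the four views, `y` and `y_flip` coincide, which is exactly the invariance property of Eq. (6).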

Group convolutions

While the frame averaging approach of Eq. ( 6 ) is a powerful way to restrict GNNs to respect input symmetries, it arguably misses an opportunity for the different \({\mathfrak{G}}\)-transformed views to interact while their computations are being performed. For small groups such as D 2 , a more fine-grained approach can be assumed, operating over a single GNN layer in Eq. ( 2 ), which we will write shortly as \(\mathbf{H}^{(t)}=g_{\mathcal{G}}(\mathbf{H}^{(t-1)}, \mathbf{E}, \mathbf{g})\). The condition that we need a symmetry-respecting GNN layer to satisfy is as follows, for all transformations \({\mathfrak{g}}\in {\mathfrak{G}}\):

that is, it does not matter whether we apply \({\mathfrak{g}}\) to the input or the output of the function \(g_{\mathcal{G}}\)—the final answer is the same. This condition is also referred to as \({\mathfrak{G}}\)-equivariance, and it has recently proved to be a potent paradigm for developing powerful GNNs over biochemical data 24 , 26 .

To satisfy D 2 -equivariance, we apply the group convolution approach 13 . Therein, views of the input are allowed to directly interact with their \({\mathfrak{G}}\)-transformed variants, in a manner very similar to grid convolutions (which are, indeed, a special case of group convolutions, setting \({\mathfrak{G}}\) to be the translation group). We use \(\mathbf{H}_{\mathfrak{g}}^{(t)}\) to denote the \({\mathfrak{g}}\)-transformed view of the latent node features at layer t . Omitting E and g inputs for brevity, and using our previously designed layer \(g_{\mathcal{G}}\) as a building block, we can perform a group convolution as follows:

Here, ∥ is the concatenation operation, joining the two node feature matrices column-wise; \({\mathfrak{g}}^{-1}\) is the inverse transformation to \({\mathfrak{g}}\) (which must exist as \({\mathfrak{G}}\) is a group); and \({\mathfrak{g}}^{-1}{\mathfrak{h}}\) is the composition of the two transformations.

Effectively, Eq. ( 8 ) implies our D 2 -equivariant GNN needs to maintain a node feature matrix \(\mathbf{H}_{\mathfrak{g}}^{(t)}\) for every \({\mathfrak{G}}\)-transformation of the current input, and these views are recombined by invoking \(g_{\mathcal{G}}\) on all pairs related together by applying a transformation \({\mathfrak{h}}\). Note that all reflections are self-inverses, hence, in D 2 , \({\mathfrak{g}}={\mathfrak{g}}^{-1}\).
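The bookkeeping implied by Eq. (8) can be sketched as follows: keep one node-feature matrix per group element, and update each view by combining it with every \(\mathfrak{g}^{-1}\mathfrak{h}\)-related view through a shared layer. The linear "layer" and the representation of D2 elements as sign pairs are toy stand-ins; since D2 is abelian and every element is its own inverse, composition reduces to component-wise multiplication of the sign pairs.

```python
import numpy as np

def group_conv_layer(views, layer):
    """One D2 group-convolution step over a dict {group element: (22, d) matrix}."""
    elems = list(views)
    compose = lambda a, b: tuple(x * y for x, y in zip(a, b))
    new = {}
    for h in elems:
        # Concatenate the h-view with each (g^{-1} h)-view, apply the shared
        # layer, and aggregate the results (here: sum over group elements g).
        new[h] = sum(layer(np.concatenate([views[h], views[compose(g, h)]],
                                          axis=-1))
                     for g in elems)
    return new

# Toy setup: D2 elements as (sx, sy) sign pairs, a linear map as g_G.
D2 = [(1, 1), (-1, 1), (1, -1), (-1, -1)]
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))
layer = lambda z: z @ W
views = {e: rng.normal(size=(22, 4)) for e in D2}
out = group_conv_layer(views, layer)
```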

It is worth noting that both the frame averaging in Eq. ( 6 ) and group convolution in Eq. ( 8 ) are similar in spirit to data augmentation. However, whereas standard data augmentation would only show one view at a time to the model, a frame averaging/group convolution architecture exhaustively generates all views and feeds them to the model all at once. Further, group convolutions allow these views to explicitly interact in a way that does not break symmetries. Here lies the key difference between the two approaches: frame averaging and group convolutions rigorously enforce the symmetries in \({\mathfrak{G}}\) , whereas data augmentation only provides implicit hints to the model about satisfying them. As a consequence of the exhaustive generation, Eqs. ( 6 ) and ( 8 ) are only feasible for small groups like D 2 . For larger groups, approaches like Steerable CNNs 27 may be employed.

Network architectures

While the three benchmark tasks we are performing have minor differences in the global features available to the model, the neural network models designed for them all have the same encoder–decoder architecture. The encoder has the same structure in all tasks, while the decoder model is tailored to produce appropriately shaped outputs for each benchmark task.

Given an input graph, TacticAI’s model first generates all relevant D 2 -transformed versions of it, by appropriately reflecting the player coordinates and velocities. We refer to the original input graph as the identity view, and the remaining three D 2 -transformed graphs as reflected views.

Once the views are prepared, we apply four group convolutional layers (Eq. ( 8 )) with a GATv2 base model (Eqs. ( 3 ) and ( 4 )) as the \(g_{\mathcal{G}}\) function. Specifically, this means that, in Eqs. ( 3 ) and ( 4 ), every instance of \(\mathbf{h}_u^{(t-1)}\) is replaced by the concatenation \((\mathbf{h}_{\mathfrak{h}}^{(t-1)})_u \parallel (\mathbf{h}_{{\mathfrak{g}}^{-1}{\mathfrak{h}}}^{(t-1)})_u\). Each GATv2 layer has eight attention heads and computes four latent features overall per player. Accordingly, once the four group convolutions are performed, we have a representation of \(\mathbf{H}\in \mathbb{R}^{4\times 22\times 4}\), where the first dimension corresponds to the four views (\(\mathbf{H}_{\mathrm{id}}, \mathbf{H}_{\leftrightarrow}, \mathbf{H}_{\updownarrow}, \mathbf{H}_{\leftrightarrow\updownarrow}\in \mathbb{R}^{22\times 4}\)), the second dimension corresponds to the players (eleven on each team), and the third corresponds to the 4-dimensional latent vector for each player node in this particular view. How this representation is used by the decoder depends on the specific downstream task, as we detail below.

For receiver prediction, which is a fully invariant function (i.e. reflections do not change the receiver), we perform simple frame averaging across all views, arriving at

and then learn a node-wise classifier over the rows of \(\mathbf{H}^{\mathrm{node}}\in \mathbb{R}^{22\times 4}\). We further decode H node into a logit vector \(\mathbf{O}\in \mathbb{R}^{22}\) with a linear layer before computing the corresponding softmax cross-entropy loss.
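The receiver decoder described above amounts to three numpy operations: average the four views, map each player's 4-dimensional row to a logit, and softmax over the 22 players. The linear-layer weights here are hypothetical untrained values; only the shapes and the flow match the description.

```python
import numpy as np

rng = np.random.default_rng(0)
# Encoder output: four D2 views x 22 players x 4 latent features.
H = rng.normal(size=(4, 22, 4))

# Receiver prediction is invariant, so average across the four views first,
H_node = H.mean(axis=0)                 # shape (22, 4)

# then decode each player's row into a logit with a linear layer,
W, b = rng.normal(size=(4,)), 0.1
O = H_node @ W + b                      # shape (22,)

# and softmax over the 22 players to obtain receiver probabilities.
p = np.exp(O - O.max())
p /= p.sum()
top3 = np.argsort(-p)[:3]               # the top-3 candidate receivers
```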

For shot prediction, which is once again fully invariant (i.e. reflections do not change the probability of a shot), we can further average the frame-averaged features across all players to get a global graph representation:

and then learn a binary classifier over \(\mathbf{h}^{\mathrm{graph}}\in \mathbb{R}^{4}\). Specifically, we decode the hidden vector into a single logit with a linear layer and compute the sigmoid binary cross-entropy loss with the corresponding label.

For guided generation (position/velocity adjustments), we generate the player positions and velocities with respect to a particular outcome of interest for the human coaches, predicted over the rows of the hidden feature matrix. For example, the model may adjust the defensive setup to decrease the shot probability by the attacking team. The model output is now equivariant rather than invariant—reflecting the pitch appropriately reflects the predicted positions and velocity vectors. As such, we cannot perform frame averaging, and take only the identity view's features, \(\mathbf{H}_{\mathrm{id}}\in \mathbb{R}^{22\times 4}\). From this latent feature matrix, we can then learn a conditional distribution from each row, which models the positions or velocities of the corresponding player. To do this, we extend the backbone encoder with a conditional variational autoencoder (CVAE 28 , 29 ). Specifically, for the u -th row of H id , h u , we first map its latent embedding to the parameters of a two-dimensional Gaussian distribution \(\mathcal{N}(\mu_u, \sigma_u)\), and then sample the coordinates and velocities from this distribution. At training time, we can efficiently propagate gradients through this sampling operation using the reparameterisation trick 28 : sample a random value \(\epsilon_u \sim \mathcal{N}(0, 1)\) for each player from the unit Gaussian distribution, and then treat μ u  +  σ u ϵ u as the sample for this player. In what follows, we omit edge features for brevity. For each corner kick sample X with the corresponding outcome o (e.g. a binary value indicating a shot event), we extend the standard VAE loss 28 , 29 to our case of outcome-conditional guided generation as

where h u is the player embedding corresponding to the u-th row of H id, and \(\mathrm{KL}\) is the Kullback–Leibler (KL) divergence. Specifically, the first term is the generation loss between the real player input x u and the reconstructed sample decoded from h u with the decoder p ϕ. Through the KL term, the distribution of the latent embedding h u is regularised towards p(h u ∣ o), which is a multivariate Gaussian in our case.
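The reparameterisation trick used for the sampling step can be sketched as follows (shapes and names are illustrative, not from the paper's code; one 2-D position or velocity is sampled per player):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterised_sample(mu, sigma):
    """Sample z ~ N(mu, sigma^2) as mu + sigma * eps with eps ~ N(0, 1),
    so that gradients can flow through mu and sigma. Here mu and sigma
    have shape (22, 2): a 2-D Gaussian per player."""
    eps = rng.standard_normal(mu.shape)   # eps_u ~ N(0, 1)
    return mu + sigma * eps               # differentiable in mu and sigma

mu = np.zeros((22, 2))
sigma = np.full((22, 2), 0.1)
z = reparameterised_sample(mu, sigma)
```

The key design point is that the randomness enters only through `eps`, which is independent of the parameters, so the sample is a deterministic, differentiable function of `mu` and `sigma`.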

A complete high-level summary of the generic encoder–decoder equivariant architecture employed by TacticAI is given in Supplementary Fig. 2. In the following section, we provide empirical evidence justifying these architectural decisions, through targeted ablation studies on our predictive benchmarks (receiver prediction and shot prediction).

Ablation study

We leveraged the receiver prediction task to evaluate various base model architectures and to directly, quantitatively assess the contribution of geometric deep learning in this context. We have already seen that the raw corner kick data can be better represented through geometric deep learning, yielding separable clusters in the latent space that may correspond to different attacking or defending tactics (Fig. 2). In addition, we hypothesise that these representations also yield better performance on the task of receiver prediction. Accordingly, we ablate several design choices on this task, as framed by the following four questions:

Does a factorised graph representation help? To assess this, we compare it against a convolutional neural network (CNN 30 ) baseline, which does not leverage a graph representation.

Does a graph structure help? To assess this, we compare against a Deep Sets 31 baseline, which only models each node in isolation without considering adjacency information—equivalently, setting each neighbourhood \(\mathcal{N}_u\) to the singleton set {u}.

Are attentional GNNs a good strategy? To assess this, we compare against a message passing neural network (MPNN 32) baseline, which uses the fully potent GNN layer from Eq. (2) instead of the GATv2.

Does accounting for symmetries help? To assess this, we compare our geometric GATv2 baseline against one which does not utilise D2 group convolutions but does utilise D2 frame averaging, and one which does not explicitly utilise any aspect of D2 symmetries at all.
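For intuition on the last question, D2 frame averaging for an invariant prediction amounts to evaluating the model on all four reflections of the pitch (identity, left–right flip, up–down flip, and both) and averaging the outputs. A minimal sketch, with a hypothetical stand-in `model` mapping (22, 2) coordinates to a scalar:

```python
import numpy as np

def d2_frame_average(model, coords):
    """Average a model's outputs over the four D2 reflections of the
    pitch, yielding a prediction that is exactly reflection-invariant."""
    flips = [(1, 1), (-1, 1), (1, -1), (-1, -1)]  # the four D2 elements
    outs = [model(coords * np.array(s)) for s in flips]
    return sum(outs) / len(outs)

# Toy check: the base model below is *not* reflection-invariant, but its
# D2-averaged version gives the same output for a reflected input.
model = lambda x: float(np.sum(x[:, 0] ** 3))
coords = np.arange(44, dtype=float).reshape(22, 2)
a = d2_frame_average(model, coords)
b = d2_frame_average(model, coords * np.array([-1, 1]))  # reflected input
```

Averaging over the full group makes the wrapped prediction invariant regardless of the underlying model, which is why it serves as a lighter-weight alternative to building D2 equivariance into every layer.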

Each of these models was trained for a fixed budget of 50,000 training steps. The test top-k receiver prediction accuracies of the trained models are provided in Supplementary Table 2. As already discussed in the “Results” section, there is a clear advantage to using a full graph structure, as well as to directly accounting for reflection symmetry. Further, the MPNN layer leads to slight overfitting compared to the GATv2, illustrating how attentional GNNs strike a good balance between expressivity and data efficiency for this task. Our analysis highlights the quantitative benefits of both graph representation learning and geometric deep learning for football analytics from tracking data. We also provide a brief ablation study for the shot prediction task in Supplementary Table 3.

Training details

We train each of TacticAI’s models in isolation, using NVIDIA Tesla P100 GPUs. To minimise overfitting, each model’s learning objective is regularised with an L2 penalty on the network parameters. During training, we use the Adam stochastic gradient descent optimiser 33 on the regularised loss.

All models, including baselines, were given an equal hyperparameter tuning budget, spanning the number of message passing steps ({1, 2, 4}), initial learning rate ({0.0001, 0.00005}), batch size ({128, 256}) and L2 regularisation coefficient ({0.01, 0.005, 0.001, 0.0001, 0}). We summarise the chosen hyperparameters of each TacticAI model in Supplementary Table 1.
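The search space above is a plain Cartesian grid; a sketch of how such a sweep can be enumerated (key names are illustrative):

```python
from itertools import product

# Hyperparameter grid as described in the text
grid = {
    "message_passing_steps": [1, 2, 4],
    "learning_rate": [1e-4, 5e-5],
    "batch_size": [128, 256],
    "l2_coefficient": [0.01, 0.005, 0.001, 0.0001, 0],
}

# One config dict per point in the Cartesian product: 3 * 2 * 2 * 5 = 60
configs = [dict(zip(grid, values)) for values in product(*grid.values())]
```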

Data availability

The data collected in the human experiments in this study have been deposited in the Zenodo database under accession code https://zenodo.org/records/10557063 , and the processed data used in the statistical analysis and to generate the relevant figures in the main text are available under the same accession code. The input and output data generated and/or analysed during the current study are protected and not available due to data privacy laws and licensing restrictions. However, contact details of the input data providers are available from the corresponding authors on reasonable request.

Code availability

All the core models described in this research were built with the Graph Neural Network processors provided by the CLRS Algorithmic Reasoning Benchmark 18 , and their source code is available at https://github.com/google-deepmind/clrs . We are unable to release our code for this work as it was developed in a proprietary context; however, the corresponding authors are open to answer specific questions concerning re-implementations on request. For general data analysis, we used the following freely available packages: numpy v1.25.2 , pandas v1.5.3 , matplotlib v3.6.1 , seaborn v0.12.2 and scipy v1.9.3 . Specifically, the code of the statistical analysis conducted in this study is available at https://zenodo.org/records/10557063 .

The International Football Association Board (IFAB). Laws of the Game (The International Football Association Board, 2023).

Tuyls, K. et al. Game plan: what AI can do for football, and what football can do for AI. J. Artif. Intell. Res. 71 , 41–88 (2021).

Goka, R., Moroto, Y., Maeda, K., Ogawa, T. & Haseyama, M. Prediction of shooting events in soccer videos using complete bipartite graphs and players’ spatial–temporal relations. Sensors 23 , 4506 (2023).

Omidshafiei, S. et al. Multiagent off-screen behavior prediction in football. Sci. Rep. 12 , 8638 (2022).

Lang, S., Wild, R., Isenko, A. & Link, D. Predicting the in-game status in soccer with machine learning using spatiotemporal player tracking data. Sci. Rep. 12 , 16291 (2022).

Baccouche, M., Mamalet, F., Wolf, C., Garcia, C. & Baskurt, A. Action classification in soccer videos with long short-term memory recurrent neural networks. In International Conference on Artificial Neural Networks (eds Diamantaras, K., Duch, W. & Iliadis, L. S.) pages 154–159 (Springer, 2010).

Shaw, L. & Gopaladesikan, S. Routine inspection: a playbook for corner kicks. In Machine Learning and Data Mining for Sports Analytics: 7th International Workshop, MLSA 2020, Co-located with ECML/PKDD 2020, Proceedings, Ghent, Belgium, September 14–18, 2020, Vol. 7, 3–16 (Springer, 2020).

Araújo, D. & Davids, K. Team synergies in sport: theory and measures. Front. Psychol. 7 , 1449 (2016).

Veličković, P. Everything is connected: graph neural networks. Curr. Opin. Struct. Biol. 79 , 102538 (2023).

Bronstein, M. M., Bruna, J., Cohen, T. & Veličković, P. Geometric deep learning: grids, groups, graphs, geodesics, and gauges. arXiv preprint arXiv:2104.13478 (2021).

Brody, S., Alon, U. & Yahav, E. How attentive are graph attention networks? In International Conference on Learning Representations (ICLR, 2022). https://openreview.net/forum?id=F72ximsx7C1 .

Veličković, P. et al. Graph attention networks. In International Conference on Learning Representations (ICLR, 2018). https://openreview.net/forum?id=rJXMpikCZ .

Cohen, T. & Welling, M. Group equivariant convolutional networks. In International Conference on Machine Learning (eds Balcan, M. F. & Weinberger, K. Q.) 2990–2999 (PMLR, 2016).

Honda, Y., Kawakami, R., Yoshihashi, R., Kato, K. & Naemura, T. Pass receiver prediction in soccer using video and players’ trajectories. In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) 3502–3511 (2022). https://ieeexplore.ieee.org/document/9857310 .

Hubáček, O., Sourek, G. & Železný, F. Deep learning from spatial relations for soccer pass prediction. In MLSA@PKDD/ECML (eds Brefeld, U., Davis, J., Van Haaren, J. & Zimmermann, A.) Vol. 11330, (Lecture Notes in Computer Science, Springer, Cham, 2018).

Sanyal, S. Who will receive the ball? Predicting pass recipient in soccer videos. J Visual Commun. Image Represent. 78 , 103190 (2021).

Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O. & Dahl, G. E. Neural message passing for quantum chemistry. In International Conference on Machine Learning (Precup, D. & Teh, Y. W.) 1263–1272 (PMLR, 2017).

Veličković, P. et al. The CLRS algorithmic reasoning benchmark. In International Conference on Machine Learning (eds Chaudhuri, K. et al.) 22084–22102 (PMLR, 2022).

Vaswani, A. et al. Attention is all you need. In Advances in Neural Information Processing Systems (eds Guyon, I. et al.) Vol. 30 (Curran Associates, Inc., 2017).

Veličković, P. Message passing all the way up. In ICLR 2022 Workshop on Geometrical and Topological Representation Learning (GTRL, 2022). https://openreview.net/forum?id=Bc8GiEZkTe5 .

Baranwal, A., Kimon, F. & Aukosh, J. Optimality of message-passing architectures for sparse graphs. In Thirty-seventh Conference on Neural Information Processing Systems (2023). https://papers.nips.cc/paper_files/paper/2023/hash/7e991aa4cd2fdf0014fba2f000f542d0-Abstract-Conference.html .

Greenblatt, R. D., Eastlake III, D. E. & Crocker, S. D. The Greenblatt chess program. In Proc. Fall Joint Computer Conference , 14–16 , 801–810 (Association for Computing Machinery, 1967). https://dl.acm.org/doi/10.1145/1465611.1465715 .

Schijf, M., Allis, L. V. & Uiterwijk, J. W. Proof-number search and transpositions. ICGA J. 17 , 63–74 (1994).

Fuchs, F., Worrall, D., Fischer, V. & Welling, M. SE(3)-transformers: 3D roto-translation equivariant attention networks. Adv. Neural Inf. Process. Syst. 33 , 1970–1981 (2020).

Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529 , 484–489 (2016).

Satorras, V. G., Hoogeboom, E. & Welling, M. E ( n ) equivariant graph neural networks. In International Conference on Machine Learning (eds Meila, M. & Zhang, T.) 9323–9332 (PMLR, 2021).

Cohen, T. S. & Welling, M. Steerable CNNs. In International Conference on Learning Representations (ICLR, 2017). https://openreview.net/forum?id=rJQKYt5ll .

Kingma, D. P. & Welling, M. Auto-encoding variational Bayes. In 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, April 14–16, 2014, Conference Track Proceedings (ICLR, 2014). https://openreview.net/forum?id=33X9fd2-9FyZd .

Sohn, K., Lee, H. & Yan, X. Learning structured output representation using deep conditional generative models. In Advances in Neural Information Processing Systems (eds Cortes, C, Lawrence, N., Lee, D., Sugiyama, M. & Garnett, R.) Vol. 28 (Curran Associates, Inc., 2015).

Fernández, J. & Bornn, L. Soccermap: a deep learning architecture for visually-interpretable analysis in soccer. In Machine Learning and Knowledge Discovery in Databases. Applied Data Science and Demo Track: European Conference, ECML PKDD 2020, Ghent, Belgium, September 14–18, 2020, Proceedings, Part V (eds Dong, Y., Ifrim, G., Mladenić, D., Saunders, C. & Van Hoecke, S.) 491–506 (Springer, 2021).

Zaheer, M. et al. Deep sets. In Advances in Neural Information Processing Systems Vol. 30 (eds Guyon, I., et al.) (Curran Associates, Inc., 2017).

Gilmer, J., Schoenholz, S. S., Riley, P. F., Vinyals, O. & Dahl, G. E. Neural message passing for quantum chemistry. In Proc. 34th International Conference on Machine Learning , Vol. 70 of Proceedings of Machine Learning Research, 6–11 Aug 2017 (eds Precup, D. & Whye Teh, Y) 1263–1272 (PMLR, 2017).

Kingma, D. P. & Ba, J. Adam: a method for stochastic optimization. In ICLR (Poster) (eds Bengio, Y. & LeCun, Y.) (International Conference on Learning Representations (ICLR), 2015). https://openreview.net/forum?id=8gmWwjFyLj .

Acknowledgements

We gratefully acknowledge the support of James French, Timothy Waskett, Hans Leitert and Benjamin Hervey for their extensive efforts in analysing TacticAI’s outputs. Further, we are thankful to Kevin McKee, Sherjil Ozair and Beatrice Bevilacqua for useful technical discussions, and Marc Lanctôt and Satinder Singh for reviewing the paper prior to submission.

Author information

These authors contributed equally: Zhe Wang, Petar Veličković, Daniel Hennes.

Authors and Affiliations

Google DeepMind, 6-8 Handyside Street, London, N1C 4UZ, UK

Zhe Wang, Petar Veličković, Daniel Hennes, Nenad Tomašev, Laurel Prince, Michael Kaisers, Yoram Bachrach, Romuald Elie, Li Kevin Wenliang, Federico Piccinini, Jerome Connor, Yi Yang, Adrià Recasens, Mina Khan, Nathalie Beauguerlange, Pablo Sprechmann, Pol Moreno, Nicolas Heess & Demis Hassabis

Liverpool FC, AXA Training Centre, Simonswood Lane, Kirkby, Liverpool, L33 5XB, UK

William Spearman

University of Alberta, Amii, Edmonton, AB, T6G 2E8, Canada

Michael Bowling

Contributions

Z.W., D. Hennes, L.P. and K.T. coordinated and organised the research effort leading to this paper. P.V. and Z.W. developed the core TacticAI models. Z.W., W.S. and I.G. prepared the Premier League corner kick dataset used for training and evaluating these models. P.V., Z.W., D. Hennes and N.T. designed the case study with human experts and Z.W. and P.V. performed the qualitative evaluation and statistical analysis of its outcomes. Z.W., P.V., D. Hennes, N.T., L.P., M. Kaisers, Y.B., R.E., L.K.W., F.P., W.S., I.G., N.H., M.B., D. Hassabis and K.T. contributed to writing the paper and providing feedback on the final manuscript. J.C., Y.Y., A.R., M. Khan, N.B., P.S. and P.M. contributed valuable technical and implementation discussions throughout the work’s development.

Corresponding authors

Correspondence to Zhe Wang , Petar Veličković or Karl Tuyls .

Ethics declarations

Competing interests.

The authors declare the following competing interests: TacticAI was developed during the course of the Authors’ employment at Google DeepMind and Liverpool Football Club, as applicable to each Author.

Peer review

Peer review information.

Nature Communications thanks Rui Luo and the other anonymous reviewer(s) for their contribution to the peer review of this work. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Wang, Z., Veličković, P., Hennes, D. et al. TacticAI: an AI assistant for football tactics. Nat Commun 15 , 1906 (2024). https://doi.org/10.1038/s41467-024-45965-x

Received : 13 October 2023

Accepted : 08 February 2024

Published : 19 March 2024

DOI : https://doi.org/10.1038/s41467-024-45965-x

Can J Hosp Pharm, v.68(3); May–Jun 2015

Qualitative Research: Data Collection, Analysis, and Management

Introduction.

In an earlier paper, 1 we presented an introduction to using qualitative research methods in pharmacy practice. In this article, we review some principles of the collection, analysis, and management of qualitative data to help pharmacists interested in doing research in their practice to continue their learning in this area. Qualitative research can help researchers to access the thoughts and feelings of research participants, which can enable development of an understanding of the meaning that people ascribe to their experiences. Whereas quantitative research methods can be used to determine how many people undertake particular behaviours, qualitative methods can help researchers to understand how and why such behaviours take place. Within the context of pharmacy practice research, qualitative approaches have been used to examine a diverse array of topics, including the perceptions of key stakeholders regarding prescribing by pharmacists and the postgraduation employment experiences of young pharmacists (see “Further Reading” section at the end of this article).

In the previous paper, 1 we outlined 3 commonly used methodologies: ethnography 2 , grounded theory 3 , and phenomenology. 4 Briefly, ethnography involves researchers using direct observation to study participants in their “real life” environment, sometimes over extended periods. Grounded theory and its later modified versions (e.g., Strauss and Corbin 5 ) use face-to-face interviews and interactions such as focus groups to explore a particular research phenomenon and may help in clarifying a less-well-understood problem, situation, or context. Phenomenology shares some features with grounded theory (such as an exploration of participants’ behaviour) and uses similar techniques to collect data, but it focuses on understanding how human beings experience their world. It gives researchers the opportunity to put themselves in another person’s shoes and to understand the subjective experiences of participants. 6 Some researchers use qualitative methodologies but adopt a different standpoint, and an example of this appears in the work of Thurston and others, 7 discussed later in this paper.

Qualitative work requires reflection on the part of researchers, both before and during the research process, as a way of providing context and understanding for readers. When being reflexive, researchers should not try to simply ignore or avoid their own biases (as this would likely be impossible); instead, reflexivity requires researchers to reflect upon and clearly articulate their position and subjectivities (world view, perspectives, biases), so that readers can better understand the filters through which questions were asked, data were gathered and analyzed, and findings were reported. From this perspective, bias and subjectivity are not inherently negative but they are unavoidable; as a result, it is best that they be articulated up-front in a manner that is clear and coherent for readers.

THE PARTICIPANT’S VIEWPOINT

What qualitative study seeks to convey is why people have thoughts and feelings that might affect the way they behave. Such study may occur in any number of contexts, but here, we focus on pharmacy practice and the way people behave with regard to medicines use (e.g., to understand patients’ reasons for nonadherence with medication therapy or to explore physicians’ resistance to pharmacists’ clinical suggestions). As we suggested in our earlier article, 1 an important point about qualitative research is that there is no attempt to generalize the findings to a wider population. Qualitative research is used to gain insights into people’s feelings and thoughts, which may provide the basis for a future stand-alone qualitative study or may help researchers to map out survey instruments for use in a quantitative study. It is also possible to use different types of research in the same study, an approach known as “mixed methods” research, and further reading on this topic may be found at the end of this paper.

The role of the researcher in qualitative research is to attempt to access the thoughts and feelings of study participants. This is not an easy task, as it involves asking people to talk about things that may be very personal to them. Sometimes the experiences being explored are fresh in the participant’s mind, whereas on other occasions reliving past experiences may be difficult. However the data are being collected, a primary responsibility of the researcher is to safeguard participants and their data. Mechanisms for such safeguarding must be clearly articulated to participants and must be approved by a relevant research ethics review board before the research begins. Researchers and practitioners new to qualitative research should seek advice from an experienced qualitative researcher before embarking on their project.

DATA COLLECTION

Whatever philosophical standpoint the researcher is taking and whatever the data collection method (e.g., focus group, one-to-one interviews), the process will involve the generation of large amounts of data. In addition to the variety of study methodologies available, there are also different ways of making a record of what is said and done during an interview or focus group, such as taking handwritten notes or video-recording. If the researcher is audio- or video-recording data collection, then the recordings must be transcribed verbatim before data analysis can begin. As a rough guide, it can take an experienced researcher/transcriber 8 hours to transcribe one 45-minute audio-recorded interview, a process that will generate 20–30 pages of written dialogue.

Many researchers will also maintain a folder of “field notes” to complement audio-taped interviews. Field notes allow the researcher to maintain and comment upon impressions, environmental contexts, behaviours, and nonverbal cues that may not be adequately captured through the audio-recording; they are typically handwritten in a small notebook at the same time the interview takes place. Field notes can provide important context to the interpretation of audio-taped data and can help remind the researcher of situational factors that may be important during data analysis. Such notes need not be formal, but they should be maintained and secured in a similar manner to audio tapes and transcripts, as they contain sensitive information and are relevant to the research. For more information about collecting qualitative data, please see the “Further Reading” section at the end of this paper.

DATA ANALYSIS AND MANAGEMENT

If, as suggested earlier, doing qualitative research is about putting oneself in another person’s shoes and seeing the world from that person’s perspective, the most important part of data analysis and management is to be true to the participants. It is their voices that the researcher is trying to hear, so that they can be interpreted and reported on for others to read and learn from. To illustrate this point, consider the anonymized transcript excerpt presented in Appendix 1 , which is taken from a research interview conducted by one of the authors (J.S.). We refer to this excerpt throughout the remainder of this paper to illustrate how data can be managed, analyzed, and presented.

Interpretation of Data

Interpretation of the data will depend on the theoretical standpoint taken by researchers. For example, the title of the research report by Thurston and others, 7 “Discordant indigenous and provider frames explain challenges in improving access to arthritis care: a qualitative study using constructivist grounded theory,” indicates at least 2 theoretical standpoints. The first is the culture of the indigenous population of Canada and the place of this population in society, and the second is the social constructivist theory used in the constructivist grounded theory method. With regard to the first standpoint, it can be surmised that, to have decided to conduct the research, the researchers must have felt that there was anecdotal evidence of differences in access to arthritis care for patients from indigenous and non-indigenous backgrounds. With regard to the second standpoint, it can be surmised that the researchers used social constructivist theory because it assumes that behaviour is socially constructed; in other words, people do things because of the expectations of those in their personal world or in the wider society in which they live. (Please see the “Further Reading” section for resources providing more information about social constructivist theory and reflexivity.) Thus, these 2 standpoints (and there may have been others relevant to the research of Thurston and others 7 ) will have affected the way in which these researchers interpreted the experiences of the indigenous population participants and those providing their care. Another standpoint is feminist standpoint theory which, among other things, focuses on marginalized groups in society. Such theories are helpful to researchers, as they enable us to think about things from a different perspective. Being aware of the standpoints you are taking in your own research is one of the foundations of qualitative work. 
Without such awareness, it is easy to slip into interpreting other people’s narratives from your own viewpoint, rather than that of the participants.

To analyze the example in Appendix 1 , we will adopt a phenomenological approach because we want to understand how the participant experienced the illness and we want to try to see the experience from that person’s perspective. It is important for the researcher to reflect upon and articulate his or her starting point for such analysis; for example, in the example, the coder could reflect upon her own experience as a female of a majority ethnocultural group who has lived within middle class and upper middle class settings. This personal history therefore forms the filter through which the data will be examined. This filter does not diminish the quality or significance of the analysis, since every researcher has his or her own filters; however, by explicitly stating and acknowledging what these filters are, the researcher makes it easer for readers to contextualize the work.

Transcribing and Checking

For the purposes of this paper it is assumed that interviews or focus groups have been audio-recorded. As mentioned above, transcribing is an arduous process, even for the most experienced transcribers, but it must be done to convert the spoken word to the written word to facilitate analysis. For anyone new to conducting qualitative research, it is beneficial to transcribe at least one interview and one focus group. It is only by doing this that researchers realize how difficult the task is, and this realization affects their expectations when asking others to transcribe. If the research project has sufficient funding, then a professional transcriber can be hired to do the work. If this is the case, then it is a good idea to sit down with the transcriber, if possible, and talk through the research and what the participants were talking about. This background knowledge for the transcriber is especially important in research in which people are using jargon or medical terms (as in pharmacy practice). Involving your transcriber in this way makes the work both easier and more rewarding, as he or she will feel part of the team. Transcription editing software is also available, but it is expensive. For example, ELAN (more formally known as EUDICO Linguistic Annotator, developed at the Max Planck Institute for Psycholinguistics) 8 is a tool that can help keep data organized by linking media and data files (particularly valuable if, for example, video-taping of interviews is complemented by transcriptions). It can also be helpful in searching complex data sets. Products such as ELAN do not actually automatically transcribe interviews or complete analyses, and they do require some time and effort to learn; nonetheless, for some research applications, it may be valuable to consider such software tools.

All audio recordings should be transcribed verbatim, regardless of how intelligible the transcript may be when it is read back. Lines of text should be numbered. Once the transcription is complete, the researcher should read it while listening to the recording and do the following: correct any spelling or other errors; anonymize the transcript so that the participant cannot be identified from anything that is said (e.g., names, places, significant events); insert notations for pauses, laughter, looks of discomfort; insert any punctuation, such as commas and full stops (periods) (see Appendix 1 for examples of inserted punctuation), and include any other contextual information that might have affected the participant (e.g., temperature or comfort of the room).

Dealing with the transcription of a focus group is slightly more difficult, as multiple voices are involved. One way of transcribing such data is to “tag” each voice (e.g., Voice A, Voice B). In addition, the focus group will usually have 2 facilitators, whose respective roles will help in making sense of the data. While one facilitator guides participants through the topic, the other can make notes about context and group dynamics. More information about group dynamics and focus groups can be found in resources listed in the “Further Reading” section.

Reading between the Lines

During the process outlined above, the researcher can begin to get a feel for the participant’s experience of the phenomenon in question and can start to think about things that could be pursued in subsequent interviews or focus groups (if appropriate). In this way, one participant’s narrative informs the next, and the researcher can continue to interview until nothing new is being heard or, as the textbooks put it, “saturation is reached”. While continuing with the processes of coding and theming (described in the next 2 sections), it is important to consider not just what the person is saying but also what they are not saying. For example, is a lengthy pause an indication that the participant is finding the subject difficult, or is the person simply deciding what to say? The aim of the whole process from data collection to presentation is to tell the participants’ stories using exemplars from their own narratives, thus grounding the research findings in the participants’ lived experiences.

Smith 9 suggested a qualitative research method known as interpretative phenomenological analysis, which has 2 basic tenets: first, that it is rooted in phenomenology, attempting to understand the meaning that individuals ascribe to their lived experiences, and second, that the researcher must attempt to interpret this meaning in the context of the research. That the researcher has some knowledge and expertise in the subject of the research means that he or she can have considerable scope in interpreting the participant’s experiences. Larkin and others 10 discussed the importance of not just providing a description of what participants say. Rather, interpretative phenomenological analysis is about getting underneath what a person is saying to try to truly understand the world from his or her perspective.

Once all of the research interviews have been transcribed and checked, it is time to begin coding. Field notes compiled during an interview can be a useful complementary source of information to facilitate this process, as the gap in time between an interview, transcribing, and coding can result in memory bias regarding nonverbal or environmental context issues that may affect interpretation of data.

Coding refers to the identification of topics, issues, similarities, and differences that are revealed through the participants’ narratives and interpreted by the researcher. This process enables the researcher to begin to understand the world from each participant’s perspective. Coding can be done by hand on a hard copy of the transcript, by making notes in the margin or by highlighting and naming sections of text. More commonly, researchers use qualitative research software (e.g., NVivo, QSR International Pty Ltd; www.qsrinternational.com/products_nvivo.aspx ) to help manage their transcriptions. It is advised that researchers undertake a formal course in the use of such software or seek supervision from a researcher experienced in these tools.
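Whether done by hand or in software, the underlying record a coder builds is simple: which stretch of which transcript received which code. A minimal sketch of that structure in Python follows; the transcript name, line ranges, and code labels are taken from the discussion of Appendix 1, but the data layout itself is a hypothetical illustration, not the format of NVivo or any other package.

```python
# Hypothetical sketch of hand-coded transcript data: each record notes the
# transcript, the line range, the excerpt, and the code the researcher
# assigned. Grouping records by code lets all supporting excerpts for a
# code be reviewed together.
from collections import defaultdict

coded_segments = [
    {"transcript": "participant_01", "lines": (8, 11),
     "excerpt": "when did somebody tell you then that you have schizophrenia",
     "code": "diagnosis of mental health condition"},
    {"transcript": "participant_01", "lines": (19, 19),
     "excerpt": "nobody asked me any questions about my life",
     "code": "health care professionals' consultation skills"},
]

by_code = defaultdict(list)
for seg in coded_segments:
    by_code[seg["code"]].append(seg["excerpt"])

for code, excerpts in by_code.items():
    print(f"{code}: {len(excerpts)} excerpt(s)")
```

Even this toy structure makes the later steps concrete: theming is essentially regrouping these records, and presenting findings means pulling the excerpts back out under each code.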

Returning to Appendix 1 and reading from lines 8–11, a code for this section might be “diagnosis of mental health condition”, but this would just be a description of what the participant is talking about at that point. If we read a little more deeply, we can ask ourselves how the participant might have come to feel that the doctor assumed he or she was aware of the diagnosis or indeed that they had only just been told the diagnosis. There are a number of pauses in the narrative that might suggest the participant is finding it difficult to recall that experience. Later in the text, the participant says “nobody asked me any questions about my life” (line 19). This could be coded simply as “health care professionals’ consultation skills”, but that would not reflect how the participant must have felt never to be asked anything about his or her personal life, about the participant as a human being. At the end of this excerpt, the participant just trails off, recalling that no-one showed any interest, which makes for very moving reading. For practitioners in pharmacy, it might also be pertinent to explore the participant’s experience of akathisia and why this was left untreated for 20 years.

One of the questions that arises about qualitative research relates to the reliability of the interpretation and representation of the participants’ narratives. There are no statistical tests that can be used to check reliability and validity as there are in quantitative research. However, work by Lincoln and Guba 11 suggests that there are other ways to “establish confidence in the ‘truth’ of the findings” (p. 218). They call this confidence “trustworthiness” and suggest that there are 4 criteria of trustworthiness: credibility (confidence in the “truth” of the findings), transferability (showing that the findings have applicability in other contexts), dependability (showing that the findings are consistent and could be repeated), and confirmability (the extent to which the findings of a study are shaped by the respondents and not researcher bias, motivation, or interest).

One way of establishing the “credibility” of the coding is to ask another researcher to code the same transcript and then to discuss any similarities and differences in the 2 resulting sets of codes. This simple act can result in revisions to the codes and can help to clarify and confirm the research findings.
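The comparison of 2 coders’ work described above is usually discussed qualitatively, but some researchers also quantify the overlap; Cohen’s kappa is one common agreement statistic. The sketch below is an illustration of that optional, supplementary check, assuming (hypothetically) that each coder assigned exactly one code per segment; the code labels are invented.

```python
# Illustrative sketch: Cohen's kappa for two coders who each assigned one
# code per transcript segment. Kappa corrects raw agreement for the
# agreement expected by chance given each coder's label frequencies.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two equal-length lists of category labels."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement from each coder's marginal label frequencies.
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["diagnosis", "consultation", "side effects", "diagnosis"]
b = ["diagnosis", "consultation", "diagnosis", "diagnosis"]
print(round(cohens_kappa(a, b), 2))  # prints 0.56
```

A low kappa is not a “failed test” in the quantitative sense; as the text notes, the real value of the exercise is the discussion of similarities and differences that follows, which refines the codes themselves.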

Theming refers to the drawing together of codes from one or more transcripts to present the findings of qualitative research in a coherent and meaningful way. For example, there may be examples across participants’ narratives of the way in which they were treated in hospital, such as “not being listened to” or “lack of interest in personal experiences” (see Appendix 1 ). These may be drawn together as a theme running through the narratives that could be named “the patient’s experience of hospital care”. The importance of going through this process is that at its conclusion, it will be possible to present the data from the interviews using quotations from the individual transcripts to illustrate the source of the researchers’ interpretations. Thus, when the findings are organized for presentation, each theme can become the heading of a section in the report or presentation. Underneath each theme will be the codes, examples from the transcripts, and the researcher’s own interpretation of what the themes mean. Implications for real life (e.g., the treatment of people with chronic mental health problems) should also be given.
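The theme–code–quotation hierarchy described above maps directly onto the structure of the final report. A small sketch of that organization, using the hypothetical theme and codes named in the paragraph and quotations from Appendix 1:

```python
# Hypothetical sketch of theming: related codes are drawn together under a
# named theme, with illustrative quotations attached, so each theme can
# become a section heading in the report.
themes = {
    "the patient's experience of hospital care": {
        "not being listened to": [
            "nobody asked me any questions about my life",
        ],
        "lack of interest in personal experiences": [
            "nobody actually sat down and had a talk and showed some "
            "interest in you as a person",
        ],
    },
}

# Outline the report: each theme is a heading, each code a subheading,
# and each quotation the evidence grounding the interpretation.
for theme, codes in themes.items():
    print(theme.upper())
    for code, quotes in codes.items():
        print(f"  Code: {code}")
        for quote in quotes:
            print(f'    "{quote}"')
```

Walking this structure top-down produces the skeleton of the findings section, with the researcher’s interpretation written around each quotation.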

DATA SYNTHESIS

In this final section of this paper, we describe some ways of drawing together or “synthesizing” research findings to represent, as faithfully as possible, the meaning that participants ascribe to their life experiences. This synthesis is the aim of the final stage of qualitative research. For most readers, the synthesis of data presented by the researcher is of crucial significance—this is usually where “the story” of the participants can be distilled, summarized, and told in a manner that is both respectful to those participants and meaningful to readers. There are a number of ways in which researchers can synthesize and present their findings, but any conclusions drawn by the researchers must be supported by direct quotations from the participants. In this way, it is made clear to the reader that the themes under discussion have emerged from the participants’ interviews and not the mind of the researcher. The work of Latif and others 12 gives an example of how qualitative research findings might be presented.

Planning and Writing the Report

As has been suggested above, if researchers code and theme their material appropriately, they will naturally find the headings for sections of their report. Qualitative researchers tend to report “findings” rather than “results”, as the latter term typically implies that the data have come from a quantitative source. The final presentation of the research will usually be in the form of a report or a paper and so should follow accepted academic guidelines. In particular, the article should begin with an introduction, including a literature review and rationale for the research. There should be a section on the chosen methodology and a brief discussion about why qualitative methodology was most appropriate for the study question and why one particular methodology (e.g., interpretative phenomenological analysis rather than grounded theory) was selected to guide the research. The method itself should then be described, including ethics approval, choice of participants, mode of recruitment, and method of data collection (e.g., semistructured interviews or focus groups), followed by the research findings, which will be the main body of the report or paper. The findings should be written as if a story is being told; as such, it is not necessary to have a lengthy discussion section at the end. This is because much of the discussion will take place around the participants’ quotes, such that all that is needed to close the report or paper is a summary, limitations of the research, and the implications that the research has for practice. As stated earlier, it is not the intention of qualitative research to allow the findings to be generalized, and therefore this is not, in itself, a limitation.

Planning out the way that findings are to be presented is helpful. It is useful to insert the headings of the sections (the themes) and then make a note of the codes that exemplify the thoughts and feelings of your participants. It is generally advisable to put in the quotations that you want to use for each theme, using each quotation only once. After all this is done, the telling of the story can begin as you give your voice to the experiences of the participants, writing around their quotations. Do not be afraid to draw assumptions from the participants’ narratives, as this is necessary to give an in-depth account of the phenomena in question. Discuss these assumptions, drawing on your participants’ words to support you as you move from one code to another and from one theme to the next. Finally, as appropriate, it is possible to include examples from literature or policy documents that add support for your findings. As an exercise, you may wish to code and theme the sample excerpt in Appendix 1 and tell the participant’s story in your own way. Further reading about “doing” qualitative research can be found at the end of this paper.

CONCLUSIONS

Qualitative research can help researchers to access the thoughts and feelings of research participants, which can enable development of an understanding of the meaning that people ascribe to their experiences. It can be used in pharmacy practice research to explore how patients feel about their health and their treatment. Qualitative research has been used by pharmacists to explore a variety of questions and problems (see the “Further Reading” section for examples). An understanding of these issues can help pharmacists and other health care professionals to tailor health care to match the individual needs of patients and to develop a concordant relationship. Doing qualitative research is not easy and may require a complete rethink of how research is conducted, particularly for researchers who are more familiar with quantitative approaches. There are many ways of conducting qualitative research, and this paper has covered some of the practical issues regarding data collection, analysis, and management. Further reading around the subject will be essential to truly understand this method of accessing people’s thoughts and feelings to enable researchers to tell participants’ stories.

Appendix 1. Excerpt from a sample transcript

The participant (age late 50s) had suffered from a chronic mental health illness for 30 years. The participant had become a “revolving door patient,” someone who is frequently in and out of hospital. As the participant talked about past experiences, the researcher asked:

  • What was treatment like 30 years ago?
  • Umm—well it was pretty much they could do what they wanted with you because I was put into the er, the er kind of system er, I was just on
  • endless section threes.
  • Really…
  • But what I didn’t realize until later was that if you haven’t actually posed a threat to someone or yourself they can’t really do that but I didn’t know
  • that. So wh-when I first went into hospital they put me on the forensic ward ’cause they said, “We don’t think you’ll stay here we think you’ll just
  • run-run away.” So they put me then onto the acute admissions ward and – er – I can remember one of the first things I recall when I got onto that
  • ward was sitting down with a er a Dr XXX. He had a book this thick [gestures] and on each page it was like three questions and he went through
  • all these questions and I answered all these questions. So we’re there for I don’t maybe two hours doing all that and he asked me he said “well
  • when did somebody tell you then that you have schizophrenia” I said “well nobody’s told me that” so he seemed very surprised but nobody had
  • actually [pause] whe-when I first went up there under police escort erm the senior kind of consultants people I’d been to where I was staying and
  • ermm so er [pause] I . . . the, I can remember the very first night that I was there and given this injection in this muscle here [gestures] and just
  • having dreadful side effects the next day I woke up [pause]
  • . . . and I suffered that akathesia I swear to you, every minute of every day for about 20 years.
  • Oh how awful.
  • And that side of it just makes life impossible so the care on the wards [pause] umm I don’t know it’s kind of, it’s kind of hard to put into words
  • [pause]. Because I’m not saying they were sort of like not friendly or interested but then nobody ever seemed to want to talk about your life [pause]
  • nobody asked me any questions about my life. The only questions that came into was they asked me if I’d be a volunteer for these student exams
  • and things and I said “yeah” so all the questions were like “oh what jobs have you done,” er about your relationships and things and er but
  • nobody actually sat down and had a talk and showed some interest in you as a person you were just there basically [pause] um labelled and you
  • know there was there was [pause] but umm [pause] yeah . . .

This article is the 10th in the CJHP Research Primer Series, an initiative of the CJHP Editorial Board and the CSHP Research Committee. The planned 2-year series is intended to appeal to relatively inexperienced researchers, with the goal of building research capacity among practising pharmacists. The articles, presenting simple but rigorous guidance to encourage and support novice researchers, are being solicited from authors with appropriate expertise.

Previous articles in this series:

Bond CM. The research jigsaw: how to get started. Can J Hosp Pharm. 2014;67(1):28–30.

Tully MP. Research: articulating questions, generating hypotheses, and choosing study designs. Can J Hosp Pharm. 2014;67(1):31–4.

Loewen P. Ethical issues in pharmacy practice research: an introductory guide. Can J Hosp Pharm. 2014;67(2):133–7.

Tsuyuki RT. Designing pharmacy practice research trials. Can J Hosp Pharm. 2014;67(3):226–9.

Bresee LC. An introduction to developing surveys for pharmacy practice research. Can J Hosp Pharm. 2014;67(4):286–91.

Gamble JM. An introduction to the fundamentals of cohort and case–control studies. Can J Hosp Pharm. 2014;67(5):366–72.

Austin Z, Sutton J. Qualitative research: getting started. Can J Hosp Pharm. 2014;67(6):436–40.

Houle S. An introduction to the fundamentals of randomized controlled trials in pharmacy research. Can J Hosp Pharm. 2014;68(1):28–32.

Charrois TL. Systematic reviews: What do you need to know to get started? Can J Hosp Pharm. 2014;68(2):144–8.

Competing interests: None declared.

Further Reading

Examples of qualitative research in pharmacy practice.

  • Farrell B, Pottie K, Woodend K, Yao V, Dolovich L, Kennie N, et al. Shifts in expectations: evaluating physicians’ perceptions as pharmacists integrated into family practice. J Interprof Care. 2010;24(1):80–9.
  • Gregory P, Austin Z. Postgraduation employment experiences of new pharmacists in Ontario in 2012–2013. Can Pharm J. 2014;147(5):290–9.
  • Marks PZ, Jennings B, Farrell B, Kennie-Kaulbach N, Jorgenson D, Pearson-Sharpe J, et al. “I gained a skill and a change in attitude”: a case study describing how an online continuing professional education course for pharmacists supported achievement of its transfer to practice outcomes. Can J Univ Contin Educ. 2014;40(2):1–18.
  • Nair KM, Dolovich L, Brazil K, Raina P. It’s all about relationships: a qualitative study of health researchers’ perspectives on interdisciplinary research. BMC Health Serv Res. 2008;8:110.
  • Pojskic N, MacKeigan L, Boon H, Austin Z. Initial perceptions of key stakeholders in Ontario regarding independent prescriptive authority for pharmacists. Res Soc Adm Pharm. 2014;10(2):341–54.

Qualitative Research in General

  • Breakwell GM, Hammond S, Fife-Schaw C. Research methods in psychology. Thousand Oaks (CA): Sage Publications; 1995.
  • Given LM. 100 questions (and answers) about qualitative research. Thousand Oaks (CA): Sage Publications; 2015.
  • Miles B, Huberman AM. Qualitative data analysis. Thousand Oaks (CA): Sage Publications; 2009.
  • Patton M. Qualitative research and evaluation methods. Thousand Oaks (CA): Sage Publications; 2002.
  • Willig C. Introducing qualitative research in psychology. Buckingham (UK): Open University Press; 2001.

Group Dynamics in Focus Groups

  • Farnsworth J, Boon B. Analysing group dynamics within the focus group. Qual Res. 2010;10(5):605–24.

Social Constructivism

  • Social constructivism. Berkeley (CA): University of California, Berkeley, Berkeley Graduate Division, Graduate Student Instruction Teaching & Resource Center; [cited 2015 June 4]. Available from: http://gsi.berkeley.edu/gsi-guide-contents/learning-theory-research/social-constructivism/

Mixed Methods

  • Creswell J. Research design: qualitative, quantitative, and mixed methods approaches. Thousand Oaks (CA): Sage Publications; 2009.

Collecting Qualitative Data

  • Arksey H, Knight P. Interviewing for social scientists: an introductory resource with examples. Thousand Oaks (CA): Sage Publications; 1999.
  • Guest G, Namey EE, Mitchell ML. Collecting qualitative data: a field manual for applied research. Thousand Oaks (CA): Sage Publications; 2013.

Constructivist Grounded Theory

  • Charmaz K. Grounded theory: objectivist and constructivist methods. In: Denzin N, Lincoln Y, editors. Handbook of qualitative research. 2nd ed. Thousand Oaks (CA): Sage Publications; 2000. pp. 509–35.
